Use Application Insights to diagnose conversations in Dynamics 365 Customer Service

This article is contributed. See the original author and article here.

Every contact center wants to maintain system health with minimal usability disruptions to offer a delightful and seamless customer experience. Now, contact center managers can use Application Insights to get details about customer conversations and solve problems more easily. 

Application Insights, an extension of Azure Monitor, provides greater visibility into conversation-based operational telemetry in Dynamics 365 Customer Service. This helps contact center managers keep track of the application’s health across the full conversation lifecycle.  Metrics are available starting with initiation, virtual agent engagement, routing, and assignment, through to resolution. Application Insights tracks volumes, latency, scenario success, failures, and trends at scale. In addition to facilitating proactive system monitoring, it empowers developers and IT professionals to easily identify and diagnose problematic conversations. From there, they can self-remediate where applicable or get swift support.

Connect to Application Insights

This capability enables customers to establish connectivity between their Dynamics 365 Customer Service environment and their Application Insights instance. They can then subscribe to system telemetry for a core set of conversation lifecycle events across the channels they use. When these logs are available in Application Insights, users can combine them with additional data sets to build custom dashboards.

Enable Application Insights to get conversation lifecycle logs for your organization from the Power Platform admin center.
Monitor conversation telemetry with ease and track performance through Application Insights.
Create your own custom monitoring dashboards with Application Insights and other data sets.
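
As a rough illustration of that last point, the sketch below pulls the conversation lifecycle logs into Python with the azure-monitor-query SDK (for a workspace-based Application Insights resource) so they can be joined with other data sets. The workspace ID is a placeholder, and the table and KQL shown are assumptions to adapt to the events your environment actually emits.

```python
# Minimal sketch: query conversation lifecycle telemetry from Application Insights
# (workspace-based) so it can be combined with other data sets.
# Assumptions: the workspace ID is a placeholder, and the table/columns in the KQL
# (customEvents, name) are illustrative; adjust them to the events you actually see.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

KQL = """
customEvents
| where timestamp > ago(4h)
| summarize total = count() by name
| order by total desc
"""

response = client.query_workspace(
    workspace_id="<your-log-analytics-workspace-id>",  # placeholder
    query=KQL,
    timespan=timedelta(hours=4),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```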

Application Insights in action 

Contoso Clothing, a retail giant in apparel, has recently launched their online shopping experience. With the approaching holiday season, they anticipate high volumes. Their workforce is prepared to provide a satisfying customer service experience using Dynamics 365 Customer Service. 

Tim is a supervisor for Contoso Clothing's customer service division. He is responsible for managing their live chat queues and keeping them running smoothly. On his monitoring dashboard, Tim notices a sharp increase in the conversation backlog, leading to longer wait times. He can see that his customer service representatives are busy with ongoing conversations and unable to accept new chats, which is driving up wait times and lowering customer satisfaction. Yet the overall conversation volume is well within his team's capacity, so something doesn't seem right to him.
 
He highlights this to Kaylee, an IT professional on his team. Kaylee has recently enabled Application Insights for Contoso Clothing's Dynamics 365 Customer Service environment to access conversation telemetry. This has been helping her monitor operational health as well as troubleshoot issues in real time. Based on Tim's observation, she pulls up telemetry for all live chat conversations from the last few hours. For each conversation, Application Insights logs business events along with their success or failure status, duration, and associated metadata.

While looking through anomalies and failures, she notices a high number of 'customer disconnected' events being logged repeatedly. Tracing these conversations, Tim and Kaylee determine that multiple chat conversations are being created for the same customer within a short span of time. They see that customers have to reinitiate a chat every time they navigate away from the app and come back to continue the conversation.

Tim realizes the need to give customers the option to reconnect to a previous chat session. Being a business admin himself, he can enable this through the Customer Service admin center in a few clicks. Using Application Insights data, Kaylee can set up automatic alerts for this scenario in case the problem happens again. Over the next few days, Tim and Kaylee see live chat wait times go down and customer satisfaction improve. They not only proactively detected the problem early but were also equipped to take the necessary steps to fix it and meet their customers' needs.

Learn more

To learn more, refer to Conversation diagnostics in Azure Application Insights (preview) – Power Platform | Microsoft Learn 


Support for legacy TLS protocols and cipher suites in Azure Offerings

This article is contributed. See the original author and article here.

Overview


 


Microsoft Azure services already operate in TLS 1.2-only mode. There are a limited number of services that still allow TLS 1.0 and 1.1 to support customers with legacy needs.  For customers who use services that still support legacy protocol versions and must meet compliance requirements, we have provided instructions on how to ensure legacy protocols and cipher suites are not negotiated. For example, HDInsight provides the minSupportedTlsVersion property as part of the Resource Manager template.  This property supports three values: “1.0”, “1.1” and “1.2”, which correspond to TLS 1.0+, TLS 1.1+ and TLS 1.2+ respectively.  Customers can set the allowed minimum version for their HDInsight resource.


 


This document presents the latest information on TLS protocols and cipher suite support with links to relevant documentation for Azure Offerings.  For offerings that still allow legacy protocols to support customers with legacy needs, TLS 1.2 is still preferred.  The documentation links explain what needs to be done to ensure TLS 1.2 is preferred in all scenarios.
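
The guidance above is service-side. Client applications can also simply refuse to negotiate legacy protocol versions, regardless of what a given service still allows. The following is a minimal, hedged sketch in Python (the same idea exists in most TLS stacks); the host name is a placeholder.

```python
# Minimal sketch: refuse to negotiate anything below TLS 1.2 from the client side,
# regardless of what the remote service would otherwise allow.
# The host name is a placeholder.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1

host = "example.azurewebsites.net"  # placeholder
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. "TLSv1.2" or "TLSv1.3"
        print("Negotiated cipher:", tls.cipher()[0])
```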


 


Documentation Links

API Management: https://docs.microsoft.com/azure/api-management/api-management-howto-manage-protocols-ciphers
App Service: https://docs.microsoft.com/azure/app-service/configure-ssl-bindings | https://docs.microsoft.com/azure/app-service/deploy-staging-slots
Application Gateway: https://docs.microsoft.com/azure/application-gateway/application-gateway-ssl-policy-overview | https://docs.microsoft.com/azure/application-gateway/application-gateway-configure-ssl-policy-powershell
Azure App Service – Azure Arc: https://docs.microsoft.com/azure/app-service/configure-ssl-bindings | https://docs.microsoft.com/azure/app-service/deploy-staging-slots
Azure App Service Static Web Apps: https://docs.microsoft.com/azure/app-service/configure-ssl-bindings | https://docs.microsoft.com/azure/app-service/deploy-staging-slots
Azure Cognitive Search: https://docs.microsoft.com/azure/search/search-security-overview
Azure Cosmos DB: https://devblogs.microsoft.com/cosmosdb/tls-1-2-enforcement/
Azure Database for MariaDB: https://docs.microsoft.com/azure/mariadb/concepts-ssl-connection-security#tls-enforcement-in-azure-database-for-mariadb | https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#minimal-tls-version
Azure Database for MySQL: https://docs.microsoft.com/azure/mysql/concepts-ssl-connection-security#tls-enforcement-in-azure-database-for-mysql | https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#minimal-tls-version
Azure Database for PostgreSQL: Single Server – https://docs.microsoft.com/azure/postgresql/concepts-ssl-connection-security | Flexible Server – https://docs.microsoft.com/azure/postgresql/flexible-server/how-to-connect-tls-ssl | https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#minimal-tls-version
Azure Front Door / Azure Front Door X: https://docs.microsoft.com/azure/frontdoor/standard-premium/faq
Azure SQL: https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#minimal-tls-version
Azure SQL Database Edge: https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#minimal-tls-version
Azure Synapse Analytics: https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#minimal-tls-version
Azure Web Application Firewall: https://docs.microsoft.com/azure/application-gateway/application-gateway-ssl-policy-overview | https://docs.microsoft.com/azure/application-gateway/application-gateway-configure-ssl-policy-powershell | https://docs.microsoft.com/azure/frontdoor/standard-premium/faq
Cloud Services: https://docs.microsoft.com/azure/cloud-services/applications-dont-support-tls-1-2
Common Data Service: https://docs.microsoft.com/power-platform/admin/server-cipher-tls-requirements | https://docs.microsoft.com/power-platform/important-changes-coming#tls-rsa-cipher-suites-are-deprecated
Dynamics 365 AI Customer Insights: https://docs.microsoft.com/azure/search/search-security-overview | https://docs.microsoft.com/powerapps/maker/portals/faq | https://azure.microsoft.com/updates/power-bi-support-for-transportlayer-security/ | https://docs.microsoft.com/azure/hdinsight/transport-layer-security | https://devblogs.microsoft.com/cosmosdb/tls-1-2-enforcement/ | https://docs.microsoft.com/azure/storage/common/transport-layer-security-configure-minimum-version?tabs=portal | https://docs.microsoft.com/security/benchmark/azure/baselines/service-fabric-security-baseline#44-encrypt-all-sensitive-information-in-transit | https://github.com/Azure/Service-Fabric-Troubleshooting-Guides/blob/master/Security/TLS%20Configuration.md
Dynamics 365 Fraud Protection: https://azure.microsoft.com/updates/power-bi-support-for-transportlayer-security/
Event Grid: https://docs.microsoft.com/security/benchmark/azure/baselines/event-grid-security-baseline
Event Hubs: https://support.microsoft.com/topic/add-support-for-tls-1-1-and-tls-1-2-on-service-bus-for-windows-server-1-1-92a6cf2c-1b3f-1ea6-185a-b9ced2840fb6
Functions: https://docs.microsoft.com/azure/app-service/configure-ssl-bindings | https://docs.microsoft.com/azure/app-service/deploy-staging-slots
HDInsight: https://docs.microsoft.com/azure/hdinsight/transport-layer-security
IoT Hub: https://docs.microsoft.com/azure/iot-hub/iot-hub-tls-support
Key Vault: https://docs.microsoft.com/azure/key-vault/general/security-features#tls-and-https
Logic Apps: https://docs.microsoft.com/azure/logic-apps/logic-apps-securing-a-logic-app?tabs=azure-portal
Microsoft Azure Managed Instance for Apache Cassandra: https://devblogs.microsoft.com/cosmosdb/tls-1-2-enforcement/
Microsoft Forms Pro: https://docs.microsoft.com/power-platform/important-changes-coming#tls-rsa-cipher-suites-are-deprecated | https://docs.microsoft.com/power-platform/admin/server-cipher-tls-requirements
Notification Hubs: https://support.microsoft.com/topic/add-support-for-tls-1-1-and-tls-1-2-on-service-bus-for-windows-server-1-1-92a6cf2c-1b3f-1ea6-185a-b9ced2840fb6 | https://docs.microsoft.com/azure/notification-hubs/notification-hubs-tls12
Power Apps: https://docs.microsoft.com/powerapps/maker/portals/faq | https://social.technet.microsoft.com/Forums/92811d44-1165-4da2-96e7-20dc99bdf718/can-power-query-be-updated-to-use-tls-version-12?forum=powerquery | https://azure.microsoft.com/updates/power-bi-support-for-transportlayer-security/ | https://docs.microsoft.com/azure/api-management/api-management-howto-manage-protocols-ciphers
Power Automate: https://docs.microsoft.com/power-platform/admin/wp-compliance-data-privacy#data-protection | https://docs.microsoft.com/powerapps/maker/portals/faq | https://social.technet.microsoft.com/Forums/92811d44-1165-4da2-96e7-20dc99bdf718/can-power-query-be-updated-to-use-tls-version-12?forum=powerquery | https://azure.microsoft.com/updates/power-bi-support-for-transportlayer-security/ | https://docs.microsoft.com/azure/api-management/api-management-howto-manage-protocols-ciphers | https://docs.microsoft.com/azure/logic-apps/logic-apps-securing-a-logic-app?tabs=azure-portal
Power BI: https://azure.microsoft.com/updates/power-bi-support-for-transportlayer-security/
Power BI Embedded: https://azure.microsoft.com/updates/power-bi-support-for-transportlayer-security/
Service Bus: https://support.microsoft.com/topic/add-support-for-tls-1-1-and-tls-1-2-on-service-bus-for-windows-server-1-1-92a6cf2c-1b3f-1ea6-185a-b9ced2840fb6
Service Fabric: https://docs.microsoft.com/security/benchmark/azure/baselines/service-fabric-security-baseline#44-encrypt-all-sensitive-information-in-transit | https://github.com/Azure/Service-Fabric-Troubleshooting-Guides/blob/master/Security/TLS%20Configuration.md
SQL Server Stretch Database: https://docs.microsoft.com/azure/azure-sql/database/connectivity-settings#minimal-tls-version
Storage: https://docs.microsoft.com/azure/storage/common/transport-layer-security-configure-minimum-version?tabs=portal | https://docs.microsoft.com/azure/import-export/ | https://azure.microsoft.com/updates/afstlssupport/
VPN Gateway: https://docs.microsoft.com/azure/vpn-gateway/vpn-gateway-vpn-faq#tls1

FAQ (Frequently Asked Questions)


 


What is meant by legacy protocols?


Legacy protocols are defined as anything lower than TLS 1.2. 


 


What is meant by legacy cipher suites?


Legacy cipher suites are suites that were considered safe in the past but are no longer strong enough or do not provide perfect forward secrecy (PFS). While these ciphers are considered legacy, they are still supported for some backward-compatibility customer scenarios.


 


What is the Microsoft preferred cipher suite order?


For legacy purposes, Windows supports a large list of ciphers by default. For all Microsoft Windows Server versions 2016 and higher, the following cipher suites are the preferred set, as defined by Microsoft's security policy. Note that Microsoft Windows uses the IANA (Internet Assigned Numbers Authority) cipher suite notation; this link shows the IANA to OpenSSL mapping.


 


TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384


TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256


TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384


TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256


TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384


TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256


TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384


TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256


 


Why is ChaCha20-Poly1305  not included in the list of approved ciphers?


ChaCha20-Poly1305 cipher suites are supported by Windows and can be enabled in scenarios where customers control the OS.


 


Why are CBC ciphers included in the Microsoft preferred cipher suite order?


The default Windows image includes CBC ciphers. There are no known practical vulnerabilities in these CBC mode cipher suites, and mitigations are in place for known CBC side-channel attacks.


 


Microsoft’s preferred cipher suite order for Windows includes 128-bit ciphers. Is there an increased risk with using these ciphers?


AES-128 does not introduce any practical risk, but different customers may have different preferences with regard to the minimum key lengths they are willing to negotiate. Our preferred order prioritizes AES-256 over AES-128. In addition, customers can adjust the order using the TLS cmdlets. There is also a group policy option detailed in this article: Prioritizing Schannel Cipher Suites – Win32 apps | Microsoft Docs.
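
The TLS cmdlets and group policy above adjust Schannel on Windows. As an application-level illustration of the same idea, the hedged sketch below restricts a Python (OpenSSL-based) client to the ECDHE AES-GCM suites from the preferred list, using the OpenSSL names that the IANA-to-OpenSSL mapping mentioned earlier resolves to. The host is a placeholder and must itself offer at least one of these suites.

```python
# Minimal sketch: restrict a client to the ECDHE + AES-GCM suites from the preferred
# list above, using OpenSSL cipher names (the IANA names map to these).
# The host is a placeholder and must support at least one of these suites.
import socket
import ssl

PREFERRED = ":".join([
    "ECDHE-ECDSA-AES256-GCM-SHA384",  # TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    "ECDHE-ECDSA-AES128-GCM-SHA256",  # TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    "ECDHE-RSA-AES256-GCM-SHA384",    # TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    "ECDHE-RSA-AES128-GCM-SHA256",    # TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
])

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.set_ciphers(PREFERRED)  # applies to TLS 1.2; TLS 1.3 suites are managed separately

host = "example.azurewebsites.net"  # placeholder
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated cipher:", tls.cipher())
```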


 


Thanks for reading!

Modernize customer support with Copilot in Dynamics 365 Customer Service

This article is contributed. See the original author and article here.

The year 2023 has ushered in dramatic innovations in AI, particularly regarding how businesses interact with customers. Every day, more organizations are discovering how they can empower agents to provide faster, more personalized service using next-generation AI.  

We’re excited to announce three Microsoft Copilot features now generally available in Microsoft Dynamics 365 Customer Service in October, along with the new summarization feature that was made generally available in September. Copilot provides real-time, AI-powered assistance to help customer support agents solve issues faster by relieving them from mundane tasks—such as searching and note-taking—and freeing their time for more high-value interactions with customers. Contact center managers can also use Copilot analytics to view Copilot usage and better understand how next-generation AI impacts the business. The following features are generally available to Dynamics 365 Customer Service users:

  1. Ask Copilot a question.
  2. Create intelligent email responses.
  3. Understand Copilot usage in your organization.
  4. Summarize cases and conversations with Copilot (released in September 2023).

Copilot uses knowledge and web sources that your organization specifies, and your organizational and customer data are never used to train public models.


Copilot in Microsoft Dynamics 365 and Power Platform

Copilot features are empowering marketing, sales, and customer service teams in new ways.

1. Ask Copilot a question

Whether they’re responding to customers using the phone, chat, or social media, agents can use Copilot to harness knowledge across the organization to provide quick, informative answers, similar to having an experienced coworker available to chat all day, every day. When an administrator enables the Copilot pane in the Dynamics 365 Customer Service workspace or custom apps, agents can use natural language to ask questions and find answers. Copilot searches all company resources that administrators have made available and returns an answer. Agents can check the sources that Copilot used to create a response, and they can rate responses as helpful or unhelpful. Contact center managers can then view agent feedback to see how their agents are interacting with Copilot and identify areas where sources may need to be removed or updated.

The ability to ask Copilot questions can save agents valuable time. Microsoft recently completed a study that evaluated the impact of Copilot in Dynamics 365 Customer Service on agent productivity for Microsoft Support agents providing customer care across the commercial business. They found that agents can quickly look up answers to high volume requests and avoid lengthy investigations of previously documented procedures. One of our lines of business with these characteristics has realized a 22 percent reduction in time to close cases using Copilot.

2. Create intelligent email responses

Agents who receive customer requests via email can spend valuable time researching and writing the perfect response. Now, agents can use Copilot to draft emails by selecting from predefined prompts that include common support activities such as “suggest a call,” “request more information,” “empathize with feedback,” or “resolve the customer’s problem.” Agents can also provide their own custom prompts for more complex issues. Copilot uses the context of the conversation along with case notes and the organization’s knowledge to produce a relevant, personalized email. The agent can edit and modify the text further, and then send the response to help resolve the issue quickly.  

3. Understand Copilot usage in your organization

It’s important for service managers to measure the impact that generative AI-powered Copilot has on their operations and agent experience. Dynamics 365 Customer Service historical analytics reports provide a comprehensive view of Copilot-specific metrics and insights. Managers can see how often agents use Copilot to respond to customers, the number of agent/customer interactions that involved Copilot, the duration of conversations where Copilot plays a role, and more. They can also see the percentage of cases that agents resolved with the help of Copilot. Agents can also rate Copilot responses so managers have a better understanding of how Copilot is helping to improve customer service and the overall impact on their organization.

4. Summarize cases and conversations with Copilot

Generally available since September, the ability to summarize cases and complex, lengthy conversations using Copilot can save valuable time for agents across channels. Rather than spending hours to review notes as they wrap up a case, agents can create a case summary with a single click that highlights key information about the case, such as customer, case title, case type, subject, case description, product, and priority. In addition, agents can rely on Copilot to generate conversation summaries that capture key information such as the customer’s name, the issue or request, the steps taken so far, the case status, and any relevant facts or data. Summaries also highlight any sentiment expressed by the customer or the agent, plus action items or next steps. Generating conversation summaries on the fly is especially useful when an agent must hand off a call to another agent and quickly bring them up to speed while the customer is still on the line. This ability to connect customers with experts in complex, high-touch scenarios is helping to transform the customer service experience, reduce operational costs, and ensure happier customers.

Next-generation AI that is ready for enterprises

Microsoft Azure OpenAI Service offers a range of privacy features, including data encryption and secure storage. It also allows users to control access to their data and provides detailed auditing and monitoring capabilities. Microsoft Dynamics 365 is built on Azure OpenAI, so enterprises can rest assured that it offers the same level of data privacy and protection.

AI solutions built responsibly

We are committed to creating responsible AI by design. Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We are putting those principles into practice across the company to develop and deploy AI that will have a positive impact on society.

Learn more and try Dynamics 365 Customer Service

Learn more about how to elevate your service with AI and enable Copilot features for your support agents.

Try Dynamics 365 Customer Service for free.


Unlocking the Power of Spatial Data: Azure Cache for Redis Geospatial Indexing

This article is contributed. See the original author and article here.

In the digital age, spatial data management and analysis have become integral to a wide array of technical applications. From real-time tracking to location-based services and geospatial analytics, efficient handling of spatial data is pivotal in delivering high-performance solutions.


 


Azure Cache for Redis, a versatile and powerful in-memory data store, rises to this challenge with its Geospatial Indexes feature. Join us in this exploration to learn how Redis’s Geospatial Indexes are transforming the way we manage and query spatial data, catering to the needs of students, startups, AI entrepreneurs, and AI developers.


 


Introduction to Redis Geospatial Indexes


Azure Cache for Redis geo-positioning (geospatial) indexes provide an efficient and robust approach to storing and querying spatial data. This feature empowers developers to associate geographic coordinates (latitude and longitude) with a unique identifier in Redis, enabling seamless spatial data storage and retrieval. With geospatial indexes, developers can effortlessly perform a variety of spatial queries, including locating objects within a specific radius, calculating distances between objects, and much more.


 


In Azure Cache for Redis, geospatial data is represented using sorted sets, where each element in the set is associated with a geospatial coordinate. These coordinates are typically represented as longitude and latitude pairs and can be stored in Redis using the GEOADD command. This command enables you to add one or multiple elements, each identified by a unique member name, to a specified geospatial key.


 


If you’re eager to explore the Azure Cache for Redis for Geo-positioning, be sure to tune in to this Open at Microsoft episode hosted by Ricky Diep, Product Marketing Manager at Microsoft and Roberto Perez, Senior Partner Solutions Architect at Redis.


 


Spatial Queries with Redis


Azure Cache for Redis equips developers with a set of commands tailored for spatial queries on geospatial data. Some of the key commands include:


GEOADD: Adds a location(s) to the geospatial set.
GEODIST: Retrieves the distance between two members.
GEOSEARCH: Retrieves location(s) by radius or by a defined geographical box.
GEOPOS: Retrieves the position of one or more members in a geospatial set.


These commands empower developers to efficiently perform spatial computations and extract valuable insights from their geospatial data.
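
As a concrete illustration, here is a minimal sketch of these commands using the redis-py client against an Azure Cache for Redis instance. The host name, access key, and the sample coordinates and member names are placeholders.

```python
# Minimal sketch of the GEO commands above using redis-py against Azure Cache for Redis.
# The host name, access key, and sample coordinates/member names are placeholders.
import redis

r = redis.Redis(
    host="<your-cache-name>.redis.cache.windows.net",  # placeholder
    port=6380,
    password="<your-access-key>",                      # placeholder
    ssl=True,
)

# GEOADD: store (longitude, latitude, member) entries in a geospatial sorted set.
r.geoadd("stores", (13.361389, 38.115556, "store:palermo",
                    15.087269, 37.502669, "store:catania"))

# GEODIST: distance between two members, here in kilometers.
print(r.geodist("stores", "store:palermo", "store:catania", unit="km"))

# GEOSEARCH: members within a 200 km radius of a point (requires Redis 6.2+).
print(r.geosearch("stores", longitude=15.0, latitude=37.0, radius=200, unit="km"))

# GEOPOS: the stored coordinates of a member.
print(r.geopos("stores", "store:palermo"))
```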


 


Benefits of Redis Geospatial Indexes


In-Memory Performance: Azure Cache for Redis, as an in-memory database, delivers exceptional read and write speeds for geospatial data. This makes it an excellent choice for real-time applications and time-critical processes.


Flexibility and Scalability: Redis Geospatial Indexes can handle large-scale geospatial datasets with ease, offering consistent performance even as the dataset grows.


Simple Integration: Azure Cache for Redis enjoys wide support across various programming languages and frameworks, making it easy to integrate geospatial functionalities into existing applications.


High Precision and Accuracy: Redis’s geospatial computations ensure high precision and accuracy in distance calculations.


 


Common Use Cases


Redis Geospatial Indexes find applications in a diverse range of domains, including:


Location-Based Services (LBS): Implementing location tracking and proximity-based services.
Geospatial Analytics: Analyzing location data to make informed business decisions, such as optimizing delivery routes or targeting specific demographics.
Asset Tracking: Efficiently managing and tracking assets (vehicles, shipments, etc.) in real-time.
Social Networking: Implementing features like finding nearby users or suggesting points of interest based on location.
Gaming Applications: In location-based games, Redis can be used to store and retrieve the positions of game elements, players, or events, enabling dynamic gameplay based on real-world locations.
Geofencing: Redis can help create geofences, which are virtual boundaries around specific geographical areas. By storing these geofences and the locations of mobile users or objects, you can detect when a user enters or exits a specific region and trigger corresponding actions.


 


For use cases where only geospatial data is needed, users can rely on the core geospatial (GEO) commands on a sorted set. However, if use cases require storing more than just geospatial data, they can opt for a combination of RedisJSON + RediSearch or Hash + RediSearch, both available in the Enterprise tiers, to accomplish real-time searches.
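
For the plain core-Redis route (no modules), one common pattern is to keep coordinates in the geospatial sorted set and richer attributes in a hash keyed by the same member ID. The sketch below assumes the placeholder connection and the "stores" set from the earlier example; key names and fields are illustrative.

```python
# Minimal sketch of the plain-Redis pattern (no modules): coordinates live in a
# geospatial sorted set, richer attributes live in a hash keyed by the same member ID.
# Connection details, key names, and fields are placeholders/illustrative.
import redis

r = redis.Redis(host="<your-cache-name>.redis.cache.windows.net", port=6380,
                password="<your-access-key>", ssl=True, decode_responses=True)

member = "store:palermo"
r.geoadd("stores", (13.361389, 38.115556, member))

# Per-member attributes in a hash that shares the member ID as its key.
r.hset(member, mapping={"name": "Palermo Store", "hours": "9-18", "rating": "4.6"})

# Find nearby members, then fetch their attributes.
for m in r.geosearch("stores", longitude=15.0, latitude=37.0, radius=300, unit="km"):
    print(m, r.hgetall(m))
```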


 


Conclusion


Redis Geospatial Indexes present a potent and efficient solution for storing, managing, and querying spatial data. By harnessing Azure Cache for Redis’s in-memory performance, versatile commands, and scalability, developers can craft high-performance applications with advanced spatial capabilities. Whether it’s location-based services, geospatial analytics, or real-time tracking, Redis Geospatial Indexes empower students, startups, AI entrepreneurs, and AI developers to unlock the full potential of spatial data processing.


 


Additional Resources


Blazor WebAssembly and Server: Implementing AAD OAuth 2 Delegated Flow with MSAL for Azure DevOps

This article is contributed. See the original author and article here.





Introduction


The healthcare industry is no stranger to complex data management challenges, especially when it comes to securing sensitive information. As technology continues to evolve, healthcare professionals are increasingly turning to modern frameworks like Blazor to streamline operations and improve patient outcomes. However, as with any new technology, there are challenges to overcome. One of the biggest hurdles is implementing delegated OAuth flow, a security measure that allows users to authenticate with delegated permissions. In this blog post, we’ll explore step-by-step how Visual Studio and MSAL tools can accelerate your time to value and abstract away many of the complexities in the OAuth delegated flow for Blazor.





 


Pre-requisites



Setting up the Blazor Web Assembly Project



  1. Open Visual Studio and create a new Blazor Web Assembly project, then provide the name of your project and the local file path to save the solution.

  2. On the Additional Information screen, select the following options:

    1. Framework: .NET 7

    2. Authentication Type: Microsoft Identity Platform

    3. Check the box for ASP.NET Core Hosted

    4. Hit Create to continue



  3. You will now see the Required components window with the dotnet msidentity tool listed. Press Next to continue.

  4. Follow the guided authentication window to authenticate your identity to your target Azure tenant.

    1. This is so that Visual Studio is able to assume your identity to create the AAD application registrations for the Blazor Web Assembly.



  5. Once authenticated, you will see a list of owned applications for the selected tenant. If you have previously configured application registrations, you can select the respective application here. 

    1. For the purposes of this demo, we will create a new application registration for the server.

    2. Once the application is created, select the application you have created.

    3. Hit Next to proceed.



  6. In the next prompt, we will provide information about the target Azure DevOps service. Choose the Add permissions to another API option to let Visual Studio configure the Azure DevOps downstream API.

    1. API URL – Provide your Azure DevOps organization URL (example: https://dev.azure.com/CustomerDemos-JL).

    2. Scopes – set to 499b84ac-1321-427f-aa17-267ca6975798/.default

      1. Note: this value does not change, as it is the unique GUID for Azure DevOps APIs with the default scope.



    3. Hit Next to proceed.



  7. Next, the tool will create a client secret for your newly created app registration. You can choose to save this locally (outside of the project/git scope) or copy it to manage it yourself.


    1. Note: if you choose to not save to a local file, the secret will not be accessible again and you will need to regenerate the secret through the AAD app registration portal.


  8. Afterwards, review the Summary page and selectively decide which components the tool should modify in case you have your own configuration/code already in place.

    1. For this demo, we will keep all boxes selected.

    2. Hit Finish to let the tool configure your project with the Microsoft Identity Platform!




 


Test your Blazor Web Assembly Project’s Microsoft Identity Platform Connectivity


Now that the Blazor Web Assembly project is provisioned, we will quickly test the authentication capabilities with the out-of-the-box seed application.


 



  1. On the Visual Studio window after provisioning is completed, our solution will now have both the Client and Server projects in place

    1. Ensure your Server is set as your Startup Project.

    2. If it isn’t, you can do so by right clicking your Server Project on the Solution Explorer.






  2. Test your OAuth configuration

    1. Run your application locally.

    2. On the web application, press the Log in button in the top right corner to log into your Azure DevOps organization.

    3. Once logged in, you should see a Hello, <your user name>! message.

    4. Getting to this point verifies that you are able to authenticate to Azure Active Directory, but not necessarily Azure DevOps as we have yet to configure any requests to the Azure DevOps REST APIs.
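
For readers who want to see the delegated exchange itself in code, here is a minimal, hedged sketch of the same on-behalf-of flow using MSAL for Python (the article's project uses MSAL.NET, but the idea is identical): the server trades the signed-in user's incoming token for an Azure DevOps token and calls a DevOps REST API. The tenant ID, client ID, client secret, organization name, and incoming user token are placeholders; the scope GUID is the Azure DevOps resource ID configured in step 6 above.

```python
# Minimal sketch (MSAL for Python) of the on-behalf-of exchange the server performs:
# it trades the user's incoming access token for an Azure DevOps token.
# Tenant ID, client ID, client secret, organization, and incoming_user_token are
# placeholders; the scope GUID is the Azure DevOps resource ID from step 6.
import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id="<server-app-registration-client-id>",
    client_credential="<client-secret-created-by-the-tool>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

incoming_user_token = "<access token presented by the Blazor client>"

result = app.acquire_token_on_behalf_of(
    user_assertion=incoming_user_token,
    scopes=["499b84ac-1321-427f-aa17-267ca6975798/.default"],  # Azure DevOps
)

if "access_token" in result:
    projects = requests.get(
        "https://dev.azure.com/<your-organization>/_apis/projects?api-version=7.0",
        headers={"Authorization": f"Bearer {result['access_token']}"},
        timeout=30,
    )
    print(projects.status_code, projects.json())
else:
    print("Token request failed:", result.get("error"), result.get("error_description"))
```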




 


[Alternative Route] AAD App Registration Configuration


If you chose not to use the template-guided method of provisioning your Blazor application with MS identity, there are some steps you must take to ensure your application registrations function properly.


 



  1. Navigate to your tenant’s Active Directory > App registrations.

    1. Note the two application registrations – one for the Server, and another for the Client



  2. Configuring the Server app registration

    1. In order to allow your application to assume the logged-in identity’s access permissions, you must expose the access_as_user API on the application registration.

      1. To do this, select Expose an API on the toolbar and select Add a Scope.

      2. For the Scope Name, ensure you provide access_as_user as well as selecting Admins and users for Who can consent?

      3. Now go to the Authentication blade and select Add a platform to configure your Web platform’s API redirect.

        1. When you deploy to your cloud services, localhost will be replaced by your application’s site name, but redirects will still use the /signin-oidc path by default (this can be configured within your appsettings.json).





    2. On the same page (Expose an API), select Add a client application near the bottom and add your Client app registration’s Application ID to allow your client to call this API.

      1. Save the authorized scopes for your client configuration within your Visual Studio project.





  3. Configuring the Client app registration 


    1. Navigate to the Authentication blade and do the same as in step 2.b but for your client’s callback URL.


  4. Now ensure that both your client and server appsettings.json files in the Web Assembly project mirror your app registrations’ configurations.

    1. Client app settings can be found within the wwwroot directory by default and should mirror the Client app registration’s values.

       



    2. Server app settings can be found at the base of the project tree and should mirror the Server app registration’s values.

       






 


 


 


 


 


 


 


 


 


 


 


Document management in Microsoft Teams using Dynamics 365 Integration – Part 1

This article is contributed. See the original author and article here.

In any business organization, you need to be able to communicate effectively with your clients and prospects, as well as access and manage relevant documents and data in real-time. Microsoft Teams is a powerful tool that enables seamless collaboration and communication among team members, as well as with external parties. But when it comes to file management in Microsoft Teams, there are some nuances that need to be taken care of.


 


With Microsoft Teams being built on the capabilities of SharePoint, whatever data we store in Microsoft Teams is directly stored in SharePoint.


 


But one of the drawbacks of uploading a file in a Teams conversation is that you have no control over where it is stored. It automatically goes into the root folder of the SharePoint site that is linked to the team. If you want to move the file to a different folder, you would have to do it manually, which is time-consuming.


 


Other than this, one of the biggest use cases is of Microsoft chats. Most of the time, file sharing happens on chats. So, the question is, where do the files stored in chats go? Well, they are stored in the user’s ‘OneDrive for Business’ account rather than SharePoint.


An altogether different cloud storage!


 


This makes searching for and retrieving files complicated. Your business loses money when employees waste time looking for files rather than focusing on their core tasks, and when those employees are sales or customer service reps, customer experience suffers as well.


As you can see, one of the drawbacks of using Teams independently for file management is that we have no control over our files.


 


So you then have to manually copy or move all such files to a centralized cloud storage. If you miss even one file, it will come back to bite you at crunch moments.


 


And as your business scales, your organization’s data also scales. And with the above challenges we discussed, your organization can face significant hurdles in communication and collaboration.


 


One solution to the above hurdle is to integrate Microsoft Dynamics 365 and Microsoft Teams and leverage their combined capabilities. Check out this article for a step-by-step guide on integrating Microsoft Teams and Microsoft Dynamics 365.


 


Suppose you are on a client call using Teams, and the client shares some updates on their order as a note attachment or directly in the chat. As you have integrated your CRM and Teams, you can add it against the respective record directly.


 


From within Teams, you can access the ‘Files’ tab of the respective CRM record and store it in the respective SharePoint site. You need to enable SharePoint Integration for this which is natively available. As there is only one SharePoint instance for each tenant, the root site is the same for your Dynamics 365 and Microsoft Teams. So even though the subsites may be different, both Dynamics 365 documents and Microsoft Teams files are stored on the same SharePoint site.




 


This way, all your files will be stored in your SharePoint and that too against their respective records. To retrieve it, you can just go to the record in CRM and access the file without having to manually go back to the conversations/chats.


 


One other way to manage files smartly is Inogic’s Attach2Dynamics, a popular file and storage management app.


 


With Attach2Dynamics, you get a button right on your CRM entity grid (custom or OOB) which lets you manage the cloud (SharePoint) documents right from within your CRM.


 


Download the app from Microsoft Commercial Marketplace and get a 15-day free trial.




 


You can drag and drop files and folders from your system directly into your SharePoint. Create new folders in your SharePoint from within CRM, rename, upload, delete, do a deep search, download a sharable copy, email directly as an attachment or as a link, and much more. You get a plethora of options with this advanced UI.




 


In this way, you can access all your files and folders stored in SharePoint right from your CRM, make updates to them as required, and do a lot more without toggling between different tabs.


 


Other than Teams, you get most attachments through email. This can also create a gap in document retrieval, as some documents are in SharePoint and some are in your CRM emails. Also, over time, this email content and these attachments consume a lot of your Dynamics 365 storage. This gives rise to CRM performance issues, and buying additional Dynamics 365 storage is also costly.


 


So, to truly centralize your file storage and take your file management to the next level, move these email attachments to the same SharePoint location as well. And how great would it be if you could automate this process?


 


With the help of Attach2Dynamics, all the email attachments that you see in your CRM timeline will get automatically moved/copied to your respective SharePoint folder. You would get a direct hyperlink to the document residing in SharePoint for ease of access as well.




 


How convenient this real-time migration of documents to SharePoint is!


 


Get the app today!


 


No more client files scattered across different locations, be it Teams files, email files, or CRM files.


 


Note: This app only works in Teams when integrated with Dynamics 365. For more details on how to best use it along with Teams integration, reach out to Inogic at crm@inogic.com.


 


It does not end with just centralizing file storage through Teams and Microsoft Dynamics 365 integration; we also need to look at custom folder structures and the security aspects of these files, which we will cover in our next post. Keep checking this space for it.


Streamline your document management today!


 

Operationalizing Microsoft Security Copilot to Reinvent SOC Productivity

This article is contributed. See the original author and article here.

In a Security Operations Center (SOC), time to resolve and mitigate both alerts and incidents is of the highest importance. This time can mean the difference between controlled risk and impact, and large risk and impact. While our core products detect and respond at machine speed, our ongoing mission is to upskill SOC analysts and empower them to be more efficient where they’re needed to engage. To bridge this gap, we are bringing Security Copilot into our industry-leading XDR platform, Microsoft 365 Defender, which is like adding the ultimate expert SOC analyst to your team, both raising the skill bar and increasing efficiency and autonomy. In addition to skilling, we know that incident volumes continue to grow as tools get better at detecting, while SOC resources remain scarce, so providing this expert assistance and helping the SOC with efficiency are equally important in alleviating these issues.


 


Security Copilot provides expert guidance and helps analysts accelerate investigations to outmaneuver adversaries at scale. It is important to recognize that not all generative AI is the same: Security Copilot combines OpenAI with Microsoft’s security-specific model, trained on the largest breadth and diversity of security signals in the industry (over 65 trillion, to be precise). Security Copilot is built on the industry-transforming Azure OpenAI Service and seamlessly embedded into the Microsoft 365 Defender analyst workflows for an intuitive experience.



Streamline SOC workflows


To work through an incident end to end, an analyst must quickly understand what happened in the environment and assess both the risk posed by the issue and the urgency required for remediation. After understanding what happened, the analyst must provide a complete response to ensure the threat is fully mitigated. Upon completion of the mitigation process, they are required to document and close the incident.



When the SOC analyst clicks into an Incident in their queue in the Microsoft 365 Defender portal, they will see a Security Copilot-generated summary of the incident. This summary provides a quick, easy-to-read overview of the story of the attack and the most important aspects of the incident. Our goal is to help reduce the time to ramp up on what’s happening in the SOC’s environment. By leveraging the incident summary, SOC analysts no longer need to perform an investigation to determine what is most urgent in their environment, making it easy to prioritize, understand impact, and identify the required next steps, and reducing time to respond.


Figure 1: Microsoft 365 Defender portal showing the Security Copilot-generated incident summary within the Incident page


After verifying the impact and priority of this incident, the analyst begins to review IOCs. The analyst knows that this type of attack usually starts with a targeted attack on an employee and ends with that employee’s credentials being available to the threat actor for the purpose of finding and using financial data. Once the user is compromised, the threat actor can live off the land in the organization and has legitimate access to anything that user would normally have access to, such as the company’s financial account information. This type of compromise must be handled with urgency, as the actor will be difficult to track after initial entry and could continue to pivot and compromise other users throughout the organization. If the targeted organization handles this risk with urgency, they can stop the actor before additional assets, accounts, or entities are accessed.



The SOC analyst working on this incident is on the incident summary page and sees that the attack began with a password spray: the threat actor attempting to access many users with a list of passwords from an anonymous IP address. This must be verified by the analyst to determine the urgency and priority of triage. After determining the urgency of this incident, the analyst begins their investigation. In the incident summary, the analyst sees many indicators of compromise, including multiple IPs, cloud applications, and other emails that may be related to the incident. The analyst sees that Defender’s attack disruption suspended one compromised account and captured additional risk, including phishing email identification. To build an organization-wide view, the analyst will investigate these IOCs to see who else interacted with or was targeted by the attacker.



The analyst pivots to the hunting experience and sees another “Security Copilot” button. This allows the SOC analyst to ask Security Copilot to generate a KQL query to review where else in the organization this IOC has been seen. For this example, we use “Get emails that have ‘sbeavers’ as the sender or receiver in the last 10 days” to generate a KQL query, which reduces the time to produce the query and the need to look up syntax. The analyst now understands that the adversary reached out to another user account, and the SOC needs to validate the risk level and check the account state. The analyst adds the “llodbrok” account to the incident.
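
The article does not show the generated query itself, but for the prompt above it would look roughly like the KQL in the sketch below, which also shows how such a query could be run outside the portal through the Microsoft 365 Defender advanced hunting API. The query, the endpoint usage, and the placeholder access token are illustrative assumptions, not the exact output Security Copilot produces.

```python
# Roughly what a generated query for the prompt above might look like, plus a sketch
# of running it through the Microsoft 365 Defender advanced hunting API instead of
# the portal. The access token is a placeholder (obtain it through your own AAD app
# registration / MSAL flow), and the query itself is illustrative.
import requests

KQL = """
EmailEvents
| where Timestamp > ago(10d)
| where SenderFromAddress has "sbeavers" or RecipientEmailAddress has "sbeavers"
| project Timestamp, NetworkMessageId, SenderFromAddress, RecipientEmailAddress, Subject
"""

response = requests.post(
    "https://api.security.microsoft.com/api/advancedhunting/run",
    headers={"Authorization": "Bearer <access-token>"},  # placeholder
    json={"Query": KQL},
    timeout=60,
)
response.raise_for_status()
for row in response.json().get("Results", []):
    print(row)
```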


Figure 2: Microsoft Defender Security Portal showing the query assistant within the Advanced hunting editor


After identifying all entry vectors and approached users, the SOC analyst shifts focus to the actions on targets and what happened next in the incident they are investigating. This incident does not contain a PowerShell IOC, but if the analyst found PowerShell on the “llodbrok” user’s machine, they would be able to click on the PowerShell-related alert and scroll down in the right-hand pane to find the evidence component. After the analyst clicks on the PowerShell evidence, they see the “Analyze” button with the Copilot symbol. This takes a command line and turns it into human-readable language. Oftentimes attackers leverage scripts to perform network discovery, elevate privileges, and obfuscate behavior. This script analysis reduces the time it takes to understand what the command does and respond quickly. Having human-readable scripts is also useful when the analyst is investigating an alert in a language they are unfamiliar with or haven’t worked with recently.


Figure 3: Microsoft 365 Defender portal showing the Security Copilot-generated script analysis within the Incident page


After the analyst has confirmed these indicators of compromise are legitimate, the next step is to initiate the remediation process. To do this, the analyst navigates back to the Incident page in the Microsoft 365 Defender portal and scrolls down in the right-hand pane. Right below the incident summary, the analyst will see a set of “guided response” recommendations. Having verified the incident IOCs are legitimate, the SOC analyst selects the “Classify” dropdown and the option “Compromised Account” to indicate to other analysts that this was a true positive business email compromise (BEC). The SOC analyst also sees ‘quick actions’ they can take to quickly remediate the compromised user’s account, selecting “Reset user password”, “Disable user in Active Directory”, and “Suspend user in Microsoft Entra ID”.



In this process, the SOC analyst is assisted by Security Copilot, which suggests actions to take based on actions the organization has taken in response to similar alerts or incidents in the past. This improves analysts’ abilities from day one, reducing early training requirements.


Figure 4: Microsoft 365 Defender portal showing the guided response within the Incident page


Finally, the analyst needs to send a report to their leadership. Reporting and documenting can be a challenging and time-consuming task, but with Security Copilot, the analyst can generate a post-response activity report in seconds with a single click and provide partners, customers, and leadership with a clear understanding of the incident and the actions that were taken.



Here is how it works: the SOC analyst selects the “Generate incident report” button in the upper right corner of the Incident page or the icon beside the ‘x’ in the side panel. This generates an incident report that can be copied and pasted, showing the incident title, incident details, incident summary, classification, investigation actions, remediation actions (manual or automated actions from Microsoft 365 Defender or Sentinel), and follow-up actions.


Figure 5: Microsoft 365 Defender portal showing the Security Copilot-generated incident report within the Incident page


 


What sets Security Copilot apart


As a part of this effort, our team worked side by side with security researchers to ensure that we weren’t providing just any response but a high-quality output. Today, we review a few key indicators of response quality: clarity, usefulness, omissions, and inaccuracies. These measures are built on three core areas that our team focused on: lexical analysis, semantic analysis, and human-oriented clarity analysis. The combination of these core areas provides a solid foundation for understanding human comprehension, content similarity, and key insights between the data source and the output. With the help of our quality metrics, we were able to iterate on different versions of these skills and improve their overall quality by 50%.


 


Quality measurements are important to us as they help ensure we aren’t losing key insights and that all the information is well connected. Our security researchers and data scientists partnered across organizations to bring a variety of signals from across our product stack, including threat intelligence, and a diverse range of expertise to these skills. Security Copilot has removed the need for labor-intensive data curation and quality assessment prior to model input.


 


You’ll be able to read more about how we performed quality validation in future posts.


 


Share your feedback with us


We could not talk about a feature without also talking about how important your feedback is to our teams. Our product teams are constantly looking for ways to improve the product experience, and listening to the voices of our customers is a crucial part of that process. SOC analysts can provide feedback to Microsoft through each skill’s User Interface (UI) components (as shown below). Your feedback will be routed to our team and will be used to help influence the direction of our products. We use this constant pulse check via your feedback, along with a variety of other signals, to monitor how we’re doing.




Security Copilot is in Early Access now. Sign up here to receive updates on Security Copilot and the use of AI in security. To learn more about Microsoft Security Copilot, visit the website.


 


Getting started



  1. Analyze scripts and codes with Security Copilot in Microsoft 365 Defender | Microsoft Learn

  2. Summarize incidents with Security Copilot in Microsoft 365 Defender | Microsoft Learn

  3. Create incident reports with Security Copilot in Microsoft 365 Defender | Microsoft Learn

  4. Use guided responses with Security Copilot in Microsoft 365 Defender | Microsoft Learn

  5. Microsoft Security Copilot in advanced hunting | Microsoft Learn


 


Learning more



 

Microsoft Entra ID Beginner’s Tutorial (Azure Active Directory)

This article is contributed. See the original author and article here.

Simplify and improve security for sign-in experiences with Microsoft Entra ID, the new name for Azure Active Directory. Microsoft Entra ID is a unified identity provider you can use to sign in to non-Microsoft services, like Google, AWS, Salesforce, and ServiceNow.


 




See how it’s used to manage service licensing for Microsoft 365, Office 365, Enterprise Mobility + Security, and Microsoft Purview. It features unique capabilities like Conditional Access, passwordless authentication, single sign-on, and dynamic groups. Learn how to perform the most common day-to-day tasks, like adding and editing user accounts, see the options for groups and what each does, and explore managed identities, role assignment, admin units, and additional core capabilities.


 


Jeremy Chapman, director of Microsoft 365 and a long-time endpoint management and directory services admin, explains the setup and configuration.


 


 


Just one email address to remember.


 




 


Access ALL your work services and apps. Enhanced security with multi-factor authentication and Conditional Access. Take a tour of Microsoft Entra ID.


 


 


Go beyond password-only authentication.


 




 


It just isn’t safe. Choose from multiple authentication strengths — like FIDO2 keys, Windows Hello, biometric sign-in & Microsoft’s Authenticator app. See the Microsoft Entra admin center.


 


 


Single Sign-On across devices and apps.


 




 


Microsoft Entra ID integrates with device management. Get started.


 


 


Watch our video here:


 


 


 



 



QUICK LINKS:


 


00:00 — Simplify identity management 
01:05 — Consolidate identity services
02:52 — Admin experience 
05:09 — Conditional Access 
05:39 — Manage user accounts 
07:09 — Edit users 
08:16 — Dynamic Groups 
10:22 — Admin Roles & Admin Units 
11:45 — Single Sign-On 
12:34 — Wrap up


 


 


Link References


 


For more information, check out https://aka.ms/EntraDocs


 


 


Unfamiliar with Microsoft Mechanics?


 


As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.



 


 


Keep getting this insider knowledge, join us on social:


 











Video Transcript:


 


-Imagine being able to use the same sign-in credentials to securely access all of your online services for work, not only the ones hosted by Microsoft, but even other cloud apps and service providers just using your work email address and without needing to remember your passwords. Well, all of that is possible with Microsoft Entra ID. As a common identity and access management solution, its primary job is to help you prove you are who you say you are. And once that’s verified, which is a process called authentication, you can access services that you have permissions to use, which we refer to as authorization. 


 


-So today, I’m going to walk you through all the fundamentals of Microsoft Entra ID, what it is and how it works. First, as a user to access services even from non-Microsoft clouds, like Google, Salesforce, AWS, and others. Then if you’re an identity admin, I’ll walk through the basics with a focus on users, groups, and roles. And the good news is if you’re familiar with Azure Active Directory, Microsoft Entra ID is its new name. And while there are a few new updates, it’s going to look pretty familiar. 


 


-So let’s start by looking at why you would even consolidate identity services into a single provider. And there are really quite a few reasons. First, it’s not easy to remember all the different logins that you use to access multiple apps and services. And related to that, the reality is many people will reuse their username and password across different services. 


 


-So when one of those services gets hacked and leaks your credentials, without you even knowing it, adversaries will use those leaked credentials to access other services. And what if you’re one of the responsible ones, and you don’t reuse passwords or you make a point of setting up a second factor of authentication whenever possible? Well, that’s one step better from a security point of view, but for the organizations you work for, it would still mean that they need to manage each service that you’re accessing separately, for everything from account creation, changes associated with your identity, password resets, and more.


 


-So what if you could just have one username and a unified system to log into all your work services: one that’s more secure with two factors of authentication and works with passwordless login, so you don’t need to remember multiple passwords, just your email address. It assesses sign-in risk in real time, so if someone from another country has stolen your credentials and is trying to use your account, it can block them. You can get to all of your assigned web or line-of-business apps from one central location instead of managing this yourself with lots of browser bookmarks and favorites. And for IT and your help desk, all of this can be managed in one place. Doesn’t that sound like a better option? And that’s what Microsoft Entra ID is all about: multi-cloud identity and access management, enabling secure access to your work applications and protecting your identity, which in turn helps protect the information and services you use.


 


-Now let’s switch gears to the identity admin experience and a few important things you should know about before you get started. These will become prerequisites and dependencies as you work with core capabilities. So I’ll start in the Microsoft Entra Admin Center. You can get to it by navigating to entra.microsoft.com. By the way, for Microsoft Cloud services like Microsoft 365 or Intune, an instance of Microsoft Entra is set up behind the scenes for your organization automatically. And even though the same information is presented in these different admin experiences, you can make changes in any of these locations to the same shared backend service. 


 


-For today though, I’ll keep things simple and I’ll do everything from the Microsoft Entra Admin Center. First, and as I mentioned before, with things like Google, Salesforce, and AWS services, you can manage identities for non-Microsoft services in addition to those offered by Microsoft. In enterprise applications, you can see that my environment has quite a few of these already set up. In most cases, there is a one-time operation to set each of these up where you’ll configure Microsoft Entra ID as the identity provider for that app or service, its integration details, and which users or groups can access it. 
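If you prefer to script this kind of inventory instead of browsing the portal, the same information is exposed through the Microsoft Graph REST API. Here is a minimal sketch in Python that lists the enterprise applications (service principals) in a tenant; it assumes you have already acquired an access token with the Application.Read.All permission, and the <access-token> value is a hypothetical placeholder.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder; obtain via MSAL or another OAuth flow

# List enterprise applications (service principals) with a few useful fields.
resp = requests.get(
    f"{GRAPH}/servicePrincipals",
    headers=headers,
    params={"$select": "displayName,appId,accountEnabled", "$top": "25"},
)
resp.raise_for_status()
for app in resp.json()["value"]:
    print(app["displayName"], app["appId"])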


 


-Next, if you currently have an on-premises directory service like Active Directory, you can configure it within hybrid management to work directly with Microsoft Entra ID to synchronize services from basic topologies to even more advanced ones. Then of course, as shown and mentioned, you’ll use Microsoft Entra to manage identities. Now these can be users, they can also be devices, then groups that can consist of users, devices, and managed identities. And these managed identities can include applications or other resources like a cloud-hosted virtual machine. 


 


-In protections, you’ll find authentication methods, which you’ll want to use for multifactor authentication. That’s because password-only authentication is not safe or recommended and Microsoft Entra ID makes it simple to standardize on more secure passwordless multifactor sign-ins. And Microsoft Entra supports multiple authentication methods, including biometric sign-in options with Windows Hello for Business, FIDO2 security keys, as well as mobile phones with the Authenticator app, along with other options that go beyond basic authentication using just passwords. 
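To check which authentication methods are enabled for the tenant without opening the portal, you can read the authentication methods policy from Microsoft Graph. This is a minimal sketch, assuming a token with Policy.Read.All; the token value is a hypothetical placeholder.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token with Policy.Read.All

# Read the tenant-wide authentication methods policy (FIDO2, Authenticator, and so on).
resp = requests.get(f"{GRAPH}/policies/authenticationMethodsPolicy", headers=headers)
resp.raise_for_status()
for config in resp.json()["authenticationMethodConfigurations"]:
    print(config["id"], config["state"])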


 


-And another major benefit of Microsoft Entra ID is its ability to assess risk in real time using Conditional Access. So here, we base access decisions on user risk level, the IP location where the sign-in attempt is coming from, whether the device trying to sign in is compliant, and the application being accessed. Then, as you sign in to those services, Conditional Access can decide to allow, block, or require additional authentication strength based on the controls that you set for granting access. So now you know a few of the core capabilities.
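Conditional Access policies are also readable and manageable through Microsoft Graph, which is handy for auditing what is in place. A minimal sketch that lists the policies and their state, assuming a token with Policy.Read.All; the token value is a hypothetical placeholder.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token with Policy.Read.All

# List Conditional Access policies and whether each is enabled, disabled, or report-only.
resp = requests.get(f"{GRAPH}/identity/conditionalAccess/policies", headers=headers)
resp.raise_for_status()
for policy in resp.json()["value"]:
    print(policy["displayName"], policy["state"])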


 


-Let’s look at a few of the basics that you’ll need to know when running the service on a day-to-day basis. Once you have an instance of Microsoft Entra ID running, the most common task you’ll have is managing user accounts. So here, you can see that I already have a few users added, but I’ll add another to show you how that process works. And immediately, you’ll see that I have options for users both internal to my organization and external to my organization.


 


-When you get started, you’ll typically want to add internal users as members of your organization. The user principal name, often referred to as a UPN, is normally the same as an email address, and you can use whatever standard construct you have in place. So I’ll use first initial and last name. The display name then is usually the fully spelled out first and last name. And even though this account will ultimately be used with passwordless multifactor authentication later, we’ll let the system generate a password. Then in properties, you’ll input all the user’s details, and these are important to fill in because you’ll need them later for the filtering and dynamic grouping that I’ll show you in a moment.
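The same user creation can be scripted against Microsoft Graph. This is a minimal sketch, assuming a token with User.ReadWrite.All; the name, UPN, city, and password shown are hypothetical placeholders.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token with User.ReadWrite.All

# Create an internal member user; properties like city feed the dynamic group rules shown later.
new_user = {
    "accountEnabled": True,
    "displayName": "Avery Howard",               # hypothetical user
    "mailNickname": "ahoward",
    "userPrincipalName": "ahoward@contoso.com",  # UPN usually matches the email address
    "city": "Bellevue",
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<initial-password>",        # placeholder; passwordless methods come later
    },
}
resp = requests.post(f"{GRAPH}/users", headers=headers, json=new_user)
resp.raise_for_status()
print("Created user with id:", resp.json()["id"])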


 


-So now I have all their details entered. Then next in assignments, I can manually add this user account to an existing group. So I’ll do that here. And the same is true for adding roles; as I scroll down this list of built-in roles, you’ll see they can be pretty specialized, with lots of administrator roles. Now for many user types, you won’t need to define a role. You can add them later if you want to, but for my case, I’ll just close this out and I’ll create the user account. And now we have our new user, and what’s often just as common when managing users is editing them.
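Adding the new user to an existing group has a Graph equivalent as well. A minimal sketch, assuming a token with GroupMember.ReadWrite.All; the group and user object IDs are hypothetical placeholders.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token with GroupMember.ReadWrite.All

group_id = "<group-object-id>"  # hypothetical
user_id = "<user-object-id>"    # hypothetical

# Add the user as a member of the group by referencing the user's directory object.
body = {"@odata.id": f"{GRAPH}/directoryObjects/{user_id}"}
resp = requests.post(f"{GRAPH}/groups/{group_id}/members/$ref", headers=headers, json=body)
resp.raise_for_status()  # a 204 response means the member was added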


 


-So I’m going to click into this user account. Right on the top, you’ll find some of the most common tasks for editing properties, deleting the account, resetting the password, or revoking the sessions that the selected user is currently logged into. And this will come in handy if a user, say, reports a lost or stolen device. On the left, you’ll find the applications that each user has assigned to them. Importantly, Microsoft Entra ID is often also used for license assignment with Microsoft services. And here, you can see the top level products. 
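The revoke-sessions action shown here, which is handy when a device is lost or stolen, also maps to a single Graph call. A minimal sketch, assuming a token with sufficient user-management permissions (for example, User.ReadWrite.All); the user object ID is a hypothetical placeholder.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token with user-management permissions

user_id = "<user-object-id>"  # hypothetical

# Invalidate the user's refresh tokens so active sessions have to sign in again.
resp = requests.post(f"{GRAPH}/users/{user_id}/revokeSignInSessions", headers=headers)
resp.raise_for_status()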


 


-And if I click into assignments, you can even control access to lots of the underlying apps and services within each of those top-level product plans. This allows you to curate exactly which app experiences users have access to, so it’s not all or nothing. Then in devices, you can see which devices this user has joined to Microsoft Entra, along with the details for each device. And for each user account, you can access a full set of audit logs with different events related to their identity, as well as detailed sign-in logs to see which apps they’ve recently signed into, along with their locations. Okay, so now with our users configured, let’s dig into how you’d group them together using groups. These can include users, other groups, devices, and also managed identities.
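License assignment, including turning off individual service plans within a product, and the sign-in logs can both be reached through Graph too. A minimal sketch, assuming a token with User.ReadWrite.All for the license call and AuditLog.Read.All for the sign-in query; the SKU, service plan, user ID, and UPN are hypothetical placeholders.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

user_id = "<user-object-id>"  # hypothetical

# Assign a product license but disable one underlying service plan within it.
license_body = {
    "addLicenses": [{
        "skuId": "<sku-guid>",                      # hypothetical product SKU
        "disabledPlans": ["<service-plan-guid>"],   # hypothetical plan to switch off
    }],
    "removeLicenses": [],
}
resp = requests.post(f"{GRAPH}/users/{user_id}/assignLicense", headers=headers, json=license_body)
resp.raise_for_status()

# Pull the user's recent sign-in logs (requires AuditLog.Read.All).
logs = requests.get(
    f"{GRAPH}/auditLogs/signIns",
    headers=headers,
    params={"$filter": "userPrincipalName eq 'ahoward@contoso.com'", "$top": "10"},
)
logs.raise_for_status()
for entry in logs.json()["value"]:
    print(entry["createdDateTime"], entry["appDisplayName"])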


 


-In fact, here, you can see a few different groups and types spanning Microsoft 365, distribution, and security groups. These are all based on roles, devices, locations, and more. So I’ll create a new group, and you’ll see that these can be security groups, or Microsoft 365 groups. And I’ll explain what each one of them does and we’ll start with security groups. So you’ll see from these controls that security groups are simply a logical grouping of objects in the directory. As I click into members, you’ll also see these can be users, other groups, devices, and enterprise applications. And that’s it. 


 


-Conversely though, if I back out of the process and start a Microsoft 365 group, you’ll see the difference here is that it provisions a shared set of resources, like a shared inbox, and calendar in Exchange as indicated here. And behind the scenes, it’s also creating a SharePoint document library along with a few other Microsoft 365 resources. Then for member types, this time, you’ll only see users which can be people or things like meeting rooms. And something else that you can set up for both users and devices are Dynamic Groups. 
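The difference between the two group types also shows up when you create them through Graph: a security group is just a grouping of directory objects, while a Microsoft 365 group uses the "Unified" group type and provisions the shared resources. A minimal sketch, assuming a token with Group.ReadWrite.All; the group names are hypothetical.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token with Group.ReadWrite.All

# A plain security group: no mailbox or shared resources, just a logical grouping of objects.
security_group = {
    "displayName": "Device Admins",    # hypothetical
    "mailNickname": "deviceadmins",
    "mailEnabled": False,
    "securityEnabled": True,
}

# A Microsoft 365 group: the "Unified" type provisions a shared inbox, calendar, and SharePoint library.
m365_group = {
    "displayName": "Retail Planning",  # hypothetical
    "mailNickname": "retailplanning",
    "mailEnabled": True,
    "securityEnabled": False,
    "groupTypes": ["Unified"],
}

for body in (security_group, m365_group):
    resp = requests.post(f"{GRAPH}/groups", headers=headers, json=body)
    resp.raise_for_status()
    print("Created group:", resp.json()["id"])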


 


-Now, pay attention as I change the membership type here from assigned, where you or others will manually assign members as is indicated at the bottom, to dynamic in this case. And you’ll see that members down below just change to add dynamic query. Now this is super useful because it will automatically enroll, or conversely unenroll, users or devices into groups based on their individual properties. In this case, I want to group everyone from the city where the value equals, and then I’ll type Bellevue and save it. Now I’ll go ahead and name my group Bellevue Users and hit create. It takes a moment to provision the group and its underlying services. Then if I open up the group, you’ll see that in members, it’s already found and automatically added three people working in the city of Bellevue. So now let’s move into something a bit more admin-focused and how you and your fellow admins can manage resources using admin roles.
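The same Bellevue example can be expressed as a dynamic membership rule when you create the group through Graph. A minimal sketch, assuming a token with Group.ReadWrite.All; the group name mirrors the one in the walkthrough and the mail nickname is a hypothetical placeholder.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token with Group.ReadWrite.All

# A dynamic security group whose membership is evaluated from each user's city property.
dynamic_group = {
    "displayName": "Bellevue Users",
    "mailNickname": "bellevueusers",
    "mailEnabled": False,
    "securityEnabled": True,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": '(user.city -eq "Bellevue")',
    "membershipRuleProcessingState": "On",  # start evaluating the rule immediately
}
resp = requests.post(f"{GRAPH}/groups", headers=headers, json=dynamic_group)
resp.raise_for_status()
print("Created dynamic group:", resp.json()["id"])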


 


-So I’m going to move into roles and admins. And if you’re familiar with the concept of role-based access control, or RBAC, this is how you can right-size admin-level permissions to only the things that you need to access. Of course, it’s a huge risk if you just give everyone global admin rights, especially if you have a larger IT team. So these roles can pinpoint permissions based on the resources that each admin needs to manage. So now if I jump back over to a user like Christie here, in assigned roles, I can add one, and now she can perform that function. So now let’s talk about admin units, which are another way to restrict a role’s permissions, similar to an organizational unit if you’re familiar with Active Directory, to certain departments, regions, or other segments in your organization.
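Assigning a built-in role to a user like Christie can also be scripted through the unified role management API in Graph. A minimal sketch, assuming a token with RoleManagement.ReadWrite.Directory; the role definition and principal IDs are hypothetical placeholders.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token with RoleManagement.ReadWrite.Directory

# Assign a built-in directory role to a user across the whole tenant ("/" scope).
assignment = {
    "roleDefinitionId": "<role-definition-id>",  # hypothetical ID of a built-in admin role
    "principalId": "<user-object-id>",           # hypothetical user receiving the role
    "directoryScopeId": "/",                     # tenant-wide; an admin unit can narrow this
}
resp = requests.post(f"{GRAPH}/roleManagement/directory/roleAssignments", headers=headers, json=assignment)
resp.raise_for_status()
print("Role assignment id:", resp.json()["id"])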


 


-Let me show you an example. So here, I’m going to create a new admin unit. Now I’ll give it a name, Help Desk. And this restricted management control is important because it means the tenant-level admins won’t simply inherit this role if you don’t want them to. Then I’ll assign roles, and I’ll pick a Teams administrator in this case, which will allow the users that I’ll pick next to manage Microsoft Teams settings. So now I’ll pick a few people working as Microsoft Teams admins. And from there, I can create it. Again, just those people that I defined have access to manage the Teams service. And one more component I’ll touch on today is how Microsoft Entra integrates with device management.
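Creating the Help Desk administrative unit and scoping a role to it can be done through Graph as well. A minimal sketch, assuming a token with AdministrativeUnit.ReadWrite.All and RoleManagement.ReadWrite.Directory; the role definition and user IDs are hypothetical placeholders.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

# Create the administrative unit.
au = requests.post(
    f"{GRAPH}/directory/administrativeUnits",
    headers=headers,
    json={"displayName": "Help Desk"},
)
au.raise_for_status()
au_id = au.json()["id"]

# Scope a role assignment (for example, a Teams administrator role) to just this unit.
assignment = {
    "roleDefinitionId": "<teams-admin-role-definition-id>",  # hypothetical
    "principalId": "<user-object-id>",                       # hypothetical scoped admin
    "directoryScopeId": f"/administrativeUnits/{au_id}",     # limits the role to the unit's members
}
resp = requests.post(f"{GRAPH}/roleManagement/directory/roleAssignments", headers=headers, json=assignment)
resp.raise_for_status()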


 


-So as I mentioned before, device state can be used to assess sign-in risk in real-time with Conditional Access. And it also works to enable single sign-on with something called Microsoft Entra join, so that as you sign into your device running Windows, and now even macOS, that single sign-on can transfer to local and web apps you use to access work resources. You can enable this from device settings, and importantly, require multi-factor authentication be used to register or join devices with Microsoft Entra. 


 


-And by the way, all of this works seamlessly with Microsoft Intune and other endpoint management tools as you use those to manage the broader tasks of device management from provisioning, to app distribution, and device configuration. 


 


-So those are a few of the core concepts to manage users, groups, applications, and devices. Now to learn more, check out aka.ms/EntraDocs. And keep following Microsoft Mechanics for the latest tech updates. And thanks for watching.


 




Microsoft Teams Premium: The smart place to work is also a smart investment

Microsoft Teams Premium: The smart place to work is also a smart investment

This article is contributed. See the original author and article here.

In a 2023 commissioned Total Economic Impact™ study, Forrester Consulting found that Teams Premium helped organizations save time, improve security, and reduce costs—all contributing to a projected return on investment (ROI) of 108 to 360 percent over three years. Today, we’re excited to share new advanced collaboration capabilities that help you meet the challenges of the new way of work.

The post Microsoft Teams Premium: The smart place to work is also a smart investment appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Known issue: Incorrect count for onboarded Microsoft Defender for Endpoint devices report

Known issue: Incorrect count for onboarded Microsoft Defender for Endpoint devices report

This article is contributed. See the original author and article here.

We were recently alerted to an issue where devices onboarded to Microsoft Defender for Endpoint are not properly reflected in the Microsoft Intune admin center report for devices with/without the Defender for Endpoint sensor. We’ve identified a bug that is causing incorrect counts for the number of devices onboarded to Defender for Endpoint and are working on a fix that is expected to be released later this year. The report is located under Endpoint security > Microsoft Defender for Endpoint and on the connector page.


 


Note: This is a reporting bug only. It does not impact onboarding to Defender for Endpoint.


 


A screenshot highlighting the Devices with/without Microsoft Defender for Endpoint sensor report in Endpoint security.


 


Temporary workaround



As a temporary workaround, check for the Defender for Endpoint onboarding status in the Antivirus agent status report under Reports > Microsoft Defender Antivirus. Look for the columns for MDE Onboarding Status and MDE Sense Running State for more information.
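If you want to pull the same Antivirus agent status data programmatically while the fix rolls out, the Intune reporting export API in Microsoft Graph can produce it. This is only a sketch: the report name used here ("DefenderAgents") is our assumption for the Antivirus agent status report, the endpoint shown is the beta version of the export API, and the token is a placeholder assumed to have Intune reporting permissions (for example, DeviceManagementManagedDevices.Read.All). Verify the report name and permissions against the Intune reports documentation before relying on it.

import time
import requests

GRAPH_BETA = "https://graph.microsoft.com/beta"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token with Intune reporting permissions

# Queue an export of the Defender agent status report (report name is an assumption).
job = requests.post(
    f"{GRAPH_BETA}/deviceManagement/reports/exportJobs",
    headers=headers,
    json={"reportName": "DefenderAgents", "format": "csv"},
)
job.raise_for_status()
job_id = job.json()["id"]

# Poll until the export completes, then grab the download URL for the zipped CSV.
while True:
    status = requests.get(f"{GRAPH_BETA}/deviceManagement/reports/exportJobs('{job_id}')", headers=headers)
    status.raise_for_status()
    if status.json()["status"] == "completed":
        print("Download the report from:", status.json()["url"])
        break
    time.sleep(10)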


 


We’ll update this post when the fix has rolled out or as more information becomes available. If you have any questions, let us know through comments on this post, or by tagging @IntuneSuppTeam on Twitter.