Transition to real time journeys – the time is now 

This article is contributed. See the original author and article here.

In September 2023, we announced that Dynamics 365 Customer Insights and Dynamics 365 Marketing are coming together as one offering named Dynamics 365 Customer Insights, an AI-driven solution that revolutionizes your customers’ experiences.

Within this solution are two apps:

  • Customer Insights – Data (previously known as Dynamics 365 Customer Insights) empowers you to know your customers through a 360-degree profile.
  • Customer Insights – Journeys (previously known as Dynamics 365 Marketing) allows you to engage your customers with personalized experiences based on that profile.

In the same timeframe, we also announced the transition from outbound marketing to real-time. The transition to real-time is independent from product name or licensing changes.

New customer environments only include real-time journeys and event management. Existing customers, if necessary, can add outbound marketing through a self-serve interface. We will continue to support outbound marketing but will not be adding new enhancements.  We encourage all customers to transition to and use the exciting new capabilities available in real-time journeys. In this blog we cover how to plan for the transition to real-time and the resources that are available to you to help make this seamless. 

How do the changes impact me? 

If you are a new customer of the Customer Insights – Journeys app, you get real-time journeys only (including event planning), so you start with the most current and advanced technology and avoid the time and expense of transitioning from outbound later.

Existing customer environments using outbound marketing show the new product name but otherwise remain unchanged. When provisioning a new environment, copying an existing one, or upgrading a solutions-only environment to paid, outbound marketing is not installed by default.

If the system detects an existing environment with outbound marketing (in the same geo), the Settings > Version page shows an Enable outbound link to install outbound marketing. If you do not see the link or have issues enabling outbound, reach out to us directly as explained in the Transition overview page (see links in the resources section later).

When should I transition to Real-time? 

Though we haven’t announced a date for ending outbound support, the time to transition is now! Rest assured, we will use our product telemetry data and customer feedback to provide an adequate time window so that all customers can plan and complete their transition before support for outbound ends.

But why wait? Real-time journeys offers most of the capabilities of outbound marketing, plus many that outbound doesn’t (and won’t) have, such as the ability to respond and react in near-real time, high scale of 100M contacts/300M interactions in public preview (with even more on the roadmap), and new and exciting capabilities with generative AI/Copilot.


How to transition? 

You can transition all at once or gradually depending on your business needs, capabilities you use in outbound marketing, and resources availability. 

In a one-shot transition, you recreate all your journeys, segments, and other assets in real-time journeys and then switch over to them within a short period (a few days).

The other approach is to transition gradually over time. You can create all your new campaigns in real-time journeys and leave your current campaigns running in outbound marketing until they complete. This way you build confidence and train your team gradually over time. We’ve prepared guidance on how to manage consent in hybrid/transition situations. With custom reporting capability (see release plan below), single analytics across both outbound and real-time can be created for the hybrid situation.

We know that most of your effort is usually spent in creating and finalizing emails, so we have built a tool in real-time journeys to let you Import outbound emails, templates, and content blocks so you can preserve and reuse them. You will also have a tool to help you quickly migrate consent records.

We have assembled real-time journeys transition resources to cover transition planning and tools for each major product area.  

Real-time transition capabilities

With either approach, you will want to take stock of which outbound marketing capabilities you currently use, how they are supported in real-time journeys, and whether any data or assets need to be transferred from outbound marketing to real-time journeys. In the transition resources section of our product documentation area, you will find a page for each functional area with guidance, workarounds, and the roadmap for specific capabilities. If you find specific outbound marketing capabilities that you need but that are not yet available in real-time journeys, be assured that we are working to add them as fast as we can. For example, we already have a published release plan for these commonly requested features:

We are actively working on prioritizing additional features that have been requested. These are being scheduled to be part of the next release wave: 

  • Consent – Double opt-in 
  • Segmentation – Export, Template, Email delivery status 
  • Scheduling – Send scheduling 
  • Email – Content A/B testing 
  • Journey – Branch on email deliverability status, Templates
  • Tracking – Redirection URL 
  • Analytics – Click/Geo maps, combined analytics across outbound and real-time 
  • Event planning – event portal, session capacity, recurring events 
  • Forms – unmapped custom fields, form prefill, update none/multiple entities on submission, leads with parent contact 

Please note that the above is not an exhaustive list. We release new updates every month. We use your feedback to revise our roadmap continuously to ensure you can transition with confidence.  

Conclusion 

A large number of customers are already using and benefiting from the ease of use and scale offered by real-time journeys. Over the next few months, we are prioritizing work to ensure transitioning to real-time journeys is quick and easy for every customer. While outbound marketing continues to be available and supported for existing customers, we strongly recommend that everyone still using outbound marketing transition to real-time journeys to propel your business into the future of marketing and customer experience.

Resources

Purpose: Product licensing and name changes
  • Microsoft Sales Copilot, Dynamics 365 Customer Insights, and cloud migration reshape the future of business – Microsoft Dynamics 365 Blog
  • Dynamics 365 Customer Insights FAQs – Dynamics 365 Customer Insights | Microsoft Learn
  • Customer Insights Pricing | Microsoft Dynamics 365

Purpose: Provisioning changes for Customer Insights – Journeys (previously Dynamics 365 Marketing)
  • Transition overview – Dynamics 365 Customer Insights | Microsoft Learn
  • Real-time journeys transition FAQs – Dynamics 365 Customer Insights | Microsoft Learn

Purpose: How to plan the transition to real-time
  • Real-time journeys transition resources – Dynamics 365 Customer Insights | Microsoft Learn

Purpose: Differences between real-time and outbound that may impact transition
  • Review specific pages under Functional areas overview – Dynamics 365 Customer Insights | Microsoft Learn. These pages include differences, suggested workarounds, and the roadmap for closing noted differences.

Purpose: Transitioning consent management
  • Consent management and double opt-in transition guidance – Dynamics 365 Customer Insights | Microsoft Learn

The post Transition to real time journeys – the time is now  appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

“Copilot, help set my New Year’s goals”:  Using Viva Goals + Microsoft Copilot to make goals in 2024

The start of a new year is often seen as a time to reflect on the past, plan for the future, and set New Year’s resolutions for ourselves. It is also a key time for business leaders to set goals to help their organizations and teams accomplish more in the new year, whether those goals are a new product release, business growth, or workplace culture improvement.


 


Furthermore, we know that simply writing down your goals is often not enough to achieve them! You also need to communicate your goals with key stakeholders, track your progress, and measure your results. This can be challenging, especially if your organization has multiple goals, competing priorities, or cross-team dependencies.


 


This is where Viva Goals and Microsoft Copilot can help.


 


Viva Goals is Microsoft’s solution for creating, managing, and tracking organizational goals. It is founded on the Objective and Key Result (OKR) framework, yet can be customized to meet other goal-setting strategies. To learn more about changing your goal terms from “Objectives and Key Results” to other frameworks or labels, visit our page on customizing terminology in Viva Goals.


 


With the content generation and summarization capabilities in Copilot in Viva Goals, creating and tracking your goals is becoming even easier.


 


Quickly create your goals with Copilot in Viva Goals


 


One challenge we frequently hear from customers is uncertainty about getting started with writing actionable, outcome-driven goals. Setting appropriate and ambitious goals can be daunting, but using Copilot can make the process easier.


 


From a quick click of the “Copilot” button in the Viva Goals app (available on Microsoft Teams or in your browser), Copilot is ready to help you generate new goals or OKRs:


 




Copilot in Viva Goals can be accessed from the tool bar or the Copilot icon within Viva Goals.


 


Copilot in Viva Goals can help you generate goals in two different ways:


 


Generating new goals based on context you provide (ex: industry, roles, business mission)



Clicking “Help me generate new OKRs” means Copilot will help you in crafting OKRs, using the conversational interface and its repository of sample OKRs.


 




Copilot in Viva Goals will generate goals based on prompts or information you provide in the chat.


 


By asking Copilot to “Write an OKR for this year’s plans to roll out Microsoft Copilot to employees across my organization,” you may get a result like:


 


Objective: Roll out Microsoft Copilot to employees across the organization
Key Result (KR): Train 60% of our employees on the basics by completing the “Copilot for Microsoft 365” training in Viva Learning
KR: Set up all required infrastructure and hardware to support Microsoft Copilot for these employees
KR: Ensure 60% of all newly hired employees have used Microsoft Copilot in their first month of onboarding


 


Note that this content is AI-generated and will change based on inputs / sample data.

Using the Copilot interface, you can ask Copilot to regenerate these OKRs, refine them (“be more conservative,” “increase the adoption rate,” etc.), or publish them to your Viva Goals instance.


 


Generating goals from a document you provide (ex: business plan, strategy paper)



Oftentimes, business leaders will already have strategy or business planning documents they have been circulating with their leadership teams. This can be a great place to get started: by uploading these strategy documents to Viva Goals, Copilot can identify potential goals from the document and format them into actionable OKRs. This capability is currently available for local .docx files, with support for additional file types and file sources expanding in the coming months.


 




Copilot in Viva Goals can use content from your existing documents to suggest outcome-based goals.


 


One thing to remember: using Copilot means that you, as the user, are always in control of what gets saved, published, and shared.


 


Copilot in Microsoft 365 can also be helpful in writing goals


 


For users who are not currently using Viva Goals, or who are looking elsewhere for suggestions on annual goals, Copilot in M365 can be a great place to get started. Copilot in Word or in the Microsoft Copilot web experience can be great resources for creating the right goals for you and your organization. You can use prompts like “Write 3 OKRs for building a new (product/service) in the new year” or “Provide some goal suggestions for boosting employee morale” and work with Microsoft Copilot to refine these goals.


 


Furthermore, at Ignite last November (2023), we also announced that Microsoft 365 Copilot will be enhanced with Viva in early 2024. This means users will have access to Viva functionality within the Copilot for Microsoft 365 experience, including a chat experience that works across Viva data and apps to support employees, managers, and leaders. To learn more, check out the announcement from our blog in November, New ways Microsoft Copilot and Viva are transforming the employee experience.


 


Just make sure that after creating your goals, you are communicating these goals to your stakeholders and tracking your progress!


 


Summarizing your goals with Copilot


 


With Copilot, it is even easier to summarize and share your goal progress. Copilot uses context from your goal status updates and check-ins to generate summaries of your progress, making it even easier to share your current status with other teams and leadership.


 




Copilot in Viva Goals will quickly summarize your goals for easy sharing.


 


You can work with Copilot to tailor the update messages to your audience by asking the conversational AI to make the summary content more succinct, detailed, or professional. Looking to quickly share these updates with your teams, audiences or stakeholders? Use functionality within Viva Goals to broadcast your updates to email via Outlook or to post on Viva Engage with just a few clicks.


 




With the Viva Goals integration into Viva Engage, it’s easier than ever to share your team goals with your community.


 


It has never been easier to get started with setting and tracking your goals with Microsoft and Viva Goals, especially with the power of AI. Always make sure to review Copilot’s responses to make sure the suggestions and content it presents are relevant to your organization and your goals.


 


Set your 2024 Goals with Copilot today


 


Copilot in Viva Goals has been available to Viva suite customers in public preview since December 2023 and will be generally available in early 2024. NOTE: Customers with Viva suite licenses interested in using Copilot in Viva Goals should work with their IT admins to enable the public preview of Copilot for users from their Microsoft Admin Center. To learn more about enabling Copilot in Viva Goals, please visit our Copilot in Viva Goals documentation.


 


Microsoft will also be hosting a webinar session on January 31st, 8am US-PT, for those interested in a live demo and to hear how Copilot in Viva Goals is helping address goal-setting and tracking challenges. More details available at Microsoft Virtual Event “Discovering the Power of Copilot in Viva Goals”.


 


Have feedback about Copilot in Viva Goals? Use the feedback tool in Viva Goals to let us know your thoughts.


 


From the Microsoft Viva Goals team to yours, we wish you success in achieving your goals in the new year!

Easily Manage Privileged Role Assignments in Microsoft Entra ID Using Audit Logs

One of the best practices for securing your organization’s data is to follow the principle of least privilege, which means granting users the minimum level of permissions they need to perform their tasks. Microsoft Entra ID helps you apply this principle by offering a wide range of built-in roles as well as allowing you to create custom roles and assign them to users or groups based on their responsibilities and access needs. You can also use Entra ID to review and revoke any role assignments that are no longer needed or appropriate.


 


It can be easy to lose track of role assignments if admin activities are not carefully audited and monitored. Routinely checking role assignments and generating alerts on new role assignments are effective ways to track and manage privileged access.


 


Chances are that when a user with privileged roles is approached, they’ll say they need the role. This may be true; however, many times users will unknowingly say they need those permissions to carry out certain tasks when they could be assigned a role with lower permissions. For example, a user can reset user passwords as a Global Administrator, but that does not mean they couldn’t do so with another role that has far fewer permissions.


 


Defining privileged permissions


 


Privileged permissions in Entra ID can be defined as “permissions that can be used to delegate management of directory resources to other users, modify credentials, authentication or authorization policies, or access restricted data.” Entra ID roles each have a list of permissions defined to them. When an identity is granted the role, the identity also inherits the permissions defined in the role.


 


It’s important to check the permissions of these roles. The permissions defined in all built-in roles can be found here. For example, there are a few permissions that are different for the Privileged Authentication Administrator role than the Authentication Administrator role, giving the former more permissions in Entra ID. The differences between the authentication roles can be viewed here.
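As a sketch of what such a comparison boils down to, the gap between two similar roles is just a set difference over their permission lists. The permission names below are invented placeholders, not the actual Entra ID permission definitions:

```python
# Compare two similar roles' permission lists with a set difference.
# The permission strings are made-up placeholders for illustration.
authentication_admin = {
    "users.password.reset",
    "users.mfa.update",
}
privileged_authentication_admin = authentication_admin | {
    "users.privilegedMfa.update",
}

# Permissions the more privileged role grants on top of the lesser one.
extra = privileged_authentication_admin - authentication_admin
print(sorted(extra))  # ['users.privilegedMfa.update']
```

The same approach works against the real permission lists from the built-in roles documentation.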


 


Another example of having differences between similar roles is for the end user administration roles. The differences and nuances between these roles are outlined in detail here.


 


Auditing activity


 


To decide if a user really needs a role, it’s crucial to monitor their activities and find the role with the least privilege that allows them to carry out their work. You’ll need Entra ID audit logs for this. Entra ID audit logs can either be sent to a Log Analytics Workspace or connected to a Sentinel instance.


 


There are two methods that can be used to get the events carried out by admin accounts. The first makes use of the IdentityInfo table, which is only available in Sentinel after enabling User and Entity Behavior Analytics (UEBA). If you aren’t using UEBA in Sentinel, or if you’re querying a Log Analytics Workspace, you’ll need to use the second method described under the next heading.


 


Using Microsoft Sentinel


 


To ingest Entra ID audit logs into Microsoft Sentinel, the Microsoft Entra ID data connector must be enabled, and the Audit Logs must be ticked as seen below. 


 




Figure 1 Entra ID data connector in Sentinel with Audit logs enabled 


 


The IdentityInfo table stores user information gathered by UEBA. Therefore, it also includes the Entra ID roles a user has been assigned. This makes it very simple to get a list of accounts that have been assigned privileged roles. 


 


The query below will give a unique list of activities an account has taken, as well as which roles the account has been assigned: 


 

AuditLogs 
| where TimeGenerated > ago(90d) 
// Resolve the actor: a user (UPN / object id) or, failing that, an app.
| extend ActorName = iif( 
                         isnotempty(tostring(InitiatedBy["user"])),  
                         tostring(InitiatedBy["user"]["userPrincipalName"]), 
                         tostring(InitiatedBy["app"]["displayName"]) 
                     ) 
| extend ActorID = iif( 
                       isnotempty(tostring(InitiatedBy["user"])),  
                       tostring(InitiatedBy["user"]["id"]), 
                       tostring(InitiatedBy["app"]["id"]) 
                   ) 
| where isnotempty(ActorName) 
// Join UEBA's IdentityInfo to pull each account's assigned roles;
// strlen > 2 drops accounts with an empty role list ("[]").
| join (IdentityInfo 
    | where TimeGenerated > ago(7d) 
    | where strlen(tostring(AssignedRoles)) > 2 
    | summarize arg_max(TimeGenerated, *) by AccountUPN 
    | project AccountObjectId, AssignedRoles) 
    on $left.ActorID == $right.AccountObjectId 
| summarize Operations = make_set(OperationName) by ActorName, ActorID, Identity, tostring(AssignedRoles) 
| extend OperationsCount = array_length(Operations) 
| project ActorName, AssignedRoles, Operations, OperationsCount, ActorID, Identity 
| sort by OperationsCount desc 

 


This will give results for all accounts that carried out tasks in Entra ID and may generate too many operations that were not privileged. To filter for specific Entra ID roles, the following query can be run where the roles are defined in a list. Three roles have been added as examples, but this list can and should be expanded to include more roles: 


 

let PrivilegedRoles = dynamic(["Global Administrator", 
                               "Security Administrator", 
                               "Compliance Administrator" 
                              ]); 
AuditLogs 
| where TimeGenerated > ago(90d) 
| extend ActorName = iif( 
                         isnotempty(tostring(InitiatedBy["user"])),  
                         tostring(InitiatedBy["user"]["userPrincipalName"]), 
                         tostring(InitiatedBy["app"]["displayName"]) 
                     ) 
| extend ActorID = iif( 
                       isnotempty(tostring(InitiatedBy["user"])),  
                       tostring(InitiatedBy["user"]["id"]), 
                       tostring(InitiatedBy["app"]["id"]) 
                   ) 
| where isnotempty(ActorName) 
| join (IdentityInfo 
    | where TimeGenerated > ago(7d) 
    | where strlen(tostring(AssignedRoles)) > 2 
    | summarize arg_max(TimeGenerated, *) by AccountUPN 
    | project AccountObjectId, AssignedRoles) 
    on $left.ActorID == $right.AccountObjectId 
| where AssignedRoles has_any (PrivilegedRoles) 
| summarize Operations = make_set(OperationName) by ActorName, ActorID, Identity, tostring(AssignedRoles) 
| extend OperationsCount = array_length(Operations) 
| project ActorName, AssignedRoles, Operations, OperationsCount, ActorID, Identity 
| sort by OperationsCount desc 

 


Once the query is run, the results will give insights into the activities performed in your Entra ID tenant and what roles those accounts have. In the example below, the top two results don’t pose any problems. However, the third row contains a user that has the Global Administrator role and has created a service principal. The permissions needed to create a service principal can be found in roles less privileged than the Global Administrator role. Therefore, this user can be given a less privileged role. To find out which role can be granted, check this list, which contains the least privileged role required to carry out specific tasks in Entra ID. 
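The decision logic behind that review can be sketched in a few lines: rank the roles, map each observed operation to the least-privileged role that permits it, and flag accounts whose assigned role outranks what their operations require. The role ranking and the operation-to-role mapping below are illustrative examples, not the official least-privileged-role list:

```python
# Suggest a lower-privileged role when an account's observed operations
# don't need its current role. Rankings and mappings are illustrative.
ROLE_RANK = {
    "User Administrator": 1,
    "Application Administrator": 2,
    "Global Administrator": 3,
}

LEAST_PRIVILEGED_ROLE = {  # hypothetical mapping: operation -> role
    "Reset user password": "User Administrator",
    "Add service principal": "Application Administrator",
}

def suggest_role(assigned_role, operations):
    """Return a lower-ranked role covering every known operation,
    or None if the assigned role is already the minimum."""
    needed = max(
        (ROLE_RANK[LEAST_PRIVILEGED_ROLE[op]]
         for op in operations if op in LEAST_PRIVILEGED_ROLE),
        default=ROLE_RANK[assigned_role],
    )
    if needed < ROLE_RANK[assigned_role]:
        return next(r for r, rank in ROLE_RANK.items() if rank == needed)
    return None

print(suggest_role("Global Administrator", ["Add service principal"]))
# -> Application Administrator
```

In practice you would feed in the Operations sets returned by the query above and the least-privileged-role guidance from Microsoft Learn.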


 




Figure 2 Actions taken by users in Entra ID


 


Using Log Analytics Workspace


 




Figure 3 Configuring the forwarding of Entra ID Audit logs to a Log Analytics Workspace


 


To ingest Entra ID audit logs into a Log Analytics Workspace, follow these steps. 


 


Because there is no table that contains the roles an identity has been granted, you’ll need to add the list of users to the query and filter them. There are multiple ways to get a list of users who have been assigned a specific Entra ID role. A quick way to do this is to go to Entra ID and then select Roles and administrators. From there, select the role and export the identities that have been assigned to it. It’s important to have the User Principal Names (UPNs) of the privileged users. You’ll need to add these UPNs, along with the roles the user has, to the query. Some examples have been given in the query itself. If the user has more than one role, then all roles must be added to the query.


 

datatable(UserPrincipalName:string, Roles:dynamic) [ 
    "admin@contoso.com", dynamic(["Global Administrator"]), 
    "admin2@contoso.com", dynamic(["Global Administrator", "Security Administrator"]), 
    "admin3@contoso.com", dynamic(["Compliance Administrator"]) 
] 
| join (AuditLogs 
        | where TimeGenerated > ago(90d) 
        | extend ActorName = iif( 
                                isnotempty(tostring(InitiatedBy["user"])),  
                                tostring(InitiatedBy["user"]["userPrincipalName"]), 
                                tostring(InitiatedBy["app"]["displayName"]) 
                            ) 
        | extend ActorID = iif( 
                            isnotempty(tostring(InitiatedBy["user"])),  
                            tostring(InitiatedBy["user"]["id"]), 
                            tostring(InitiatedBy["app"]["id"]) 
                        ) 
        | where isnotempty(ActorName) ) on $left.UserPrincipalName == $right.ActorName 
| summarize Operations = make_set(OperationName) by ActorName, ActorID, tostring(Roles) 
| extend OperationsCount = array_length(Operations) 
| project ActorName, Operations, OperationsCount, Roles, ActorID 
| sort by OperationsCount desc 
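
Because the datatable literal at the top of this query has to mirror the exported role assignments, a small helper can generate it from the export instead of editing it by hand. This is a sketch; the UPNs are examples:

```python
# Build the KQL datatable literal from a mapping of UPN -> assigned
# roles (e.g. parsed from the Roles and administrators export).
import json

def build_datatable(assignments):
    rows = ",\n".join(
        f'    "{upn}", dynamic({json.dumps(roles)})'
        for upn, roles in sorted(assignments.items())
    )
    return ("datatable(UserPrincipalName:string, Roles:dynamic) [\n"
            f"{rows}\n]")

assignments = {
    "admin@contoso.com": ["Global Administrator"],
    "admin2@contoso.com": ["Global Administrator", "Security Administrator"],
}
print(build_datatable(assignments))
```

Paste the generated block in place of the hand-written datatable and the rest of the query is unchanged.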

 


Once you run the query, the results will give insights into the activities performed in your Entra ID tenant by the users you have filtered for. In the example below, the top two results can cause problems. Both accounts have the Global Administrator role, but their operations don’t require it. The permissions needed for these operations can be found in roles less privileged than Global Administrator. Therefore, these users can be given a less privileged role. To find out which role can be granted, check this list, which contains the least privileged role required to carry out specific tasks in Entra ID.


 




Figure 4 Actions taken by users in Entra ID


 


If this user still requires the Global Administrator role, then the Security Administrator role becomes redundant, since Global Administrator includes all of the Security Administrator role’s permissions.


 


Conclusion


 


Keeping accounts with privileges that are not required keeps your attack surface larger than it needs to be. By ingesting Entra ID audit logs, you can query and identify users who have unnecessary, over-privileged roles. You can then find a suitable alternative role for them. 


 


Timur Engin


LinkedIn  Twitter  


  


 


Learn more about Microsoft Entra:   



Empower Azure Video Indexer Insights with your own models

Overview 


 


Azure Video Indexer (AVI) offers a comprehensive suite of models that extract diverse insights from the audio, transcript, and visuals of videos. Recognizing the boundless potential of AI models and the unique requirements of different domains, AVI now enables integration of custom models. This enhances video analysis, providing a seamless experience both in the user interface and through API integrations. 


 


The Bring Your Own (BYO) capability enables the process of integrating custom models. Users can provide AVI with the API for calling their model, define the input via an Azure Function, and specify the integration type. Detailed instructions are available here.


 


Demonstrating this functionality, a specific example involves the automotive industry: Users with numerous car videos can now detect various car types more effectively. Utilizing AVI’s Object Detection insight, particularly the Car class, the system has been expanded to recognize new sub-classes: Jeep and Family Car. This enhancement employs a model developed in Azure AI Vision Studio using Florence, based on a few-shot learning technique. This method, leveraging the foundational Florence Vision model, enables training for new classes with a minimal set of examples – approximately 15 images per class. 


 


The BYO capability in AVI allows users to efficiently and accurately generate new insights by building on existing insights such as object detection and tracking. Instead of starting from scratch, users can begin with a well-established list of cars that have already been detected and tracked throughout the video, each with a representative image. Users can then make only a small number of requests to the new Florence-based model to differentiate the cars by model.
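The core idea is selecting one representative crop per tracked car and classifying only those. This is a pure-logic sketch: the field names and the stub classifier are illustrative, and in the real flow the classifier would be the Florence-based model behind the Azure AI Vision API:

```python
# Pick one representative crop per tracked car from AVI-style
# object-detection output, then classify only those crops.
def representative_crops(detected_objects):
    """Keep the highest-confidence instance for each tracked object id."""
    best = {}
    for obj in detected_objects:
        oid = obj["id"]
        if oid not in best or obj["confidence"] > best[oid]["confidence"]:
            best[oid] = obj
    return list(best.values())

def classify_stub(crop):
    # Placeholder for the custom-model call ("Jeep" vs. "Family Car").
    return "Jeep" if crop["width"] > 200 else "Family Car"

tracked = [
    {"id": 1, "confidence": 0.71, "width": 240},
    {"id": 1, "confidence": 0.93, "width": 260},  # best crop for car 1
    {"id": 2, "confidence": 0.88, "width": 150},
]
labels = {c["id"]: classify_stub(c) for c in representative_crops(tracked)}
print(labels)  # one model call per tracked car, not per frame
```

This is what keeps the number of custom-model requests proportional to the number of cars rather than the number of frames.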


 


Note: This article is accompanied by a step-by-step code-based tutorial. Please visit the official Azure Video Indexer “Bring Your Own” Sample under the Video Indexer Samples Github Repository. 


 


High Level Design and Flow 


 


To demonstrate building a customized AI pipeline, we will be using the following pipeline that leverages several key aspects of Video Indexer components and integrations:  




 


1. Users employ their existing Azure Video Indexer account on Azure to index a video, either through the Azure Video Indexer Portal or the Azure Video Indexer API.


 


2. The Video Indexer account integrates with a Log Analytics workspace, enabling the publication of Audit and Events Data into a selected stream. For additional details on video index collection options, refer to: Monitor Azure Video Indexer | Microsoft Learn.


3. Indexing operation events (such as “Video Uploaded,” “Video Indexed,” and “Video Re-Indexed”) are streamed to Azure Event Hubs. Azure Event Hubs enhances the reliability and persistence of event processing and supports multiple consumers through “Consumer Groups.” 


 


4. A dedicated Azure Function, created within the customer’s Azure Subscription, activates upon receiving events from the EventHub. This function specifically waits for the “Indexing-Complete” event to process video frames based on criteria like object detection, cropped images, and insights. The compute layer then forwards selected frames to the custom model via Cognitive Services Vision API and receives the classification results. In this example it sends the crops of the representative image for each tracked car in the video. 
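The function's gatekeeping step can be sketched as a simple predicate: only an event signaling completed indexing should trigger the custom-model stage. The event field name and value below are assumptions about the payload, not the exact Video Indexer event schema:

```python
# Only act on the event that signals indexing has finished; ignore
# "Video Uploaded" and other intermediate events.
def should_process(event):
    # Field name is an assumption about the streamed event payload.
    return event.get("eventName") == "Indexing-Complete"

for e in [{"eventName": "Video Uploaded"},
          {"eventName": "Indexing-Complete"}]:
    print(e["eventName"], "->", should_process(e))
```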


 


Note: The integration process involves strategic selection of video frames for analysis, leveraging AVI’s car detection and tracking capabilities, to only process representative cropped images of each tracked car in the custom model. 


 


5. The compute layer (Azure Function) then transmits the aggregated results from the custom model back to the Azure API to update the existing indexing data using the Update Video Index  API Call.


 


6. The enriched insights are subsequently displayed on the Video Indexer Portal. The ID in the custom model matches the ID in the original insights JSON. 
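Because the ids match, merging the custom model's labels back into the insights before the Update Video Index call is a straightforward join. The JSON field names in this sketch are simplified assumptions, not the exact Video Indexer insights schema:

```python
# Enrich the original insights with the custom model's labels by
# matching object ids, then send the result to Update Video Index.
def enrich_insights(insights, custom_results):
    labels = {r["id"]: r["label"] for r in custom_results}
    for obj in insights["detectedObjects"]:
        if obj["id"] in labels:
            obj["customClass"] = labels[obj["id"]]
    return insights

insights = {"detectedObjects": [{"id": 1, "class": "car"},
                                {"id": 2, "class": "car"}]}
custom = [{"id": 1, "label": "Family Car"}, {"id": 2, "label": "Jeep"}]
print(enrich_insights(insights, custom))
```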


 


Figure 2: New Insight widget in AVI for the custom model results


 


Note: for more in-depth step-by-step tutorial accomplished with code sample, please consult the official Azure Video Indexer GitHub Sample under the “Bring-Your-Own” Section.  


 


Result Analysis 


 


The outcome is a novel insight displayed in the user interface, revealing the outcomes from the custom model. This application allowed for the detection of a new subclass of objects, enhancing the video with additional, user-specific insights. In the examples provided below, each car is distinctly classified: for instance, the white car is identified as a family car (Figure 3), whereas the red car is categorized as a jeep (Figure 4). 


 


Figure 3: Azure Video Indexer with the new custom insight for the white car classified as a family car.


 


 


Figure 4: Azure Video Indexer with the new custom insight for the red car classified as family jeep.


 


Conclusions 


 


With only a handful of API calls to the bespoke model, the system effectively conducts a thorough analysis of every car featured in the video. This method, which involves the selective use of certain images for the custom model combined with insights from AVI, not only reduces expenses but also boosts overall efficiency. It delivers a holistic analysis tool to users, paving the way for endless customization and AI integration opportunities. 

Decoding the Dynamics: Dapr vs. Service Meshes


This article is contributed. See the original author and article here.

Dapr and Service Meshes are increasingly usual suspects in cloud-native architectures. However, I noticed that there is still some confusion about their purpose, especially because of some overlapping features. People sometimes wonder how to choose between Dapr and a Service Mesh, or even whether both should be enabled at the same time.


 


The purpose of this post is to highlight the differences, especially in the way they handle mTLS, as well as the impact on the application code itself. You can already find a summary of how Dapr and Service Meshes differ on the Dapr web site, but the explanations are not deep enough to really understand the differences. This blog post is an attempt to dive deeper and give you a clear picture of what’s going on behind the scenes. Let me first start with what Dapr and Service Meshes have in common.


 


Things that Dapr and Service Meshes have in common



  • Secure service-to-service communication with mTLS encryption

  • Service-to-service metric collection

  • Service-to-service distributed tracing

  • Resiliency through retries


Yes, this is the exact same list as the one documented on the Dapr web site! However, I will later focus on the mTLS bits, because you might think these are equivalent, overlapping features, but the way Dapr and Service Meshes enforce mTLS is not the same. I’ll show some concrete examples with Dapr and the Linkerd Service Mesh to illustrate the use cases.


 


On top of the above list, I’d add:


 



  • They both leverage the sidecar pattern. Although the Istio Service Mesh is exploring Ambient Mesh, which is sidecar-free, the sidecar approach is still mainstream today. Here again, the role of the sidecars and what happens during the injection is completely different between Dapr and Service Meshes.

  • They both allow you to define fine-grained authorization policies

  • They both help deal with distributed architectures


 


Before diving into the meat of it, let us see how they totally differ.


 


Differences between Dapr and Service Meshes



  • Applications are Mesh-agnostic, while they must explicitly be Dapr-aware to leverage the Dapr capabilities. Dapr infuses the application code. Being Dapr-aware does not mean that you must use a specific SDK: every programming language that has an HTTP and/or gRPC client can benefit from the great Dapr features. However, the application must comply with some Dapr prerequisites, as it must expose an API to initialize Dapr’s app channel. 

  • Meshes can deal with both layer-4 (TCP) and layer-7 traffic, while Dapr focuses only on layer-7 protocols such as HTTP, gRPC, AMQP, etc.

  • Meshes serve infrastructure purposes while Dapr serves application purposes

  • Meshes typically have smart load balancing algorithms

  • Meshes typically let you define dynamic routes across multiple versions of a given web site/API

  • Some meshes ship with extra OAuth validation features

  • Some meshes let you stress your applications through Chaos Engineering techniques, by injecting faults, artificial latency, etc.

  • Meshes typically incur a steep learning curve while Dapr is much smoother to learn. On the contrary, Dapr even eases the development of distributed architectures.

  • Dapr provides true service discovery; meshes do not

  • Dapr is designed from the ground up to deal with distributed and microservice architectures, while meshes can help with any architecture style, but prove to be a good ally for microservices.


 


Demo material


I will reuse a demo app that I developed 4 years ago (time flies): a Linkerd Calculator. The below figure illustrates it:


 


calculator.png


 


The app consists of a few services talking together: MathFanBoy, a console app, randomly calls the arithmetic operations, while the percentage operation itself also calls multiplication and division. The goal of this app was to generate traffic and show how Linkerd helped us see in near real time what’s going on. I also purposely introduced exceptions by performing divisions by zero, to demo how Linkerd (or any other mesh) helps spot errors. Feel free to clone the repo and try it out on your end if you want to test what is later described in this post. I have now created the exact same app using Dapr, which is made available here. Let us now dive into the technical details.


Diving into the technical differences


Invisible to the application code vs code awareness


As stated earlier, an application is agnostic to whether or not it is injected by a Service Mesh. If you look at the application code of the Linkerd Calculator, you won’t find anything related to Linkerd. The magic happens at deployment time, where we annotate our K8s deployment to make sure the application gets injected by the Mesh. On the other hand, the application code of the Dapr calculator is directly impacted in multiple ways:


 


– While I could use a mere .NET Console App for the Linkerd Calculator, I had to turn MathFanBoy into a web host to comply with the Dapr app initialization channel. However, because MathFanBoy generates activity by calling random operations, I could not just turn it into an API, so I had to run different tasks in parallel. Here are the most important bits:


 


 

class Program
{
    static string[] endpoints = null;
    static string[] apis = new string[5] { "addition", "division", "multiplication", "substraction", "percentage" };
    static string[] operations = new string[5] { "addition/add", "division/divide", "multiplication/multiply", "substraction/substract", "percentage/percentage" };

    static async Task Main(string[] args)
    {
        // Build and run the web host so that Dapr can initialize its app channel
        var host = CreateHostBuilder(args).Build();
        var runHostTask = host.RunAsync();

        // In parallel, generate random calls to the operations through the Dapr client
        var loopTask = Task.Run(async () =>
        {
            while (true)
            {
                var pos = new Random().Next(0, 5);
                using var client = new DaprClientBuilder().Build();
                var operation = new Operation { op1 = 10, op2 = 2 };
                try
                {
                    var response = await client.InvokeMethodAsync<Operation, string>(
                        apis[pos],       // The name (app ID) of the Dapr application
                        operations[pos], // The method to invoke
                        operation);      // The request payload

                    Console.WriteLine(response);
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.ToString());
                }

                await Task.Delay(5000);
            }
        });

        await Task.WhenAll(runHostTask, loopTask);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

 


The first statements of Main build and run the web host. The loop task then generates random calls to the operations, and here again we have another difference: the application uses the Dapr client’s InvokeMethodAsync to perform the calls. As you might have noticed, the application does not need to know the URL of these services. Dapr will discover where the services are located, thanks to its Service Discovery feature. The only thing we need to provide is the app ID and the operation that we want to call. With the Linkerd calculator, I had to know the endpoints of the target services, so they were injected through environment variables during the deployment. The same principles apply to the percentage operation, which is a true API. I had to inject the Dapr client through Dependency Injection:
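Under the hood, that DaprClient call resolves to the sidecar’s documented HTTP service-invocation endpoint, so the same call could be made with any HTTP client. A minimal sketch of the URL shape (shown in Python for brevity):

```python
def dapr_invoke_url(app_id: str, method: str, dapr_http_port: int = 3500) -> str:
    """Dapr's service-invocation endpoint: the caller only names the target
    app ID and method; the local sidecar resolves the actual address of the
    target (service discovery)."""
    return f"http://localhost:{dapr_http_port}/v1.0/invoke/{app_id}/method/{method}"
```

For example, calling the multiplication service never requires its address, only its app ID.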


 


 

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers().AddDapr();
}

 


 


In order to get an instance through the controller’s constructor:


 


 

public PercentageController(ILogger<PercentageController> logger, DaprClient dapr)
{
    _logger = logger;
    _dapr = dapr;
}

 


 


and use that instance to call the division and multiplication operations from within another controller operation, again using the invoke method as for MathFanBoy. As you can see, the application code explicitly uses Dapr and must comply with some Dapr requirements. Dapr has many features other than Service Discovery, but I’ll stick to that one since the point is made: a Dapr-injected application must be Dapr-aware, while it is completely agnostic of a Service Mesh.


mTLS


Now things will get a bit more complicated. While both Service Meshes and Dapr implement mTLS as well as fine-grained authorization policies based on the client certificate presented by the caller to the callee, the level of protection of Dapr-injected services is not quite the same as the one from Mesh-injected services. 


 


Roughly, you might think that you end up with something like this:


linkerddaprmtls.png


 


A very comparable way of working between Dapr and Linkerd. This is correct, but only to some extent. If we take the happy path, meaning every pod is injected by Linkerd or Dapr, we end up in the above situation. However, in a K8s cluster, not every pod is injected by Dapr or Linkerd. The typical reason why you enable mTLS is to make sure injected services are protected from the outside world. By outside world, I mean anything that is neither Dapr-injected nor Mesh-injected. However, with Dapr, nothing prevents the following situation:


 


daprbypass.png


 


The blue path takes the Dapr route and is both encrypted and authenticated using mTLS. However, the green paths, from both a Dapr-injected pod and a non-Dapr pod, still go through in plain text and anonymously. How is that possible?


 


For the blue path, the application goes through the Dapr route (http://localhost:3500/), the port that the daprd sidecar listens on. In that case, the sidecar will find out the location of the target and talk to the target service’s sidecar. However, because Dapr does not intercept network calls, nothing prevents you from taking a direct route, from both a Dapr-injected pod and a non-Dapr one (green paths). So, you might end up in a situation where you enforce a strict authorization policy as shown below:
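To make the two routes concrete, here is an illustrative sketch (in Python, with a hypothetical pod IP and port) of the URLs each path uses:

```python
def sidecar_route(app_id: str, method: str) -> str:
    # Blue path: via the local daprd sidecar; mTLS and access policies apply.
    return f"http://localhost:3500/v1.0/invoke/{app_id}/method/{method}"

def direct_route(pod_ip: str, method: str, port: int = 80) -> str:
    # Green path: straight to the pod IP. Dapr does not intercept traffic,
    # so this call is plain text and no Dapr policy is evaluated.
    return f"http://{pod_ip}:{port}/{method}"
```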


 


 

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: multiplication
  namespace: dapr-calculator
spec:
  accessControl:
    defaultAction: deny        
    trustDomain: "public"
    policies:
    - appId: mathfanboy
      defaultAction: allow
      trustDomain: 'public'
      namespace: "dapr-calculator"
    - appId: percentage
      defaultAction: allow
      trustDomain: 'public'
      namespace: "dapr-calculator"

 


 


where you only allow MathFanBoy and Percentage to call the multiplication operation, and yet have other pods bypass the Dapr sidecar, which ultimately defeats the policy itself. Make no mistake: the reason why we define such policies is to enforce a certain behavior, and I don’t have peace of mind if I know that other routes are still possible.


So, in summary, Dapr’s mTLS and policies are only effective if you take the Dapr route but nothing prevents you from taking another route.


 


Let us see how this works with Linkerd. As stated on their web site, Linkerd also does not enforce mTLS by default and has added this to its backlog. However, with Linkerd (and even more easily with Istio), we can make sure that only authorized services can talk to meshed ones. So, with Linkerd, we would not end up in the same situation:


linkerd-no-bypass.png


 


The first thing to notice is that we simply use the service name to contact our target, because in this case there is no Dapr route nor any service discovery feature. Linkerd leverages the Ambassador pattern, which intercepts all network traffic entering and leaving a pod. Therefore, when the application container of a Linkerd-injected pod tries to connect to another service, Linkerd’s sidecar performs the call to the target, which lands on the other sidecar (provided the target is indeed a Linkerd-injected service). In this case, no issue. Of course, as with Dapr, nothing prevents us from calling the pod IP of the target directly. Yet, from an injected pod, the Linkerd sidecar will intercept that call. From a non-injected pod, there is no outbound sidecar, but our target’s sidecar will still handle inbound calls, so you can’t bypass it. Because Linkerd does not enforce mTLS by default, it will let the call through, unless you define fine-grained authorizations as shown below:


 


 


 

apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: rest-calculator
  name: multiplication
spec:
  podSelector:
    matchLabels:
      app: multiplication
  port: 80
  proxyProtocol: HTTP/1

---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: rest-calculator
  name: multiplication-from-mathfanboy
spec:
  server:
    name: multiplication
  client:
    meshTLS:
      identities:         
        - mathfanboy
        - percentage

 


 


In this case, only MathFanBoy and Percentage will be allowed to call the multiplication operation. In other words, Linkerd allows us to enforce mTLS, whatever route is taken. With Istio, it’s even easier, since you can simply enforce mTLS through the global mesh config. You do not even need to specify explicit authorization policies (although it is a best practice). Just to illustrate the above diagrams, here are some screenshots showing these routes in action:


 


I’m first calling the multiplication operation from the addition pod, while we told Dapr that only MathFanBoy and Percentage could call multiplication. As you can see, the Dapr policy kicks in and forbids the call as expected.


dapr-to-dapr-access-policy.png


but while this policy is defined, I can still call the multiplication using a direct route (pod IP):


dapr-nodaprroute-to-dapr.png


and the same applies to non-injected pods of course.


 


With the Linkerd policy in place, however, there will be no way to call multiplication other than from MathFanBoy and Percentage. For the sake of brevity, I won’t show you the screenshots, but trust me, you will be blocked if you try.


 


Let us now focus on the injection process which will clarify what is going on behind the scenes.


Injection process Dapr vs Service Mesh


Both Dapr and Service Meshes will inject application pods according to annotations. They both have controllers in charge of injecting their victims. However, when looking at the lifecycle of a Dapr-injected pod as well as a Linkerd-injected pod, we can see noticeable differences.


When injecting Linkerd to an application, in plain Kubenet (not using the CNI plugin), we notice that Linkerd injects not only the sidecar but also an Init Container:


 


stephaneeyskens_0-1704463377022.png


 


When looking more closely at the init container, we can see that it requires a few capabilities, such as NET_ADMIN and NET_RAW, and that is because the init container rewrites the pod’s iptables rules to make sure network traffic entering and leaving the pod is captured by Linkerd’s sidecar. When using Linkerd together with a CNI plugin, the same principle applies, but the iptables rules are not rewritten by an init container. No matter how you use Linkerd, all traffic is redirected to its sidecar, which means the sidecar cannot be bypassed.


 


linkerdinit.png


 


When injecting Dapr, we see that there is no Init Container and only the daprd container (sidecar) is injected:


stephaneeyskens_1-1704463458337.png


There is no rewrite of any iptables rule, meaning that the sidecar can be bypassed without any problem, thus bypassing Dapr routes and Dapr policies. In other words, we can easily escape the Dapr world.


Wrapping up


As stated initially, I mostly focused on the impact of Dapr or a Service Mesh on the application itself, and how the overall protection given by mTLS varies according to whether you use Dapr or a Service Mesh. I hope it is clear by now that Dapr is definitely an application framework that infuses the application code, while a Service Mesh is completely transparent to the application. Note that the latter is only true when using a decent Service Mesh. By decent, I mean something stable, performant and reliable. I was recently confronted with a Mesh that I will not name here, and it was a true nightmare: it kept breaking the application.


 


Although Dapr and Service Meshes seem to have overlapping features, they do not cover workloads in the same way. With regards to the initial question about when to use Dapr or a Service Mesh, I would take the following elements into account:


 


– For distributed architectures that are also heavily event-driven, Dapr is a no-brainer, because Dapr brings many features to the table to interact with message and event brokers, as well as state stores. Yet, Service Meshes could still help measure performance, spot issues and load balance traffic by understanding protocols such as HTTP/2, gRPC, etc. Meshes would also help in the release process of the different services, splitting traffic across versions, etc. 


– For heterogeneous workloads, with a mix of APIs, self-hosted databases, self-hosted message brokers (such as Rabbit MQ), etc., I would go for Service Meshes.


– If the trigger of choosing a solution is more security-centric, I would go for a Service Mesh


– If you need to satisfy all of the above, I would combine Dapr and a Service Mesh for microservices, while using a Service Mesh only for the other types of workloads. However, when combining them, you must consider the following aspects:

  – Disable Dapr’s mTLS and let the Service Mesh manage it, including fine-grained authorization policies. Beware that by doing so, you would lose some Dapr functionality, such as defining ACLs on the components.

  – Evaluate the impact on overall performance, as you would have two sidecars instead of one. From that perspective, I would not mix Istio and Dapr together, unless Istio’s performance dramatically improves over time.

  – Evaluate the impact on running costs, because each sidecar will consume a certain amount of CPU and memory, which you will have to pay for.

  – Assess whether your Mesh goes well with Dapr. While an application is agnostic to a mesh, Dapr is not, because Dapr also manipulates K8s objects such as K8s services, ports, etc. There might be conflicts between what the mesh is doing and what Dapr is doing. I have seen Dapr and Linkerd used together without any issues, but I’ve also seen some Istio features being broken because of Dapr naming its ports dapr-http instead of http. I reported this problem to the Dapr team 2 years ago, but they didn’t change anything.


 

Orchestrate your WFM solution with Dynamics 365 Customer Service


This article is contributed. See the original author and article here.

A well-orchestrated workforce is the backbone of any successful customer service endeavor. This requires a systematic and holistic approach to Workforce Management (WFM), taking into account the diverse needs of customers, the fluctuating demands of the market, and the ever-changing nature of business operations.

WFM holds together the intricate machinery of customer support, ensuring operational efficiency and exceptional customer experiences. In a dynamic environment, where seamless interactions are critical, workforce management goes beyond the simple task of staffing and extends to the strategic alignment of resources, skills, and time.

Businesses choose a workforce management solution based on their unique challenges, such as compliance with labor laws. Microsoft understands that customer scenarios vary, and hence offers an open approach to incorporating the right WFM solutions. This gives customers unparalleled flexibility and efficiency in managing their workforce when using Dynamics 365 Customer Service.

WFM adapter from TTEC Digital for Dynamics 365 Customer Service

As a first step, Microsoft has partnered with TTEC Digital to offer an enhanced adapter that connects Dynamics 365 Customer Service with four leading WFM providers: Calabrio, Verint, NICE and Alvaria. The adapter is bidirectional, enabling seamless data transfer between the systems. It offers features such as real-time adherence reporting and historical reporting. Users can forecast demand on supported channels, namely inbound voice, SMS, email, chat and digital messaging, and staff accordingly.

With the enhanced adapter, organizations can use the schedule sync feature to seamlessly import schedules created in the WFM system directly into the agent calendar in Dynamics 365 Customer Service. This functionality empowers agents to conveniently review their daily schedules including breaks, training sessions, and other activities directly in Dynamics 365 Customer Service, eliminating the need to navigate to an external WFM system. This not only boosts individual performance but also contributes to overall team efficiency.


Currently, Schedule Sync is supported when using the adapter with Calabrio’s WFM system. Microsoft plans to expand support for other WFM providers.

Learn more about the WFM adapter from TTEC and watch a short video demonstration. Also, explore additional information such as pricing and buying options by checking out the TTEC WFM Adapter on Microsoft AppSource.

Connect any third-party WFM with Dynamics 365 Customer Service

The extensible nature of the Dynamics 365 platform gives organizations a publicly consumable Dataverse API. It offers maximum flexibility and customization for connecting WFM solutions with Dynamics 365 Customer Service.

For a detailed understanding including design architecture, entity details, and API specifications, please refer to this guide. Sample codes are available in the GitHub repository to expedite your journey.

What’s next

Microsoft is committed to an open and flexible approach to bringing more WFM adapters to Microsoft AppSource and enhancing the existing adapter from TTEC. Microsoft expects to offer continued API enhancements to support any third-party WFM connections.

Dynamics 365 Customer Service offers a native forecasting capability, currently in public preview. The feature empowers customers to predict both volume and demand for contact centers. We plan to enhance and expand on this capability with additional advancements, providing customers with more powerful tools for forecasting.

Stay tuned as Dynamics 365 Customer Service continues to evolve and deliver cutting-edge capabilities in WFM that anticipate and meet the ever-changing demands of the modern business landscape. Your journey to enhanced workforce management with Dynamics 365 Customer Service has just begun.

Learn more

To learn more about agent forecasting in Dynamics 365 Customer Service, read the documentation: Forecast agent, case, and conversation volumes in Customer Service | Microsoft Learn

The post Orchestrate your WFM solution with Dynamics 365 Customer Service appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Practice mode is now available in Microsoft Forms


This article is contributed. See the original author and article here.

We’re excited to announce that Forms now supports practice mode, enhancing students’ learning process by offering a new way to review, test, and reinforce their knowledge.  Practice mode is only available for quizzes. You can also try out practice mode from this template.


 


Practice mode


Instant feedback after answering each question
In practice mode, questions will be displayed one at a time. Students will promptly receive feedback after answering each question, indicating whether their answer is right or wrong.


 


Instant feedback after answering each question


Try multiple times for the correct answer
If students provide an incorrect answer, they will be given the opportunity to reconsider and make another attempt until they arrive at the correct one, allowing for immediate re-learning, and consequently strengthening their grasp of specific knowledge.


 


Try multiple times to get the correct answer


Encouragement and autonomy during practice
Whether students answer a question correctly or not, they will receive an encouraging message, giving them a positive practice experience. And they have the autonomy to learn at their own pace. If they answer a question incorrectly, they can choose to retry, view the correct answer, or skip this question.


 


Encouragement message and other options


Recap questions
Once students finish the practice, they can recap all the questions, along with the correct answers, providing a comprehensive overview to help gauge their overall performance.


 


Recap questions


Enter practice mode
Practice mode is only available for quizzes. You can turn it on via the “…” icon in the upper-right corner. Once you distribute the quiz, recipients will automatically enter practice mode. Try out practice mode from this template now!


 


Enter practice mode


 

Knowledge management best practices for Copilot ingestion


This article is contributed. See the original author and article here.

Generative AI capabilities are rapidly changing the customer service space. You can use Copilot in Dynamics 365 Customer Service today to help agents save time on case and conversation summarization as these features do not require your organization’s support knowledge. However, before agents can use Copilot to answer questions and draft emails, you need to ensure Copilot is using accurate knowledge content. 

Good knowledge hygiene is key to bringing Copilot capabilities to life. For Copilot to successfully ingest, index, and surface the right knowledge asset, it’s important to ensure each asset meets defined ingestion criteria. Also, preparing knowledge assets for Copilot ingestion is not a finite process. It is essential to keep ingested knowledge assets in sync with upstream sources, and use proper curation and governance practices. 

While every organization has its own unique systems, we aim to provide a general set of best practices for creating and maintaining your Copilot corpus. We’ll cover four main topics here: 

  • Defining the business case 
  • Establishing data quality and compliance standards 
  • Understanding the content lifecycle and integrating feedback 
  • Measuring success

Defining the business case

It is imperative that you look at your organization’s goals holistically to ensure they align with the content you intend to surface. Consult with different roles in each line of business to capture the different types of content they already use or will need. Determine the purpose of each content element to ensure its function and audience are clear. Look at your organization’s common case management workflows that require knowledge to see the greatest impact on productivity. 

You may want to take a phased approach to roll out Copilot capabilities to different parts of your organization. The use case for each line of business will enable you to create a comprehensive plan that will be easier to execute as you include more agents. Administrators can create agent experience profiles to determine which groups of agents can begin using Copilot and when. 

For example, there may be some lines of business that adhere more closely to your established content strategy. Consider deploying to these businesses first. This will create an opportunity to observe and account for variables within your businesses that are not yet visible on the surface. 

Establishing data quality and compliance standards 

Identify the correct combination of content measures and values before bringing content into your Copilot corpus. Careful preparation at this stage will ensure Copilot surfaces the right content to your agents.  

The following is a general list of must-haves for high-performing knowledge content: 

  • Intuitive title and description 
  • Separate sections with descriptive subheadings  
  • Plain language, free of technical jargon 
  • No knowledge asset attachments; convert them into individual knowledge assets 
  • No excessively long knowledge assets; break them into individual knowledge assets 
  • No broken or missing hyperlinks in the content body 
  • Descriptions for any images that appear in knowledge assets; Copilot cannot read text on images 
  • No customer or personal information about employees, vendors, etc. 
  • A review process for authoring, reviewing, and publishing articles 
  • A log of all actions related to ingesting, checking, and removing knowledge assets 

If you’re storing knowledge assets in Dataverse, they should always be the latest version and in Published status.  
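As an illustration only, parts of the checklist above could be automated with a pre-ingestion check. Every field name and threshold below is hypothetical and should be adapted to your own knowledge schema:

```python
def check_asset(asset: dict) -> list:
    """Flag knowledge assets that violate the ingestion criteria above.
    Field names ('title', 'body', 'attachments', 'status') and the
    length threshold are illustrative assumptions."""
    issues = []
    if not asset.get("title"):
        issues.append("missing title")
    if asset.get("attachments"):
        issues.append("attachments present; convert them to separate assets")
    if len(asset.get("body", "")) > 20000:  # arbitrary example threshold
        issues.append("asset too long; split it into smaller assets")
    if asset.get("status") != "Published":
        issues.append("not the latest Published version")
    return issues
```

Running such a check on every asset before ingestion, and logging the results, also helps maintain the audit trail mentioned in the checklist.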

Understanding the content lifecycle and integrating feedback

As mentioned above, clearly defined processes for authoring, reviewing, publishing, synchronizing, curating, and governing knowledge assets will help ensure Copilot surfaces responses based on the most recent knowledge assets. Determine which roles in your organization will author knowledge assets and the review process they will use to ensure accuracy.  

After publishing a knowledge asset, determine how your organization will gather feedback to signal when to update or deprecate the asset. Set an expiration date for each asset so you have a checkpoint at which you can determine whether to update or remove it. 

You can use the helpful response rate (HRR) to gather initial agent feedback. HRR is the number of positive (thumbs-up) ratings for each interaction divided by the total ratings (thumbs up + thumbs down). You can correlate this feedback with the knowledge assets Copilot cites in its responses. Gather more detailed feedback by creating a system that enables users to request reviews, report issues, or suggest improvements.
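The HRR calculation described above is a simple ratio; as a small sketch:

```python
def helpful_response_rate(thumbs_up: int, thumbs_down: int) -> float:
    """HRR = positive ratings divided by total ratings, as defined above."""
    total = thumbs_up + thumbs_down
    return thumbs_up / total if total else 0.0  # no ratings yet -> 0.0
```

For example, 80 thumbs-up and 20 thumbs-down ratings yield an HRR of 0.8.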

Measuring success 

While knowledge management is an ongoing process, so is its measurement. You’ll want to periodically track usage and performance to ensure Copilot is useful to agents and identify areas for improvement.  

Tracking analytics

First, you can measure the performance of your knowledge content based on the purpose you outlined at the beginning. You can view some metrics directly within your Customer Service environment. To view Copilot analytics, go to Customer Service historical analytics and select the Copilot tab. Here, comprehensive metrics and insights provide a holistic perspective on the value that Copilot adds to your customer service operations. 

You can also build your own Copilot interaction reports to see measurements such as number of page views for each knowledge asset, the age of the asset, and whether the agent used the cited asset. The asset age is based on the date it was ingested by Copilot, so it’s important to ensure publication and ingestion cycles align.

Serving business processes

Other key metrics you’ll want to consider are more closely tied to your organization’s business processes. Some examples include:

  • Number of cases related to a knowledge article 
  • Number of escalations prevented 
  • Time saved when agents access these articles 
  • Costs saved from reduced escalations and troubleshooting time 
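
As a sketch of how metrics like these might roll up into a savings estimate (the function name and every dollar figure below are hypothetical, not Microsoft guidance):

```python
def estimated_cost_savings(escalations_prevented: int,
                           cost_per_escalation: float,
                           agent_hours_saved: float,
                           hourly_agent_cost: float) -> float:
    """Rough estimate: avoided escalation costs plus the value of agent time saved."""
    return (escalations_prevented * cost_per_escalation
            + agent_hours_saved * hourly_agent_cost)

# Hypothetical inputs: 120 escalations avoided at $35 each,
# plus 300 agent-hours saved at $28/hour.
print(estimated_cost_savings(120, 35.0, 300.0, 28.0))
```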

Overall, introducing and expanding Copilot capabilities in your CRM is an iterative and ongoing process. Include stakeholders from every role to ensure your organization is using Copilot to help solve the right problems and enhance the agent experience.  

AI solutions built responsibly

Enterprise-grade data privacy at its core. Azure OpenAI offers a range of privacy features, including data encryption and secure storage. It allows users to control access to their data and provides detailed auditing and monitoring capabilities. Copilot is built on Azure OpenAI, so enterprises can rest assured that it offers the same level of data privacy and protection.  

Responsible AI by design. We are committed to creating responsible AI by design. Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We are putting those principles into practice across the company to develop and deploy AI that will have a positive impact on society.

Learn more

For more information, read the documentation: Use Copilot to solve customer issues | Microsoft Learn 

The post Knowledge management best practices for Copilot ingestion appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Copilot in Microsoft Word – Copilot Snack Show Me How Video

This article is contributed. See the original author and article here.

“One of the challenges that many employees face in their daily work is creating and managing documents efficiently and effectively. Whether it is writing a report, a proposal, a memo, or a presentation, there are often multiple tasks involved, such as researching, drafting, editing, formatting, and sharing. These tasks can take up a lot of time and energy, especially if the documents are complex, lengthy, or require collaboration with others.


Microsoft 365 Copilot in Microsoft Word is a new feature that aims to help employees streamline their document creation and management process, by providing them with smart and personalized assistance. Copilot in Microsoft Word leverages artificial intelligence and natural language processing to understand the context and intent of the user, and to offer relevant suggestions, insights, and actions. Some of the benefits of using Copilot in Microsoft Word include:


– Document generation: Copilot in Microsoft Word can help users generate high-quality content faster and easier, by suggesting text, images, tables, charts, and other elements based on the topic, style, and tone of the document. Users can also use voice commands to dictate their content, and Copilot in Microsoft Word will transcribe and format it accordingly.


– Document transformation: Copilot in Microsoft Word can help users transform their existing document content by rewriting text, making wholesale changes, transforming text into tables, and more.


– Document queries: Copilot in Microsoft Word can help users find answers to their questions and queries within their documents, by using natural language and semantic search. Users can ask Copilot in Microsoft Word to highlight, summarize, explain, or provide additional information about any term, concept, or data point in their documents, and Copilot in Microsoft Word will display the results in a sidebar or a pop-up window.


– Document management: Copilot in Microsoft Word can help users organize and manage their documents more efficiently and effectively, by suggesting tags, categories, folders, and metadata based on the content and purpose of the document. Users can also use Copilot in Microsoft Word to share, sync, and collaborate on their documents with others, by using Copilot in Microsoft Word’s integration with OneDrive, SharePoint, Teams, and Outlook.” – Microsoft 365 Copilot


In this video I walk you through document generation, document transformation, chatting with Copilot about your document, and creating a summary of your document.


Resources:



Thanks for visiting – Michael Gannotti LinkedIn | Twitter



What’s New in Microsoft Teams | December 2023

This article is contributed. See the original author and article here.

Happy New Year! Welcome to our December 2023 update of What’s New in Microsoft Teams. This month, we are excited to showcase 49 new features and enhancements that will help you collaborate more effectively, streamline your workflow, and stay connected with your team. From the new Microsoft Teams app in VDI to improvements in chats, webinars, town halls, Teams Phone, Teams Devices, Platform, and Frontline Workers, there is something for everyone. Read on to learn more about the latest updates and how they can benefit you and your organization.


 


My personal favorites are custom channel announcement backgrounds, improved and domain-specific search, and calling shortcuts for Teams Phone. Custom channel backgrounds let you generate unique images for your channels with AI, removing the need for external applications to create new images. Improved search within chats and channels helps you quickly find specific threads in long conversations, and domain-specific search lets you filter by file type or a person’s name to narrow results and find exactly what you need. And updated calling shortcuts for Teams Phone let you perform repetitive tasks quickly while using a Teams Phone.



And every month we highlight new devices that are certified and ready to use for Teams. You can find more devices for all types of spaces and uses at aka.ms/teamsdevices.


 


Please read about all the updates and let me know your thoughts! I’ll do my best to respond and answer questions as they come up.


 


New Teams


Chat and Collaboration


Webinars and Town Halls


Teams Phone


Teams Rooms and Devices


Platform


Collaborative Apps


Frontline Worker Solutions


Virtual Appointments


Teams for Education


 

 



New Teams


New Teams for virtual desktop infrastructure (VDI)
The new Teams app is generally available for VDI customers. You can now experience and enjoy all the benefits of the new Teams app within virtual desktops. The new Teams in VDI has feature parity with classic Teams and offers improved performance, reliability, and security.



The new Teams app will be faster in terms of overall responsiveness, such as launching the app and joining meetings while consuming less memory and disk space than classic Teams in the virtualized environment. The preview version of new Teams was up to two times faster while using 50% less memory.



Moving forward, new features and capabilities as well as enhancements to existing features will be available exclusively in new Teams. We encourage our customers to get started on their new Teams journey in the virtualized environment today.



With the new Teams app, you can use one installer for both desktop and VDI and have the option to update automatically in VDI. Learn more about upgrading to new Teams for VDI.


 


 


Chat and Collaboration


Custom channel announcement background
Channels bring people, content, and tools together to cultivate workplace knowledge, improve teamwork, and co-innovate in a single place. Each channel post is as important as the next, so how do you make one stand out? Now you can create a personalized announcement background that harnesses creativity and engages teams in new ways. To create a new image, type a description and use the power of AI to generate a personalized background. Creating an image using generative AI is available with Teams Premium or a Microsoft 365 Copilot license.




 


Loop components in channels
Stay in the flow of your work and keep your content synced with Loop components in channels. Now when you compose a post in a channel, you can easily co-create and collaborate with Loop components like tables, lists, progress tracker and more.




 


An improved search experience in chat and channels
You can now benefit from an enhanced in-channel and in-chat search experience, including a new “find in channel” search button that is integrated into the channel information pane. Search within a specific chat or channel to have the results displayed in the right pane of the screen. In a single view, you can quickly glance at your search results as well as the channel or chat interface, without leaving your flow of work. After selecting a search result, you are taken to that specific message in the channel or chat, so you quickly gain the full context of the message rather than only having the search blurb displayed.




 




 


Chat button on missed call activity
Easily get in touch with your contacts after a missed call. A new chat button is now added to your activity feed, enabling you to follow up on a missed call and start a chat with just one click.


 


Files app updated to OneDrive app experience in Teams
The Files app, accessed from the left side of the Teams desktop client, is now updated with the OneDrive app experience, bringing performance improvements, more views, and the latest features of OneDrive to both classic and new Teams. As part of this change, the Files app on the left side of the new Teams desktop client is now called the OneDrive app. Learn more about the next generation of OneDrive.




 


Domain-specific search
New domain-specific search filters like “files”, “group chats” or “teams and channels” help you narrow down search results and quickly discover the information you seek. In addition, you can now enter a stakeholder’s name, and with domain-specific search efficiently find shared files and mutual group chats.






 


Copilot in chat and channels conversation history
You can now view your past Copilot conversation history if you use Copilot in Microsoft Teams. Open the Copilot flyout and type in a question. When you close and reopen the Copilot flyout, you will see your previous conversations with Copilot.


 


 

Webinars and Town Halls


New webinar and town hall templates added to Outlook Teams add-in
New meeting templates for webinars and town halls are available in the Teams meeting dropdown menu in the Calendar tab in Outlook with the Teams add-in enabled. This allows organizers to set up webinars and town halls directly within Outlook and these will show up on calendars in both Outlook and Teams apps.




 


 

Teams Phone


Calling shortcuts for Teams Phone
External keyboard shortcuts help improve the efficiency of repetitive tasks and can be easier to navigate if you have mobility or vision disabilities. Available now, updated calling shortcuts for Teams Phone help you initiate calls more intuitively while reducing the potential for error.



Windows:
Alt + Shift + A: Initiate an Audio Call
Alt + Shift + V: Start a Video Call



MacOS:
Option + Shift + A: Initiate an Audio Call
Option + Shift + V: Start a Video Call


 


Learn more about keyboard shortcuts for Microsoft Teams.


 


Group chat call confirmation
While the ability to initiate a call with all members of a group chat can be a time-saver when time is critical, the new group chat call confirmation helps reduce the likelihood of placing an accidental call. Learn more about starting a call from a chat.




 


Teams Phone Mobile now available in Norway
Teams Phone Mobile enables you to integrate business mobile calling with Teams for flexible, productive, and secure mobile communications. This solution is now available to customers in Norway via Telia.




 


 

Teams Rooms and Devices


Find devices that are certified for Teams for all types of spaces and uses at aka.ms/teamsdevices.



Enable People Recognition with an intelligent camera
Users will be able to enroll their face and create a face profile using the new enrollment process in the Teams desktop client. The face profile is used in meetings in a Teams Room where an intelligent camera capable of People Recognition is deployed, to recognize in-room attendees and label their identity for all meeting participants, both in-room and remote.


 


Automatic updates for the Teams app on Android-based Teams devices
Android-based Teams devices will receive automatic updates of the Teams app without the need for any manual intervention. Administrators can manage the automatic updates by organizing devices into update phases or pausing the rollout temporarily, if needed, from Teams Admin Center.


 


Synced updates for Microsoft Teams Rooms on Android devices and paired consoles
When Microsoft Teams Rooms on Android devices are updated from the Teams Admin Center, their paired consoles will get updated in tandem, ensuring a seamless experience.


 


Manage Microsoft Surface Hub as a Teams Rooms on Windows device
With the transition of Microsoft Surface Hub devices to the Teams Rooms on Windows platform, Admins can now manage Surface Hub devices as Teams Rooms on Windows devices in the Teams Admin Center and Teams Rooms Pro Management (available for Teams Rooms Pro license customers).


 


Enabling People Recognition in a Teams Meeting with a desktop client face enrollment process
You can enroll your face and create a face profile using the new enrollment process in the Teams desktop client. The face profile is used in Teams Rooms meetings where an Intelligent Camera capable of People Recognition is deployed to recognize in-room attendees and label their identity for all meeting participants, both in-room and remote.


 


Teams Phone and Teams Rooms licenses in device store
IT administrators can already browse and purchase Certified for Teams devices in the device store in Teams Admin Center. This update will enable IT admins to try and buy Teams Phone and Teams Rooms licenses from the device store.



Logitech Sight
Sight is a certified for Teams room system accessory that pairs with Logitech Rally Bar or Rally Bar Mini to provide remote participants with a front-and-center view of in-person interactions. Sight ensures a more equitable meeting experience by seamlessly framing and presenting multiple active speakers, dynamically replacing and displaying individuals as they contribute to the conversation around the table. Discover Logitech Sight.




 


Logi Dock Flex
Specifically designed for hoteling, hot desking, and flex desk environments, the newly Certified for Teams Logi Dock Flex brings together the reservation and booking experience users already know from booking meeting rooms with a simple plug-and-play docking experience. Users simply book their desk through Outlook or Microsoft Teams (or reserve it ad hoc), find their desk, plug in, and get right to work. In addition, Logi Dock Flex can be easily managed through the Teams Admin Center and Logitech Sync.



Learn more about the Logi Dock Flex by Logitech.




 


Neat Board 50 for Microsoft Teams
Neat Board 50, certified for Teams, is a powerful, pioneering all-in-one 50-inch touchscreen video device that’s easy to install, set up and use. Designed for the flexible future of work, it adapts to whenever, wherever and in whatever way you need to meet or express your ideas on Microsoft Teams in today’s modern workspaces. For greater freedom and accessibility, you can pair it with a unique adaptive stand, which lets you quickly move the device from space to space and adjust the screen up or down for optimal use and viewing. At the same time, the included pressure-sensitive Neat Active Marker allows you to enjoy more natural, friction-free whiteboarding. Discover the cutting-edge capabilities of Neat Board 50.




 


Nureva HDL410 system
Certified for Teams Rooms on Windows, the HDL410 system delivers full-room pro audio performance in extra-large spaces up to 35′ x 55′ (10.7 x 16.8 m). The solution’s powerful processors and expanded memory unlock advanced audio capabilities in Nureva’s patented Microphone Mist™ technology. The HDL410’s unified coverage map enables the physical mics from the two microphone and speaker bars to be processed together, creating a giant microphone array that spans the entire room. The result is everyone in the room and those participating remotely are heard more consistently and clearly. The HDL410 is a USB plug and play device and works seamlessly with all the other AV technology products you might already have in your hybrid spaces. Learn more about the Nureva HDL410 system.




 


Lenovo ThinkSmart View Plus Monitor
The Lenovo ThinkSmart View Plus Monitor is newly Certified for Teams Peripheral mode and designed for Teams meetings and collaboration. It can be used as a secondary display for video calls or as a standalone device for chat, calendar, and personal productivity. Learn more about the Lenovo ThinkSmart View Plus Monitor.




 


Poly CCX EM60 side car
Designed to plug-and-play seamlessly with the Poly CCX 505, 600, and 700 Series Microsoft Teams certified phones, the CCX EM60 is the first expansion module from Poly that is certified for Microsoft Teams. For users who need to manage multiple Teams phone calls with ease, the Poly CCX EM60 expansion module is the perfect solution. With intuitive controls and a 5” color LCD screen, the EM60 offers up to 20 line keys across 3 pages for easy contact tracking, and the option to connect up to 3 modules for a comprehensive desk or wall communication station.




 


HP 620 FHD Webcam
The newly Certified for Teams HP 620 FHD Webcam is a high-quality webcam that delivers clear and detailed video in Full HD resolution. It is perfect for video conferencing, streaming, and recording. With its wide field of view and autofocus, it captures everything in sharp detail. The built-in microphone ensures clear audio, while the plug-and-play USB connectivity makes it easy to set up and use. Learn more about HP 620 FHD Webcam.




 


HP 960 4k Streaming Webcam
The HP 960 4K Streaming Webcam delivers stunning 4K Ultra HD video quality. Its advanced features, including a wide field of view, autofocus, and light correction technology, ensure that you always look your best on camera. The built-in stereo microphones provide crystal-clear audio, while the easy plug-and-play USB connectivity makes setup a breeze. Learn more about the HP 960 4K Streaming Webcam.




 


Anker PowerConf S3 Speakerphone
The Anker PowerConf S3 Speakerphone is a portable conference speaker Certified for Microsoft Teams that delivers crystal-clear audio for your meetings and calls. Its six microphones with voice-enhancing technology ensure that everyone can be heard, while the noise-cancelling technology reduces background noise. The USB-C connectivity makes it simple to use, while the long battery life ensures that you can use it for extended periods without needing to recharge. Learn more about the Anker PowerConf S3 Speakerphone.




 


Dell Wired Headset – WH3024
The Dell Wired Headset – WH3024 is a high-quality headset that delivers clear and immersive audio. With its comfortable design and noise-cancelling microphone, it ensures that you can communicate clearly and effectively. The easy plug-and-play connectivity makes it simple to use, while the in-line controls allow you to adjust the volume and mute the microphone with ease. Learn more about the Dell Wired Headset – WH3024.




 


 

Platform


Resource-specific consent apps
Admins will be allowed to pre-approve apps using resource-specific consent (RSC) permissions, so their users can install them even when RSC is otherwise turned off.


 


Teams AI library
The Teams AI library offers developers a suite of code functionalities designed to ease the integration of Large Language Models, empowering them to build rich, conversational Teams apps. It simplifies the process of creating Bots and Message Extensions, as well as interactions with Adaptive Cards for conversational experiences. Additionally, the Teams AI library also aids the migration of existing Bots, Message Extensions, and Adaptive Card functionalities with seamless integration with Large Language Models.


 


Click-to-Chat with Teams App Publishers
Teams Admin Center users can now quickly and easily open a private Teams chat between themselves and a third-party app publisher to directly ask questions on pricing, compliance or other topics as they consider the deployment and/or purchase of the app.


 


 

Collaborative Apps


Autopilot Accounts Payable AI
Autopilot Accounts Payable AI automates the tasks of uploading, assigning, and tracking supplier invoices with AI in Microsoft Teams. It features data extraction to help quickly create accurate invoices for approval and payment, automated reminders when invoices are due, and an Autopilot Accounts Payable bot to help with daily tasks and financial insights. With this app, you can optimize invoice management through single-platform, customized workflows.




 


Carbon Neutral Club Inc
The new app from Carbon Neutral Club Inc brings climate education and action directly into daily work with Teams. This app delivers educational content personalized to each employee, rewards them with points every time they take climate-positive action, and tracks these points with team leaderboards. It creates a space for teams not just to share climate tips, news, and education, but also to take on climate challenges together.




 


Datadog
Datadog is a popular monitoring and security platform for cloud applications. The Datadog app allows you to create, manage and collaborate on incidents all within Teams. The app helps organizations stay on top of IT systems by receiving alerts that include monitor tags and traces on Teams channels. Also, Datadog Workflow Automation helps integrate real-time observability data with automatic remediation. Now with Teams integration you can receive messages and act on decision points from the Teams chat interface itself!




 


emSigner
The emSigner application takes the complexity out of moving documents between departments, customers, partners, suppliers, and employees. This paperless office solution enables you to quickly sign documents with globally accepted, legally valid signatures, within Microsoft Teams. Simply download the document, select your signature, and sign it, eliminating lag in the signing and approval process.


 


Neo Agent
The Neo Agent app brings the power of AI to Managed Service Providers to streamline service desk operations. Neo helps with IT ticket management and resolution by tracking and conversationally summarizing tickets, providing predictive insights, and enhancing overall productivity and efficiency. This AI assistant tackles the key pain points of manually dealing with high volumes of IT tickets.




 


Teamcenter
Siemens introduces a new app designed to elevate collaboration within product engineering and manufacturing through seamless integration with Teamcenter, its leading product lifecycle management (PLM) solution. This innovative tool harnesses the power of generative AI to simplify and enhance the problem-solving process. By enabling the creation and management of problem reports in multiple languages, you can efficiently address challenges within the familiar environment of Microsoft Teams. The intuitive design of Teamcenter ensures a fluid collaboration experience across various devices, empowering teams to effortlessly transform complex problems into innovative solutions. Embrace a smarter, more connected way of working with Siemens Teamcenter and unlock the full potential of collaborative product development and manufacturing.




 


Trello
The Trello app in Teams is a popular collaboration tool used for managing projects through Kanban-style boards and cards. In the latest update, Trello has also integrated with Copilot for Microsoft 365. With this plugin in Teams, you can search for high priority or pending tasks in natural language via Copilot.




 


Zendesk
The Zendesk for Microsoft Teams integration allows you to resolve tickets faster, simplify employee workflows, and boost team performance. The latest update to this app helps simplify support workflows and collaboration with Zendesk ticket management directly in Teams. You can stay up to date with tickets and support activities with real time notifications as well as deploy Zendesk Bots to help resolve commonly asked questions.


 


 

Frontline Worker Solutions


Tasks in my area in Teams mobile
The mobile experience for the Microsoft Planner app in Teams will let you filter to a specific bucket or set of buckets, so you can focus on tasks in your area or department. We expect this simple change to provide greater focus and productivity while maintaining the familiar look and functionality of the Planner app.


 


Full name display in Shifts app
Team members’ full names are now visible in the team schedule. By toggling profile pictures off, managers can quickly identify staff members by their full names, even when names are lengthy.


 


Graph API for day notes in Shifts app
Now, managers have the flexibility to add day notes using the Graph API, in addition to the Shifts app. This powerful capability allows customers to seamlessly integrate relevant day notes from third-party or line-of-business applications, enhancing the Shifts experience.
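
As a sketch of what such a call might look like (the beta endpoint path and payload fields below are assumptions about the Graph Shifts API; verify them against the current Microsoft Graph reference before use):

```python
# Sketch of adding a day note to a team schedule via Microsoft Graph.
# NOTE: the endpoint path and payload shape are assumptions, not confirmed
# by this article; check the Graph Shifts documentation before relying on them.
team_id = "00000000-0000-0000-0000-000000000000"  # placeholder team ID
url = f"https://graph.microsoft.com/beta/teams/{team_id}/schedule/dayNotes"
payload = {
    "dayNoteDate": "2023-12-24",
    "sharedDayNote": {
        "contentType": "text",
        "content": "Holiday schedule: store closes at 3 PM.",
    },
}
# Sending it requires an authenticated POST, for example:
# requests.post(url, json=payload, headers={"Authorization": f"Bearer {token}"})
```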


 


Excel import enhancements in Shifts app
Frontline managers will gain the ability to import an Excel file into Shifts with time off and open shift entities. In addition, those entities will also be supported when managers export schedules, in a format compatible for re-import.


 


Improved Shifts privacy settings
Frontline managers can control how much shift information is visible among their frontline workers, enhancing data privacy and control (e.g. time off reason & notes, shift notes, break duration, activities, and how far back a frontline worker can view their coworkers’ schedules).


 


Import and export time off and open shifts via Excel
Frontline managers can import schedules created or generated in an Excel file into Shifts through the Shifts web and desktop applications. With this, frontline managers will be able to import the following through Microsoft Excel: assigned employee shifts, open shifts for their team, and time off for their employees.


 


Deploy Shifts at scale
Admins can centrally deploy and manage shifts for the entire frontline workforce in the Teams Admin Center. As part of the centralized deployment and management, admins can standardize Shifts settings (open shifts, swap shifts requests, offer shifts requests, time off requests and time clock), identify schedule owners, and create scheduling groups uniformly for all frontline teams at the tenant level.


 


Deploy frontline dynamic teams
Admins can deploy teams at scale for frontline workers using dynamic teams in the Teams Admin Center. Dynamic teams will automate member management to ensure your teams are always up to date with the right users as people enter, move within, or leave the organization using dynamic groups from Entra ID.


 


 

Virtual Appointments


New virtual appointment template added to Outlook Teams add-in
A new meeting template for Virtual Appointments is available in the Teams meeting dropdown menu in the Calendar tab in Outlook with the Teams add-in enabled. This allows schedulers to set up virtual appointments directly within Outlook and these will show up on calendars in both Outlook and Teams apps.




 


 

Teams for Education


School Connection for parents in Microsoft Teams mobile
The School Connection app in Microsoft Teams mobile empowers parents and guardians to engage, support, and monitor their student’s learning at school. As a parent, you can stay up to date on assignments, grades, insights from the past month on digital activity, and more. This release will be generally available to all regions except countries in Europe, the Middle East and Africa; the availability for those countries will be announced at a later date.


 


New Teams for Education available on all platforms
The new Teams app is also generally available for our Education customers. It is now available for desktop on Windows and Mac, and in the Edge and Chrome browsers. Additional browser platform availability will be announced in early 2024. Upgrading to new Teams is quick and easy – no migration is required. For additional resources, including update schedules, different methods on how to update to new Teams, and any known issues, visit New Microsoft Teams for Education.