Now more than ever, organizations need a single, integrated experience that makes working together easier and more engaging for their employees, whether they are all in the same room, remote, or—for many of us—a mix of both. Because of this, Microsoft is happy to announce changes to our Microsoft Teams Rooms licensing model.
Organizations are looking for ways to offer hyper-personalized service and foster deeper relationships with their customers. Customers want to connect to agents who understand their needs. They feel more comfortable talking to an agent who has served them in the past, knows about their issues, and can resolve them quickly. Agents want to serve their customers, meet service-level agreements (SLAs), and receive high satisfaction ratings. The key to achieving all these goals is a good relationship between agents and customers. Preferred agent routing in Microsoft Dynamics 365 Customer Service can help build those relationships.
Introducing preferred agent routing in Customer Service
With the preferred agent routing feature in Dynamics 365 Customer Service, organizations can offer relationship management to their customers. Once it's set up, the unified routing system connects customers to their preferred agents, ensuring a delightful service experience.
Administrators can associate up to three agents with a contact to help ensure that if a customer’s most preferred agent isn’t available, they can still be connected to an agent who is familiar with their past interactions and preferences.
Fallbacks ensure no customer goes unattended
In case no preferred agent is available, fallback options ensure that a customer will never be left unattended. Administrators can choose one of two fallback options:
Next best agent based on assignment logic. The system tries to find an available agent based on the assignment rules already in place. If no preferred agent is found or available, the work item is assigned to another available agent, ensuring that the customer is always served. We suggest this option for live chat conversations and voice channel calls.
No one. Let the work remain unassigned in the queue. A supervisor or available agent must manually pick up the work item. We suggest this option for asynchronous or social channels, where the customer doesn’t directly connect to a live agent.
Preferred agent routing in action: Contoso Coffee’s Gold membership plan
To see how your organization can benefit from preferred agent routing, consider the following scenario.
Contoso Coffee recently started offering a Gold membership plan. Gold membership comes with premium customer service, which includes a dedicated relationship manager (preferred agent).
Madison calls the support line to report a problem with her old coffee machine. She has such a pleasant interaction with the support agent, Malik, that she buys a Contoso Café 100 and enrolls in the Gold membership plan.
When Kayla, the customer success manager, gets the new subscription information, she reviews Madison’s customer service history. Noticing the positive sentiment in her interaction with Malik, she associates Malik as Madison’s preferred agent.
A few months later, Madison buys a bag of coffee beans and receives the wrong product. She initiates a chat to get a replacement. The chat is automatically routed to Malik, Madison’s preferred agent. Knowing her purchase history and remembering that she had praised Arabica coffee beans in their previous interaction, Malik suggests replacing the incorrect product with Contoso’s new Arabica beans, recommended for the Café 100.
Madison feels happy and satisfied with the personalized support she received, without having to explain her issue to different agents. Malik feels happy to receive a high satisfaction rating from Madison.
Learn more
Check out the documentation for more information about automatic assignment and preferred agent routing in Dynamics 365 Customer Service.
Introduction
The for-each loop is a common construct in many languages, allowing each item in a collection to be processed the same way. Within Logic Apps, for-each loops offer a way to process each item of a collection in parallel, using a fan-out/fan-in approach that, in theory, speeds up the execution of a workflow.
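For reference, a for-each in a Logic Apps workflow definition is just a Foreach action over an array, with an optional concurrency setting that caps the fan-out. A minimal sketch follows; the trigger payload and the inner action are hypothetical placeholders:

```json
"For_each_message": {
  "type": "Foreach",
  "foreach": "@triggerBody()?['messages']",
  "runtimeConfiguration": {
    "concurrency": { "repetitions": 10 }
  },
  "actions": {
    "Process_message": {
      "type": "Compose",
      "inputs": "@items('For_each_message')"
    }
  },
  "runAfter": {}
}
```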
But the for-each loop in Logic Apps, in both the Consumption and Standard SKUs, has been a performance pain point because of how it is executed under the hood. To provide resiliency and distributed execution at scale, performance ends up being sacrificed: a lot of execution metadata must be stored in and retrieved from Azure Storage, which adds network and I/O latency, plus extra compute cost from serialization/deserialization.
Stateless workflows present an opportunity to improve the performance of the for-each loop. Because stateless workflows run in memory and are treated as atomic items, the resiliency and distribution requirements can be removed. This drops the dependency on Azure Storage for storing the state of each action, which eliminates both I/O and networking latency and removes most of the serialization/deserialization costs.
The original for-each loop code was shared between stateful and stateless workflows. But as performance on stateless was not scaling, we rethought the way we execute the for-each loop in the context of a stateless workflow. Those changes almost doubled the performance of the for-each loop in stateless workflows: we achieved a 91% speedup in our benchmark scenario.
Benchmark
To compare the performance of a stateless workflow before and after the code changes, we used a familiar scenario – the one from our previous performance benchmark blog post – modified slightly to process a batch of messages using a for-each loop instead of the split-on technique we used previously. You can find the modified workflow definitions here.
We deployed the same workflow to two Logic Apps – one which used the previous binaries and another running the optimized binaries – and used a payload of 100 messages in an array delivered by a single POST request. The for-each loop turned this into 100 iterations, executing with a concurrency limit of 10. We then measured the time it took for each Logic App to complete all 100 iterations. The total execution time can be found below:
| Batch Execution (100 messages) | Total execution time (seconds) |
| --- | --- |
| Previous binaries | 30.91 |
| Optimized binaries | 16.24 |
As the results above show, the optimized app took 47.6% less time to execute, which corresponds to a 90.7% execution speedup.
Analysis
Two interesting graphs are those of execution delay and jobs per second.
Execution delay is the time difference between when a job was scheduled to be executed and when it was actually executed. Lower is better.
Red = unoptimized
Blue = optimized
From the graph, we see that the unoptimized execution experienced spikes in execution delay. This was due to a synchronization mechanism used to wait for distributed batches of for-each repetitions to complete. We were able to optimize that delay away.
Jobs per second is another metric that we looked at because under the hood, all workflows are translated into a sequence of jobs. Higher is better.
Red = unoptimized
Blue = optimized
We can see that the optimized version remains higher and steadier, meaning that compute resources were more efficiently utilized.
What about Stateful workflows?
As Stateful workflows still run in a distributed manner like Consumption workflows, this optimization is not directly applicable. To maximize the performance of a for-each loop in Stateful, the most important factor is to make sure the app is scaling out enough to handle the workload.
One of the important recommendations when joining a large table (Fact) with a much smaller table (Dimension) is to mention the small table first:
Customers | join kind=rightouter FactSales on CustomerKey
It is also recommended in such cases to add a hint:
Customers | join hint.strategy=broadcast kind=rightouter FactSales on CustomerKey
The hint allows the join to be fully distributed using much less memory.
Joins in PBI
When you create a relationship between two tables from Kusto, and both tables use Direct Query, PBI will generate a join between the two tables.
The join will be structured in this way:
FactSales | join Customers on CustomerKey
This is exactly the opposite of the recommended way mentioned above.
Depending on the size of the fact table, such a join can be slow, use a lot of memory or fail completely.
Recommended strategies until now
In blogs and presentations, we recommended a few workarounds.
I'm mentioning them here because I would like to encourage you to revisit your working PBI reports and reimplement the joins based on the new behavior that was just released.
Import the dimension tables
If the dimension tables are imported, no joins will be used.
Any filters based on a dimension table will be implemented as a where clause on the column that is the basis of the relationship. In many cases, this where clause will be very long:
| where CustomerKey in (123, 456, 678, …)
If you filter on the gender F and there are 10,000 customers with F in the gender column, the list will include 10,000 keys.
This is not optimal, and in extreme cases it may fail.
Join the table in a Kusto function and use the function in PBI
This solution will have good performance, but it requires more understanding of KQL and is different from the way normal PBI tables behave.
Join the tables on ingestion using an update policy
It is the same as the previous method but requires an even deeper understanding of Kusto.
New behavior
Starting with the September version of Power BI Desktop, the Kusto connector was updated, and you can use relationships between dimension tables and the fact table as you would with any other source, with a few changes:
In every table that is considered a dimension, add IsDimension=true to the source step; see the illustration after this list for what that can look like.
The data volume of the dimension tables (the columns you will use) should not exceed 10 megabytes.
Relationships between two tables from Kusto will be identified by PBI as M:M. You can leave them as M:M, but be sure to set the filtering direction to single, from the dimension to the fact.
When the relationship is M:M, the join kind will be inner. If you want a rightouter join (because you are not sure you have full integrity), you need to force the relationship to be 1:1. You can edit the model using Tabular Editor (V2 is enough).
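To illustrate the first change above, the source step of a dimension query ends up looking roughly like this (a sketch; the cluster, database, and table names are hypothetical):

Source = AzureDataExplorer.Contents("https://mycluster.westeurope.kusto.windows.net", "MyDatabase", "Customers", [IsDimension=true])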
Before and after
In the attached example you can see two dimension tables with relationships to a fact table. The relationships are M:M and you can see the generated KQL in the text box.
The query as shown takes 0.5 seconds of CPU and uses 8 MB of memory.
The same query without the new setting takes 1.5 seconds of CPU and 478MB of memory.
The first problem we hear from customers moving to Azure Data Factory (ADF), who have been using SQL Server Integration Services (SSIS) to get their Project Online OData, is that the authentication and authorization are not straightforward. There isn’t a simple way to log in to Project Online, so you have to make a call to get a token, which can then be used in the REST calls to OData. The following post steps through the process. I’m not going deep into the details of ADF and won’t cover all the steps of making an App Registration – there are plenty of resources out there – and this post concentrates on the authentication and then pulls in some project-level data. It obviously gets more complicated when you also want tasks and assignments, but the same approaches used with SSIS will work just as well in ADF.
TL;DR – if you know all about ADF and Project Online and App Registrations and just want the auth piece – jump to the M365Login section – just about halfway down, or just take a look at https://github.com/LunchWithaLens/adf which has definitions for the whole pipeline.
What you will need:
An App Registration in Azure Active Directory that allows you to read the Project reporting data. You will need your Tenant ID and also the Client ID and registered secret of the App Registration
The required App Registration settings
A user account that just needs access to the Project Server reporting service. You will need the account name and password. The authentication will use the Resource Owner Password Credential (ROPC) flow. This method of authentication is not recommended when other approaches are available (see Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials), but as there are no "app-only" authentication options for Project Online, this is one occasion where it is the only way. To keep this as secure as possible, we will store the username and password in Azure Key Vault (AKV).
Minimum user settings for the account (although they don’t need Team Member)
In this example they are also a team member, but that is not necessary.
An Azure Data Factory resource in Azure
Somewhere to write the data. In this example I cover both saving out as JSON to blob storage in Azure and saving to SQL Server (in this case hosted in Azure). You will need connection strings for whatever storage you are using.
If using SQL Server, you will need stored procedures to do the data handling – more details later.
Once you have all these pieces in place, we can continue with ADF to:
Add Linked Services
Add Datasets
Build a pipeline
Linked Services
We need four linked services:
An Azure Key Vault where we will be storing our account details and App Registration secret
A REST linked service – basically our OData endpoint
Azure Blob Storage (not necessary – but I found it useful in debugging before I added it all into SQL Server)
SQL Server
To keep this blog relatively short, I’m not going into all the details of setting up AKV, other than to say that using a managed identity makes it fairly easy to use from ADF.
The REST linked service literally just needs the base URL configured – this will be the URL for your PWA instance’s OData feed, along with any $select options to limit the returned fields. As an example, I used a URL of the following shape.
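A sketch with a hypothetical tenant name and column list:

https://contoso.sharepoint.com/sites/pwa/_api/ProjectData/Projects?$select=ProjectId,ProjectName,ProjectModifiedDate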
This limited the columns returned to just those I needed. The authentication type was left as anonymous, as I was handling this later with a bearer token.
The Azure Blob storage isn’t a necessity – if you want to use one it is easy to configure, but I won’t go into the full details here. Ping me in the comments if you can’t find good resources to help.
Finally, the SQL Server: mine was a database I was already using for something else, to which I just added a couple of tables and sprocs. In an earlier attempt I’d configured a more expensive SQL Server instance than I’d realised – and blown through my monthly allowance… The SQL Server linked service allows easy connectivity to AKV to get the connection string, for a secure configuration.
Datasets
The datasets match up to three of the linked services: "RestResource1" links to my REST linked service, "ProjectTable" matches up to my SQL database and a specific table, and "json1" connects to my blob storage to save a file. Again, I leave configuring these as an exercise for the reader :), but the GitHub repo has definitions for all of them so you can see how they hang together. The pipeline, which comes next, will help them make more sense too.
The Pipeline
To help visualize where we are headed, first we can look at the final short pipeline:
The full end-to-end pipeline
The first column of activities reads the required data from AKV. The names should make it obvious what the data is: the username and password, the client ID and secret for the app registration, and finally the scope for the authentication call. The scope isn’t strictly a ‘secret’, but I put it in AKV because it helps when demonstrating (or recording) the solution to be able to show the values. Exposing the scope is no big deal and avoids having to redact stuff in any recording I do.
The only part defined for these activities is the settings – and the scope one is a good example:
Example KeyVault settings
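In outline, each of these is a Web activity that does a GET against the secret’s URL in the vault and authenticates with the data factory’s managed identity (resource https://vault.azure.net). The vault and secret names below are hypothetical:

https://my-vault.vault.azure.net/secrets/odata-scope?api-version=7.0

The returned secret value is then available to downstream activities as output.value, which is what the M365Login step consumes.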
The most interesting step, and maybe the only one you are interested in, is the one I called M365Login – that is just my name for it; there isn’t a special activity, it is just a Web activity. The settings for this one are as follows:
Web call settings to get token
The URL is of the form https://login.microsoftonline.com/<tenantid>/oauth2/v2.0/token, the method is POST, and the headers are configured as shown above, with Content-Type application/x-www-form-urlencoded, Accept */*, and Connection keep-alive. The Body is the key part – it uses the concatenation function and brings in the values from the previous calls to AKV. The full form looks something like the following, where I have used specific names for my AKV activities – yours may vary.
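A sketch of that Body expression, with hypothetical names for the Key Vault activities (swap in whatever you called yours):

@concat('grant_type=password&client_id=', activity('GetClientId').output.value, '&client_secret=', activity('GetClientSecret').output.value, '&scope=', activity('GetScope').output.value, '&username=', activity('GetUsername').output.value, '&password=', activity('GetPassword').output.value)

If the password or any other value can contain characters that are special in a form-encoded body, wrap it in encodeUriComponent().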
Basically, it uses the output.value property of the previous steps to build the grant_type=password body needed for an ROPC call.
I then use a Set variable action to take the response and keep the token for later use.
Variable setting for token
The full string used in the Value is @activity('M365Login').output.access_token
Now that I have my token, I can use it to make my REST call to Project Online’s OData endpoint using a Copy data activity. First, I use a Stored procedure activity to clear out my staging table. Take a look at the GitHub repo for more details, but it is just a ‘delete from’ call.
The copy data activity has a source and sink (destination) and I use one to read and then write to blob storage, then another to read and write to SQL. I’ll concentrate on the second, which has Source settings configured like this:
Source data settings
The source dataset is my REST dataset. I add the header Authorization with a Value of
@concat('Bearer ', variables('token'))
which gets the token from my variable called token. I have also set the Pagination rules to RFC5988 with a value of True (although that isn’t shown in the screenshot above).
The Sink settings are as follows:
Sink data settings
with the sink dataset as my SQL dataset ‘ProjectsTable’. The magic happens on the Mappings tab – and I had created a table that matched the columns I was returning from REST – so just a 1:1 mapping. You can get more adventurous here if you need to do anything fancy:
Data mapping from OData to my SQL table
Once that is complete, we have a populated project staging table with the current projects read from OData. The final steps are then just three stored procedure steps: one removes deleted projects from the live project table (by deleting any that no longer exist in staging), one deletes any projects that have been updated (the modified date is newer in the staging table), and the last copies the updated and new plans from staging into the live table.
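If it helps to picture those three procedures, here is a minimal T-SQL sketch, assuming hypothetical table and column names (ProjectsStaging, Projects, ProjectId, ProjectName, ProjectModifiedDate):

```sql
-- 1. Remove projects from the live table that no longer exist in staging (deleted in Project Online).
DELETE FROM dbo.Projects
WHERE ProjectId NOT IN (SELECT ProjectId FROM dbo.ProjectsStaging);

-- 2. Remove projects whose staging copy has a newer modified date (they are re-inserted in step 3).
DELETE p
FROM dbo.Projects AS p
JOIN dbo.ProjectsStaging AS s ON s.ProjectId = p.ProjectId
WHERE s.ProjectModifiedDate > p.ProjectModifiedDate;

-- 3. Copy new and updated projects from staging into the live table.
INSERT INTO dbo.Projects (ProjectId, ProjectName, ProjectModifiedDate)
SELECT s.ProjectId, s.ProjectName, s.ProjectModifiedDate
FROM dbo.ProjectsStaging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Projects AS p WHERE p.ProjectId = s.ProjectId);
```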
As mentioned, this is just the basics and only looks at Projects – but the main focus here was the authentication steps of getting the token with ROPC, then using the token in the REST call.
I appreciate I have glossed over a lot of the detail here, so I’m happy to fill in some of the gaps in the comments section or in another blog if needed. However, if you know ADF and already use SSIS, the authentication piece was probably all you came for.