The past couple of years have been challenging for many customer service organizations. Meeting customers’ rising expectations and adapting to their evolving needs within a volatile economic landscape has been a herculean effort. Businesses are investing more in measuring the impact of their customer service, and the investment is paying dividends. In fact, the focus is shifting to the total customer experience and to transforming business models to create effortless experiences for customers and employees alike. By 2026, Gartner predicts that 60 percent of large enterprises will use total experience to transform their business models to achieve world-class customer and employee advocacy levels.1 This growing acknowledgment is a positive indicator that service is finally recognized as a core business value driver, as support teams became pivotal in retaining customer loyalty and winning new customers throughout the COVID-19 pandemic.
But are service organizations ready to face challenges and equipped to provide this level of service at scale? Customer service leaders are turning to modernizing technology and digitization, such as Microsoft Dynamics 365 Customer Service, to provide high-quality customer service experiences.
Five trends to modernize customer service
1. Recognize and quickly connect with customers
Modern customer service means showing up for your customers on the channel of their choice. Customers increasingly expect companies to offer a robust service experience right in their favorite channel. With each social channel representing diverse customer segments, engaging across these channels offers countless opportunities to deliver excellent service.
63 percent of customers expect companies to offer customer service via their social media channels, and 90 percent of social media users have already used social media as a way to communicate with a brand or business.2
The Omnichannel for Customer Service add-in for Microsoft Dynamics 365 Customer Service offers a wide variety of social engagement options for customers to engage on their preferred channel. Now with 2022 release wave 2, we’ve added Apple Messages for Business to our list of social messaging apps. This rich messaging can be used to generate interactive content and experiences that all take place within the messages application. And remember, when you enable agents to respond using the customer’s channel of choice, it drives brand engagement, creates a positive experience, and builds customer loyalty.
When engaging on social, always remember that you are not interacting with just one customer issue, but a wider audience that may not have context around a customer’s challenge. With an increasing number of eyes on you, it’s crucial to respond with precision, empathy, and quality.
2. Help customers help themselves with self-service
Intelligent self-service empowers customers to conveniently secure answers when and how they want, and this online, anytime support option is growing in popularity with customers. Business-to-business (B2B) and business-to-consumer (B2C) customers are likely to search for an answer to an issue within a knowledge base, online community, or portal before reaching out to a customer support agent. These critical self-service capabilities free up your agents to focus on high-priority, complex issues, and drive customer satisfaction.
In a McKinsey survey of customer care leaders, nearly two-thirds of respondents who successfully decreased their call volumes identified improved self-service as a key driver.3
AI-powered chatbots are leading the charge in intelligent self-service. Bots serve as the first point of contact for customers, alleviate customer frustrations from long wait times, and provide around-the-clock, immediate online support.
With the right service solution, bots can be deployed as conversational interactive voice responses (IVRs) equipped with natural language processing. A direct benefit of this intelligent, human-like conversation experience, paired with around-the-clock availability, is increased resolution speed. A quick response to a problem can be the difference in keeping a customer or having them pivot to a competitor.
By 2027, chatbots will become the primary customer service channel for roughly a quarter of organizations, according to Gartner.4
AI and machine learning advancements continue to make bots even more powerful and more efficient in understanding and conversing with customers. The use of bots in customer service is likely to expand as the technology becomes more and more democratized.
3. Personalize and drive brand loyalty
Customers want businesses to quickly recognize them as individuals and tailor their customer support experiences. Personalization isn’t only about knowing your customers and their histories, it means being able to understand and accordingly respond to their sentiment in real time. The ideal solution creates interactions based on each customer’s profile, which uses their current and past interactions and user data to customize the experience. It can be as simple as greeting a customer by their name and pulling up their order automatically by an email address or phone number, or it can mean taking the first step and implementing proactive customer care.
This trend in personalizing customer service demonstrates the value of the relationship to the customer, enhancing CSAT and strengthening loyalty, while significantly impacting the company’s bottom line.
63 percent of consumers expect personalization as a standard of service and believe they are recognized as an individual when sent special offers.5
80 percent of customers are more likely to make a purchase when businesses provide a customized experience.6
4. Empower agents with the right information at the right time
In the past two years, customer service leaders have seen a dramatic shift in the number of employees working from home, up to 85 percent of their workforces in some cases.7
With agents working from more places than ever, they need new ways to find experts who can help them solve customer challenges. The right data at the right time is key to empowering agents to meet customers’ needs quickly and accurately. However, as omnichannel customer profiles grow, agents need tools that proactively surface insights that matter in the moment.
Advancements in AI technology, especially natural language understanding, enable real-time analysis of conversations and the ability to surface real-time insights and knowledge. Agents can be alerted to similar cases and successful resolution steps, along with knowledge suggestions customized for the current context. All of these capabilities help agents solve customer issues more quickly, improving resolution rates and customer satisfaction.
5. Optimize with automation and run your business lean
Many customer service leaders are making it a priority to transform their departments from cost centers to growth centers. At the same time, they face continuous pressure to keep costs down.
Unifying tools onto a single, cloud-based platform reduces redundancy and enables cost flexibility to meet changing business conditions. If that platform has an open architecture and no-code/low-code development capabilities, the time and cost of development can be dramatically reduced, putting innovation within reach across the organization.
Building tomorrow
These are just a few of the trends in customer service, but one thing is for sure: customer service is evolving rapidly. The real challenge is being able to serve customers wherever they are and making sure every customer interaction is captured and available in a single profile for the support agent to use in resolving customer issues.
Watch this demo on how Microsoft brings all 5 of these customer service trends together.
Microsoft is listening. We’re continually adapting and innovating to provide service leaders with the tools you need to consistently deliver exceptional customer service. We continue to invest in features like self-service, bots, social engagement, and agent productivity tools that bring value to you, your customer service organization, and most of all, to your customers. Our goal is to build intuitive, sophisticated, easy-to-use tools that enhance your service delivery and empower your service representatives to exceed customer and organizational expectations.
Our service product roadmap is filled with new features and innovations that will take your service experience to the next level, differentiate your brand, and ultimately help you rise as a leader in your industry.
The idea for this blog post came from an issue opened by a user on the Windows Containers GitHub repo. The problem the user faced seemed common enough that others might be interested in a solution.
Get-Credential cmdlet pop-up
If you use PowerShell, you most likely came across the Get-Credential cmdlet at some point. It’s extremely useful for situations in which you want to capture a username and password to be used in a script, stored in a variable, and so on. However, the way Get-Credential works is by providing a pop-up window for you to enter the credentials:
In a traditional Windows environment, this is totally fine: the pop-up window shows up, you enter the username and password, and the information is saved. However, in a Windows container there’s no place to display the pop-up window:
As you can see in the image above, the command hangs waiting for input, but nothing happens because the pop-up cannot be displayed. Even typing CTRL+C doesn’t work. In my case, I had to close the PowerShell window, which left the container in an exited state.
Changing the Get-Credential behavior
To work around this issue, you can change the PowerShell policy to accept credential input from the console session. Here’s the script for that workaround:
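A minimal sketch follows, assuming the documented ConsolePrompting registry value (under the PowerShell ShellIds key) is what the original script sets:

# Tell PowerShell to prompt for credentials in the console instead of a pop-up window
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds" -Name ConsolePrompting -Value $true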
The next time you use the Get-Credential cmdlet, it will ask for the username and password in the console session:
In the example above, I simply entered the username and password for the Get-Credential cmdlet. You could, of course, save the result in a variable for later use.
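For instance, a quick sketch (the variable name and the target computer server01 are arbitrary placeholders):

# Capture the credential once, then reuse it with any cmdlet that accepts -Credential
$cred = Get-Credential
Invoke-Command -ComputerName server01 -Credential $cred -ScriptBlock { hostname }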
While this workaround solves the problem of not being able to use the Get-Credential cmdlet on Windows containers, it’s obviously not ideal. The information from the product team is that they are looking into making this the default option for Windows containers in the future, although no timelines are available at this moment.
I hope this is useful to you! Let us know in the comments!
When exporting a database, we need to review some considerations in terms of storage:
If you are exporting to blob storage, the maximum size of a BACPAC file is 200 GB. To archive a larger BACPAC file, export to local storage with SqlPackage.
Exporting a BACPAC file to Azure premium storage using the methods discussed in this article is not supported.
Storage behind a firewall is currently not supported.
Immutable storage is currently not supported.
Storage file name or the input value for StorageURI should be fewer than 128 characters long, cannot end with ‘.’, and cannot contain special characters like a space character or ‘<,>,*,%,&,:,\,/,?’.
Trying to perform this operation, you may receive an error message: “Database export error. Failed to export the database: . ErrorCode: undefined ErrorMessage: undefined”.
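When these limits get in the way (for example, a BACPAC larger than 200 GB), exporting to local storage with SqlPackage is the usual route. A minimal sketch follows; the server, database, credentials, and target path are placeholders:

SqlPackage /Action:Export /SourceServerName:"yourserver.database.windows.net" /SourceDatabaseName:"yourdatabase" /SourceUser:"youruser" /SourcePassword:"yourpassword" /TargetFile:"D:\Exports\yourdatabase.bacpac"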
We often see situations where a customer needs to export 2 TB of data using SqlPackage in Azure SQL Database. Exporting this amount of data might take time, and we would like to share with you some best practices for this specific scenario.
If you’re exporting from a General Purpose managed instance (which uses remote storage), you can increase the size of the remote storage database files to improve IO performance and speed up the export.
Temporarily increase your compute size.
Limit usage of the database during export (for example, in a transactional consistency scenario, consider using a dedicated copy of the database to perform the export operation).
Use a virtual machine in Azure with Accelerated Networking enabled, in the same region as the database.
Use a destination folder with enough capacity for the exported file and the multiple temporary files created, backed by SSD storage to improve performance.
Consider using a clustered index with non-null values on all large tables. With a clustered index, the export can be parallelized and hence is much more efficient. Without clustered indexes, the export service needs to perform a full table scan on each table in order to export it, which can lead to time-outs after 6-12 hours for very large tables.
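Putting several of these tips together, here is a sketch of a large export run from an Azure VM, redirecting temporary table data to a roomy SSD volume; the connection string, paths, and names are placeholders, and TempDirectoryForTableData is the documented SqlPackage property for relocating the temporary files:

SqlPackage /Action:Export /SourceConnectionString:"Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=yourdatabase;User ID=youruser;Password=yourpassword;" /TargetFile:"E:\Exports\yourdatabase.bacpac" /p:TempDirectoryForTableData="E:\SqlPackageTemp"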
How to Use TSSv2 to Collect and Analyze Data to Solve High CPU Issues
Hello everyone, this is Denzel Maxey with the Windows Performance Team. I found a tool that actively collects different data based on scenarios and streamlines the data collection process. Drumroll: introducing TSSv2 (Troubleshooting Support Script). In my job, I see a lot of high CPU cases, and collecting an ETL trace using TSSv2 with Xperf (aka WPR) for high CPU has been fundamental in resolving issues.
I’d like to share some instructions, methods, and general insight on these tools that should empower IT professionals to resolve issues. This post will show how the TSSv2 tool can work with Windows Performance Recorder. TSSv2 is very powerful and works with several tools, but this post will focus on collecting a WPR trace using TSSv2 in a case of high CPU. I can even give you a great clue as to how to collect data for intermittent high CPU cases as well! Once you have the data, I’ll then show you how to analyze it. Lastly, I’ll provide some additional resources on WPA analysis for high CPU.
Data Collection Tools:
TSSv2
TSSv2 (TroubleShootingScript Version 2) is a code-signed, PowerShell-based tool and framework for rapid, flexible data collection, with the goal of resolving customer support cases in the most efficient and secure way. TSSv2 offers an extensible framework for developers and engineers to incorporate their specific tracing scenarios.
WPR/Xperf
“Windows Performance Recorder (WPR) is a performance recording tool that is based on Event Tracing for Windows (ETW). The command line version is built into Windows 10 and later (Server 2016 and later). It records system and application events that you can then analyze by using Windows Performance Analyzer (WPA). You can use WPR together with Windows Performance Analyzer (WPA) to investigate particular areas of performance and to gain an overall understanding of resource consumption.”
*Xperf is strictly a command line tool, and it can be used interchangeably with the WPR tool.*
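Independent of TSSv2, the built-in recorder can also be driven directly from an elevated prompt. A quick sketch using the built-in GeneralProfile (the output path is an arbitrary choice):

wpr -start GeneralProfile
# ... reproduce the high CPU condition for a minute or two ...
wpr -stop C:\Temp\HighCPU.etl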
You notice your server or device is running at 90% CPU. Your users are complaining of latency and poor performance. You have checked Task Manager and Resource Monitor, or even downloaded and opened Process Explorer, but still no exact root cause is staring you in the face. No worries, a WPR trace will break down the high CPU processes a bit more. You could even skip straight to this step in the future once you get comfortable working with these tools.
Setup TSSv2
Running a TSSv2 troubleshooting script with the parameters for either WPR or Xperf gathers granular performance data on machines showing the issue. In the example below, I’m saving the TSSv2 script to D: (note the default data location is C:\MS_Data). In your web browser, download TSSv2.zip found here: http://aka.ms/getTSSv2, or open an administrative PowerShell prompt and paste the following commands.
The commands below will automatically prepare the machine to run TSSv2 by taking the following actions in the given order:
Create the D:\TSSv2 folder
Set the PowerShell script execution policy to RemoteSigned for the Process level (process-level changes only affect the current PowerShell window)
Set the TLS type to 1.2 and download the TSSv2 zip file from Microsoft
Expand the TSSv2.zip file into the D:\TSSv2 folder
Change to the D:\TSSv2 folder
Ex: Commands used below
md D:\TSSv2
Set-ExecutionPolicy -Scope Process -ExecutionPolicy RemoteSigned -Force
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest -Uri "http://aka.ms/getTSSv2" -OutFile "D:\TSSv2\TSSv2.zip"
Expand-Archive -Path "D:\TSSv2\TSSv2.zip" -DestinationPath "D:\TSSv2"
Open an elevated PowerShell window (or start PowerShell with elevated privileges) and change the directory to this folder:
cd D:\TSSv2
*WARNING* Data collection grows rather large quickly. You should have at least 30% of your overall RAM available as hard drive space. (For example, if you have 8 GB of RAM, the file can grow to 2.5 GB or larger in C:\MS_Data.)
What are some of the scenarios you might have? Maybe you want to manually collect the trace. Or, once you start the trace, let it automatically stop.
How about limiting the file size? There are several parameters you can adjust for your needs.
Below you will find variations of using TSSv2 to collect WPR data in high CPU occurrences. You have an option of using either WPR or Xperf commands. Please review all of them before deciding which trace to take for your environment.
1. Scenario In State: The issue is currently occurring, and the following example needs user intervention to stop the trace. The WPR trace can grow to 80% of memory with the example commands listed below.
.\TSSv2.ps1 -WPR General *** (run it for 60 seconds to no longer than 3 minutes)
.\TSSv2.ps1 -Xperf CPU *** (run it for 60 seconds to no longer than 3 minutes)
The default location of saved data will be C:\MS_Data.
The prompt will tell you when to reproduce the issue; simply entering “Y” will END the trace at that time, and the machine experiencing high CPU will then finish running the data collection.
2. Scenario In State, but you want to limit the size and length of time: The issue is currently occurring; the following example does NOT need user intervention to stop the trace. The default location of saved data will be C:\MS_Data. The Xperf file can grow to 4 GB, and the trace runs for 5 minutes with the settings below:
.\TSSv2.ps1 -Xperf CPU -XperfMaxFileMB 4096 -StopWaitTimeInSec 300
Note: you can modify the size and length of the trace by increasing or decreasing -XperfMaxFileMB and -StopWaitTimeInSec when it is initially run.
3. Scenario In State, but you want to limit the size and length of time, with data saved on the Z:\Data drive instead of C: (default): The issue is currently occurring; the following example does NOT need user intervention to stop the trace. The Xperf file can grow to 4 GB, and the trace runs for 5 minutes with the settings below; this time the resulting data will be saved in Z:\Data. You simply need to add -LogFolderPath Z:\Data to the command.
.\TSSv2.ps1 -Xperf CPU -XperfMaxFileMB 4096 -StopWaitTimeInSec 300 -LogFolderPath Z:\Data
4. Scenario Intermittent High CPU, and you are having a tough time capturing data: These commands will wait for the CPU to reach 90%, start a trace, and stop the file from growing larger than 4 GB while running for 5 minutes.
.\TSSv2.ps1 -Xperf CPU -WaitEvent HighCPU:90 -XperfMaxFileMB 4096 -StopWaitTimeInSec 300
5. Scenario Intermittent High CPU, and you are having a tough time capturing data: These commands will wait for the CPU to reach 90%, then start a trace that runs for 100 seconds (1 minute 40 seconds).
.\TSSv2.ps1 -Xperf CPU -WaitEvent HighCPU:90 -StopWaitTimeInSec 100
Pro Tip: You can check for additional Xperf/WPR commands by searching the help files in TSSv2: type .\TSSv2.ps1 -Help at the prompt. When prompted to enter a number or keyword, type xperf or wpr, hit Enter, and you will see the options.
Ex: Finding help with keyword ‘xperf’
Be sure to wait for the TSS script to finish; it can take some time (even an hour) to finish writing out. PowerShell will return to the command prompt, and the folder in C:\MS_Data should zip itself when complete. The location of the script does not determine the location of the data collected. Wait for the trace to finish before exiting PowerShell.
Reminder: Just like in the first trace, data collection grows rather large quickly. You should have at least 30% of your overall RAM available as hard drive space. (For example, if you have 8 GB of RAM, the file can grow to 2.5 GB or larger in C:\MS_Data.)
Download the Windows ADK (Windows Assessment and Deployment Kit) from this location: Download and install the Windows ADK | Microsoft Learn. Once you download the Windows ADK, you want to install the Windows Performance Toolkit. Double-click the executable (.exe) to start the installation process.
Uncheck everything except Windows Performance Toolkit, then click Install.
Opening the data in the C:\MS_Data folder
When complete, the WPR general TSSv2 command should have placed all collected data into this folder in a zipped file. You will know the trace ran all the way without stopping prematurely when you see the zipped file in C:\MS_Data. There will also be a message in the PowerShell window when the diagnostic completes, stating the name and location of the zipped file.
You will need to unzip the zipped file to analyze the WPR trace (.etl file). After unzipping, you will see several data collections that can be helpful for analysis. However, what you mainly want to look at is the .etl file, which is usually the biggest file in the folder.
If you double-click the .etl file, it should open in WPA; if not, you can manually open the newly installed application and navigate to your file.
Example:
You can open the .ETL file to view the WPR trace with WPA (Windows Performance Analyzer) by clicking File, Open and then browsing to the file that ends with the .ETL extension.
Step 1. Open the WPR trace in WPA and load the public symbols. You may also see symbols listed from the NGEN folder (NGEN is part of the folder name) collected at the time the WPR trace was run.
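If WPA is not yet pointed at the Microsoft public symbol server, one common way is the standard symbol path environment variable; a sketch (the C:\Symbols cache folder is an arbitrary choice):

# Set the symbol path so WPA resolves function names from Microsoft's public symbol server,
# caching downloaded symbols locally in C:\Symbols
$env:_NT_SYMBOL_PATH = "srv*C:\Symbols*https://msdl.microsoft.com/download/symbols"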
Once symbols are configured, simply click Load Symbols.
Step 2. Once open, you should see a window similar to the screenshot below. Expand Computation on the left and drag CPU Usage (Precise) to the right side of the window to load it. You can also double-click CPU Usage (Precise) for it to appear on the right side.
You will then see a span on the top graph labeled “Trace Rundown”. That part is not needed, as it is the portion of the trace where the script was finishing up. To get rid of the trace rundown, highlight the area before it, right-click, then select “Zoom”.
You can now filter down each of your processes deeper and deeper to try to locate a potential root cause of what is spiking the CPU. You can look to see which processes have the highest weight over on the right-hand columns to help pinpoint the highest consumers. It may be a specific kernel driver, application, process, etc. but this should help point you in the right direction of what process is exhausting resources.
These are the columns you will want to focus on:
Left of Gold Bar:
New Process
New Thread ID
New Thread Stack
Right of Gold Bar:
Waits (us) Sum
Waits (us) Max
Count:Waits Sum
%CPU Usage Sum
You can see that CPU usage is highest due to CPUSTRESS.EXE in this example. As you filter down, you can see the threads that contribute to the max CPU spike, which sum up to the top-level CPU usage number. This can be helpful in finding which threads, functions, and modules are implicated in the root cause.
Conclusion:
Once again, this is not the only use for the TSSv2 tool. But as you can see, the WPR/Xperf trace is a very rich data collection gathered from a simple PowerShell command, which can make troubleshooting very efficient. This article is not meant to cover all scenarios. However, I highly recommend taking some time to learn more about what TSSv2 can accomplish, as this tool will only continue to get better.
If at any point you get stuck, don’t hesitate to open a support case with Microsoft.
Additional Information:
Information on TSSv2 and alternative download site: