Introducing a new era of AI-powered experiences in Dynamics 365 Business Central

This article is contributed. See the original author and article here.

This is an exciting time in the evolution of AI, and you don’t have to look far to see the headlines on ways it is changing the world. From advanced machine learning models to breakthrough natural language technology, new opportunities to use AI to improve the way we work are surfacing daily. AI innovation can free employees from mundane, repetitive tasks and allow them to focus on the work that matters most, increasing job satisfaction and pushing productivity to new heights. According to our recent survey on business trends, 89 percent of those with access to automation and AI-powered tools feel more fulfilled because they can spend time on work that truly matters.1

Microsoft Dynamics 365 Business Central brings the power of AI to small and medium-sized businesses with features that help companies work smarter, adapt faster, and perform better. Let’s explore some of the ways that AI in Business Central is improving how work gets done, including:

  • Automating repetitive tasks
  • Improving customer service
  • Anticipating business challenges
  • Enhancing decision making

Automate repetitive tasks with Copilot in Business Central

Microsoft Dynamics 365 Copilot introduces the next generation of AI-powered experiences to Microsoft Dynamics 365 business applications. Dynamics 365 Copilot provides AI assistance directly in the flow of work using natural language technology, automating repetitive tasks, and unlocking creativity. Dynamics 365 Copilot is the world’s first copilot in both customer relationship management (CRM) and enterprise resource planning (ERP), ushering in the new age of productivity for businesses of all sizes.

With Copilot in Business Central, product managers can save time and drive sales with engaging AI-generated product descriptions. Banish writer’s block with unique, compelling marketing text created in seconds using product attributes such as color, material, and size. Tailor the descriptions to your brand by choosing a tone of voice, as well as format and length. Complete the process by publishing to Shopify or other ecommerce stores with just a few clicks. Copilot in Business Central makes launching new products fast and easy so you can focus on growing your business. Try Copilot in Business Central today.


Improve customer service with Sales and Inventory Forecasting  

In a highly competitive business landscape, effective inventory management can be the key differentiator between a successful business and one that struggles to retain customers and remain profitable. Inventory management is a trade-off between customer service and managing your costs. While low inventory requires less working capital, inventory shortages can lead to missed sales. Using AI, the Sales and Inventory Forecast extension predicts future sales using historical data to help you avoid stockouts. Based on the forecast and inventory levels, the extension helps create replenishment requests to your vendors, helping you save time and improve inventory availability to keep your customers happy.

Anticipate business challenges with Late Payment Predictions

Effectively managing receivables is critical to the overall financial health of a business. The Late Payment Prediction extension can help you reduce outstanding receivables and fine-tune your collections strategy by predicting whether sales invoices will be paid on time. For example, if a payment is predicted to be late, you might decide to adjust the terms of payment or the payment method for the customer. By anticipating late payments and making adjustments, you can better manage and ultimately reduce overdue receivables.

Enhance decision-making with Cash Flow Analysis

Azure AI in Business Central helps you create a comprehensive cash flow forecast with Cash Flow Analysis, enhancing decision-making so you can stay in control of your cash flow. A company’s cash flow indicates its financial solvency and reveals whether the company can meet its financial obligations in a timely manner. To make sure that your company is solvent, a future-oriented planning instrument is necessary. With insights from AI, you can make proactive adjustments to ensure your company’s fiscal health, such as reducing credit when you have a cash surplus or borrowing to mitigate a cash deficit.

Innovate with Business Central

In today’s fast-paced market, AI has become essential for companies looking to stay ahead of the competition. The AI tools built into Business Central can help you improve the end-to-end customer experience, reduce costs, and boost financial success. With the ability to automate repetitive tasks, analyze data, and offer personalized recommendations, Business Central can help you operate more efficiently and grow your business.

Dynamics 365 Business Central

Adapt faster, work smarter, and perform better with Business Central.


Sources

1. Four Ways Leaders Can Empower People for How Work Gets Done


How to perform Error Analysis on a model with the Responsible AI dashboard (Part 4)

This article is contributed. See the original author and article here.

Traditional performance metrics for machine learning models focus on calculations based on correct versus incorrect predictions. Aggregated accuracy scores or average error loss show how good a model is, but they do not reveal the conditions causing its errors. While overall performance metrics such as classification accuracy, precision, recall, or MAE scores are good proxies for building trust in your model, they are insufficient for locating where in the data the model is inaccurate. Often, model errors are not distributed uniformly across your underlying dataset. For instance, if your model is 89 percent accurate, does that mean it is 89 percent fair as well?
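To make this concrete, here is a minimal sketch with made-up labels and predictions (not the tutorial’s dataset) showing how a respectable aggregate accuracy can hide errors that are concentrated in one group:

```python
import pandas as pd

# Toy example: two groups are predicted perfectly, one poorly.
df = pd.DataFrame({
    "age_group": ["<30"] * 3 + ["30-60"] * 3 + ["60+"] * 4,
    "y_true":    [0, 1, 0,  1, 0, 1,  1, 1, 0, 1],
    "y_pred":    [0, 1, 0,  1, 0, 1,  0, 0, 0, 0],
})

overall_accuracy = (df["y_true"] == df["y_pred"]).mean()
per_group_error = (df["y_true"] != df["y_pred"]).groupby(df["age_group"]).mean()

print(f"overall accuracy: {overall_accuracy:.0%}")  # 70% -- looks passable
print(per_group_error)  # ...but every error comes from the 60+ group (75% error rate)
```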


 


Model fairness and model accuracy are not the same thing, and both must be considered. Unless you take a deep dive into the model’s error distribution, it is challenging to discover the regions of your data where the model is failing 42 percent of the time (see the red region in the diagram below). Errors concentrated in certain data groups can lead to fairness or reliability issues. To illustrate, a data group with a high number of errors may contain sensitive features such as age, gender, disability, or ethnicity. Further analysis could reveal that the model has a higher error rate for individuals with disabilities than for those without. So, it is essential to understand where the model performs well and where it does not, because the data regions where your model has a high number of inaccuracies may turn out to be an important data demographic you cannot afford to ignore.


 


[Figure: model error distribution across data regions (ea-error-distribution.png)]


 


This is where the error analysis component of the Azure Machine Learning Responsible AI (RAI) dashboard helps: it identifies a model’s error distribution across its test dataset. In the last tutorial, we created an RAI dashboard for the diabetes hospital readmission classification model we trained. In this tutorial, we’ll explore how data scientists and AI developers can use error analysis to identify the error distribution in the test records and discover where the model’s error rate is high. In addition, we’ll learn how to create cohorts of data to investigate why a model performs poorly in some cohorts and not others. Lastly, we’ll use the two methods the component provides for error identification: the tree map and the heat map.
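For context, a condensed sketch of that setup is shown below, using the open-source responsibleai and raiwidgets SDKs behind the dashboard. The variable names and the "readmit_status" target column are placeholders; use whatever you defined in the earlier tutorials:

```python
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# model, train_df, and test_df come from the earlier tutorials;
# both DataFrames must include the target column.
rai_insights = RAIInsights(
    model, train_df, test_df,
    target_column="readmit_status",  # placeholder name
    task_type="classification",
)
rai_insights.error_analysis.add()  # enable the Error Analysis component
rai_insights.compute()             # run the analyses

ResponsibleAIDashboard(rai_insights)
```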


 


Prerequisites


 


This is Part 4 of a tutorial series. You’ll need to complete the prior tutorials in this series (Parts 1–3) before continuing.


 




 


How to interpret Error Analysis insights


 


Before we start our analysis, let’s first understand how to interpret the data provided by the tree map. The RAI dashboard illustrates how model failure is distributed across various cohorts with a tree visualization. The root node displays the total number of incorrect predictions from the model and the total test dataset size. The nodes are groupings of data (also known as cohorts) formed by splits on feature conditions (e.g., “Time_In_Hospital < 5” vs “Time_In_Hospital ≥ 5”). Hovering over each node on the tree reveals the following information for the selected feature condition:


 


[Figure: tree map node hover details (error nodes.png)]


 


 



  • Incorrect vs correct predictions: the number of incorrect vs correct predictions for the datapoints that fall in the node.

  • Error rate: the percentage of the node’s datapoints that receive erroneous predictions. The shade of red reflects this value: the darker the red, the higher the error rate.

  • Error coverage: the percentage of the model’s overall errors that occur in the node. The fullness of the node reflects this value: the fuller the node, the higher its error coverage.
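Put as formulas: for a node with node_incorrect wrong predictions out of node_total datapoints, in a model with overall_incorrect wrong predictions in total, the two metrics are simple ratios. The helper below is illustrative, not part of the SDK:

```python
def node_metrics(node_incorrect: int, node_total: int, overall_incorrect: int):
    error_rate = node_incorrect / node_total             # share of this node's datapoints predicted wrong
    error_coverage = node_incorrect / overall_incorrect  # share of all model errors that land in this node
    return error_rate, error_coverage

# The root node always has 100% error coverage. With this tutorial's numbers
# (168 errors across 994 test records), its error rate is about 16.9%:
print(node_metrics(168, 994, 168))  # (0.169..., 1.0)
```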


 


Identifying model errors from a Tree Map


 


Now let’s start our analysis. The tree map displays how model failure is distributed across various data cohorts. For our diabetes hospital readmission model, one of the first things we observe from the root node is that out of 994 total test records, the error analysis component found 168 errors while evaluating the model.


 


[Figure: tree map for the diabetes readmission model (ea-treemap.png)]


 


The tree map provides visual indicators to make locating the nodes or tree path with the highest error rate quicker. In the diagram above, the tree path with the darkest red color ends in a leaf node on the bottom right-hand side of the tree. Double-click the leaf node to select the path leading up to it; this highlights the path and displays the feature condition for each node in it. Since this tree path contains the nodes with the highest error rate, it is a good candidate for creating a cohort from the data it represents, so we can later diagnose the root cause behind the errors.


 


[Figure: selecting the tree path with the highest error rate (1-select-error-tree.png)]


 


According to this tree path with the highest error rate, diabetes patients who have a prior hospitalization and take between 11 and 22 medications are the cohort of patients where the model makes the highest number of incorrect predictions. To investigate what’s causing the high error rate for this group of patients, we will create a cohort for them.


 


Cohort #1: Patients with Prior_Inpatient > 0 days and number of medications between 11 and 22


 


To save the selected path for further investigation, we can use the following steps:


 



  • Click the “Save as a new cohort” button on the upper right-hand side of the error analysis component. Note: the dashboard displays the “Filters” with the feature conditions in the path selection: num_medications > 11.50 and <= 21.50, prior_inpatient > 0.00.

  • We’ll name the cohort: “Err: Prior_Inpatient >0; Num_meds >11.50 & <= 21.50”.
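If you want this cohort pre-built the next time the dashboard launches, the raiwidgets SDK also lets you define it in code. A minimal sketch, assuming the column names match the filters above (METHOD_RANGE approximates the > 11.50 and <= 21.50 split):

```python
from raiwidgets.cohort import Cohort, CohortFilter, CohortFilterMethods

high_error_cohort = Cohort(name="Err: Prior_Inpatient >0; Num_meds >11.50 & <= 21.50")
high_error_cohort.add_cohort_filter(CohortFilter(
    method=CohortFilterMethods.METHOD_GREATER, arg=[0.00], column="prior_inpatient"))
high_error_cohort.add_cohort_filter(CohortFilter(
    method=CohortFilterMethods.METHOD_RANGE, arg=[11.50, 21.50], column="num_medications"))
```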


 


[Figure: saving the selected path as a new cohort (1-save-error-tree.png)]


 


As much as it’s advantageous to find out why the model is performing poorly, it is equally important to figure out what’s causing it to perform well. So, we’ll need to find the tree path with the fewest errors to gain insight into why the model performs better in this cohort than in others. The leaf node on the far left-hand side of the tree ends the path with the fewest errors.


 


[Figure: selecting the tree path with the fewest errors (1-select-least-error-tree.png)]


 


The tree reveals that diabetic patients with no prior hospitalization, seven or fewer diagnosed health conditions, and 57 or fewer lab procedures are the cohort with the fewest model errors. To analyze the factors contributing to this cohort performing better than others, we’ll create a cohort for this group of patients.


 


Cohort #2: Patients with Prior_Inpatient = 0 days, number of diagnoses ≤ 7, and number of lab procedures ≤ 57


 


For comparison, we will create a cohort from the feature conditions with the lowest error rate. To achieve this, complete the following steps:


  • Double-click on the leaf node to select the rest of the nodes in the tree path.

  • Click the “Save as a new cohort” button to save the selected path as a cohort. Note: the dashboard displays the “Filters” with the feature conditions in the path selection: num_lab_procedures <= 56.50, number_diagnoses <= 6.50, prior_inpatient <= 0.00.

  • We’ll name the cohort: “Prior_Inpatient = 0; num_diagnoses <= 6.50; lab_procedures <= 56.50”.
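The low-error cohort can be defined in code the same way, and both cohorts can then be passed to the dashboard through its cohort_list parameter so they appear pre-built (rai_insights and high_error_cohort come from the earlier sketches):

```python
from raiwidgets import ResponsibleAIDashboard
from raiwidgets.cohort import Cohort, CohortFilter, CohortFilterMethods

low_error_cohort = Cohort(name="Prior_Inpatient = 0; num_diagnoses <= 6.50; lab_procedures <= 56.50")
for column, threshold in [("prior_inpatient", 0.00),
                          ("number_diagnoses", 6.50),
                          ("num_lab_procedures", 56.50)]:
    low_error_cohort.add_cohort_filter(CohortFilter(
        method=CohortFilterMethods.METHOD_LESS_AND_EQUAL, arg=[threshold], column=column))

# Launch the dashboard with both saved cohorts available from the start.
ResponsibleAIDashboard(rai_insights, cohort_list=[high_error_cohort, low_error_cohort])
```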


When we start investigating model inaccuracies, comparing the different features between the top- and bottom-performing cohorts will be useful for improving our overall model quality (we’ll see this in the next tutorial, Part 5. Stay tuned).


 


Discovering model errors from the Feature list


 


One of the advantages of using the RAI dashboard to debug a model is its “Feature List” pane: a list of the feature names in the test dataset that contribute to errors (that is, the features included in creating your error tree map). The list is sorted by each feature’s contribution to the errors: the higher a feature is on the list, the greater its contribution to your model’s errors. Note: this is not to be confused with the “Feature Importance” section described later in tutorial Part 7, which explains which features contributed most to the model’s predictions. This sorted list is vital for identifying the problematic features degrading the model’s performance. It is also an opportunity to check whether sensitive features such as age, race, gender, political views, or religion are among the top error contributors, which is an indicator that your model may have potential fairness issues.


 


[Figure: the Feature List pane (1-view-feature-list.png)]


 


 


In our Diabetes Hospital Readmission model, the “Feature List” indicates that the following features are among the top contributors to the model’s errors:


 



  • Age

  • num_medications

  • medicare

  • time_in_hospital

  • num_procedures

  • insulin

  • discharge_destination


 


Because “Age” is a sensitive feature, we must check whether the model’s high inaccuracy on this feature indicates a potential age bias. In addition, you may have noticed that not all the features on this list appear on the tree map nodes. From the “Feature List” pane, you can control how granular or high-level the tree map’s display of error contributors should be:


 



  • Maximum depth: controls how tall the error tree can be, that is, the maximum number of nodes that can be displayed from the root node to a leaf node on any branch.

  • Number of leaves: the maximum number of leaf nodes the error tree can have (e.g., 21).

  • Minimum number of samples in one leaf: controls the threshold for the minimum number of data samples required to form one leaf.


 


Try adjusting the minimum number of samples in one leaf field to different values between 1 and 100 to see how the tree expands or shrinks. If you want a more granular breakdown of the errors in your dataset, reduce this value.
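These controls map to documented parameters on the error analysis component, so you can also set them up front when building the dashboard; a hedged sketch (rai_insights comes from the earlier setup):

```python
# Request a coarser or finer surrogate error tree when adding the component.
rai_insights.error_analysis.add(
    max_depth=4,           # maximum depth from the root node to any leaf
    num_leaves=31,         # upper bound on the number of leaf nodes
    min_child_samples=20,  # minimum datapoints required to form one leaf
)
rai_insights.compute()
```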


 


Investigating Model Errors using the Heat map


 


The heat map is another visualization that enables users to investigate the error rate by filtering on one or two features to see where most of the errors are concentrated. This helps you determine which areas to drill into further so you can start forming hypotheses about where the errors originate.


 


From the Feature List, we saw that “Age” was the top contributor to the model’s inaccuracies. So, we’re going to use the heat map to see which cohorts within the “Age” feature are driving the high model errors.


 


[Figure: heat map of errors by Age (ea-heatmap-age.png)]


 


Under the Heat map tab, we’ll select “Age” in the “Rows: Feature 1” drop-down menu to see its influence on the model’s errors. The dashboard has built-in intelligence to divide the feature into cells covering the possible data cohorts for the Age feature (e.g., “Over 60 years”, “30–60 years”, and “30 years or younger”). Hovering over each cell shows the number of correct vs incorrect predictions, the error coverage, and the error rate for the data group the cell represents. Here we see:


 



  • The “Over 60 years” cell has 536 correct and 126 incorrect model predictions. The error coverage is 73.81 percent, and the error rate is 18.79 percent. This means that of the 168 total incorrect predictions the model made on the test data, 126 came from patients with “Age==Over 60 years”.

  • Even though the error rate of 18.79 percent is low, an error coverage of 73.81 percent is huge. It means the majority of the model’s inaccuracies come from patients older than 60 years. This is problematic.


 


[Figure: metrics for the “Over 60 years” cell (hm-elder-metrics.png)]


 



  • The “30–60 years” cell has 273 correct and 25 incorrect model predictions. The error coverage is 25.60 percent, and the error rate is 13.61 percent. Even though patients with “Age==30–60 years” have a very low error rate, an error coverage of 25.60 percent means a quarter of all the model’s errors fall in this group, which is an issue.

  • The “30 years or younger” cell has 17 correct and 1 incorrect model prediction. The error coverage is 0.60 percent, and the error rate is 5.56 percent. A single incorrect prediction is insignificant, and both the error coverage and error rate are low. It’s safe to say the model performs very well in this cohort; however, we must also consider that its total data size of 18 is a very small sample.


Since our observation shows that Age plays a significant role in the model’s erroneous predictions, we are going to create cohorts for each age group for further analysis in the next tutorial.


 


Cohort #3: Patients with “Age == Over 60 years”


 


Similar to the Tree map, create a cohort from the Heat map by taking the following steps:


 



  1. Click on the “Over 60 years” cell. You’ll see a blue border around the square cell.

  2. Next, click on the “Save as a new cohort” button at the top right-hand corner of the Error Analysis section. A new pane will pop up with a summary of the new cohort, which includes its error coverage, error rate, correct/incorrect predictions, total data size, and data feature filters.

  3. In the “Cohort name” box, enter “Age==Over 60 years”.

  4. Then click on the “Save” button to create the cohort.

  5. To deselect the cell, click on it again or click on the “Clear all” button.


 


[Figure: saving a heat map cell as a new cohort (ea-save-heatmp.png)]


 


Repeat the steps to create a cohort for each of the other two Age cells: 



  • Cohort #4: Patients with “Age == 30–60 years”

  • Cohort #5: Patients with “Age <= 30 years”
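As with the earlier cohorts, these age cohorts can be defined in code. The sketch below assumes the column is named age and is categorical with exactly these three labels; for a numeric age column, use range filters instead:

```python
from raiwidgets.cohort import Cohort, CohortFilter, CohortFilterMethods

age_cohorts = []
for label in ["Over 60 years", "30-60 years", "30 years or younger"]:
    cohort = Cohort(name=f"Age=={label}")
    cohort.add_cohort_filter(CohortFilter(
        method=CohortFilterMethods.METHOD_INCLUDES, arg=[label], column="age"))
    age_cohorts.append(cohort)
```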


If Age plays a major role in why the model is performing poorly, we can conduct further analysis to better understand this cohort and evaluate whether it has an impact on a patient returning to the hospital within 30 days or not.


 


Managing cohorts


 


To view or manage all the cohorts you’ve created, click the “Cohort settings” gear icon in the upper right-hand corner of the Error Analysis section. In addition, the RAI dashboard creates a cohort called “All data” by default. This cohort contains the entire test dataset used to evaluate the model.


 


[Figure: cohort list in Cohort settings (3-ea-cohort-list.png)]


 


Conclusion


 


As we have seen from using the tree map, Feature List, and heat map, the RAI dashboard provides multiple avenues for identifying the features causing a model’s errors. Simply knowing which features cause the errors, however, is not enough. When debugging a model, it is beneficial for data scientists and AI developers to understand the number and magnitude of errors associated with a feature. The dashboard aids the process of elimination by pinpointing the error regions: it provides the feature conditions to focus on as well as the number of correct/incorrect predictions, error coverage, and error rate. This helps in measuring how much the feature condition errors influence the overall model errors.


 


Discovering the correlations and dependencies between features helps in creating cohorts of data to investigate. Along with exploring the cohorts with the most errors, using the “Feature List” in conjunction with our investigations helps pinpoint exactly which features are problematic. Since we found “Age” to be a top contributor on the “Feature List”, and the heat map also shows a high error coverage for diabetic patients over 60 years of age, we can start forming a hypothesis that the model may have an age bias. We must also consider that, given our use case, age plays a role in diabetic cases. Next, the tree map enabled us to create data cohorts where the model has high vs low inaccuracies. We found that prior hospitalization was one of the feature conditions in the cohort with the highest error rate. On the other hand, no prior hospitalization was one of the feature conditions in the cohort with the lowest error rate.


 


As a guide, use error analysis when you need to:


 



  • Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions.

  • Break down the aggregate performance metrics to automatically discover erroneous cohorts in order to inform your targeted mitigation steps.


Awesome! Now we’ll move on to the “Model Overview” section of the dashboard to start analyzing our cohorts and diagnosing issues with our model.


 


Stay tuned for Part 5, the next tutorial in the series…


 

See what’s new for Dynamics 365 at the Microsoft Business Applications Launch Event

This article is contributed. See the original author and article here.

The first release wave of 2023 kicks off in just a few weeks! You’re invited to get a sneak preview of what’s new at the Microsoft Business Applications Launch Event, streaming live on April 4 at 9:00 AM Pacific Time. Tune in live, or on demand after the broadcast, for news and demos showcasing some of the hundreds of product features and innovations heading to Microsoft Dynamics 365 and Power Platform.

Sign up for the event today for reminders, agenda updates, and instructions to tune in live from your desktop or mobile device.

Business Applications Launch Event

The latest updates across Microsoft Dynamics 365 and Power Platform

What to expect at the Business Applications Launch Event

Featuring news and product demos, the event will cover updates launching between April and September 2023 across business roles, from marketing, sales, and service to supply chain and finance, as well as low-code innovation for everyone in the organization. Tune in to learn about:

  • The very latest on AI-powered copilots that work alongside you to help create ideas and content faster, complete time-consuming tasks, and get insights and next best actions. 
  • A preview of new ways to deliver more personalized and engaging customer experiences across marketing, sales, and service.
  • A host of new capabilities that will bring more insight, visibility, and automation across supply chain, operations, and finance roles.
  • New capabilities for low-code development to help teams manage building apps, storing data, and modernizing customer experiences with improved governance and scalability.
  • Updates across Microsoft Power Platform for governance and administration, pro development, ISV experiences, and data integration.

And there’s much more in store. You’ll also see how these solutions work for businesses like yours with demos hosted by Microsoft product experts. You’ll hear from Charles Lamanna, Microsoft Corporate Vice President of Business Applications and Platforms, about how to innovate with business applications to grow your business faster than ever. The team behind the Dynamics 365 and Microsoft Power Platform 2023 release wave 1 will also share insights and guide you through how all these updates, advancements, and new tech will help you:

  • Expand visibility, reduce time, and enhance creativity in your departments and teams with unified, AI-powered capabilities.
  • Empower your employees to focus on revenue-generating tasks while automating repetitive tasks.
  • Connect people, data, and processes across your organization with modern collaboration tools.
  • Innovate without limits using the latest in low-code development, including new next-generation AI capabilities.

AI innovations

A major theme of this digital event is the evolution of AI. Leaders in the field will show you some of the latest developments in AI that are leading the next generation of business applications. You’ll also have the opportunity to join us for a special session that includes a deep dive into the tech behind next-generation AI hosted by Dr. Walter Sun, Microsoft Vice President, AI in Business Applications. Throughout the event, you’ll receive expert guidance on how to build more agile, customer-focused teams by empowering your solutions with AI and see firsthand how to get more value out of your data, collaboration, and tools.

Insights from the experts

Find out how to enhance the customer experience and operational excellence across your business with a look at real-life scenarios in expert-led demos of new capabilities and features. Gain valuable insights for overcoming your current challenges from other Microsoft customers who will share their journeys. And if you have any questions for the experts, ask them during a live Q&A chat.

For a range of key best practices, strategies, and insights applicable to organizations across many industries, you don’t want to miss this digital launch event.

Register now. We hope you’ll join us. 

Microsoft Business Applications Launch Event 

Tuesday, April 4, 2023

9:00 AM–10:30 AM Pacific Time (UTC-7)


New Microsoft Loop app is built for modern co-creation

This article is contributed. See the original author and article here.

Today is the start of the Microsoft Loop app journey and we’re thrilled to announce the Loop app is available in public preview!


Improve customer loyalty and reduce fraud with Nuance Gatekeeper

This article is contributed. See the original author and article here.

Nuance Gatekeeper is a biometric authentication and fraud prevention technology that can be integrated with Microsoft Dynamics 365 Customer Service. It offers a streamlined verification process that allows contact centers to identify customers through natural conversation without relying on specific questions, security codes, or secret questions and answers. 

This technology provides a secure and seamless way to verify customer identities so call center agents can focus their attention on serving these individuals. The results include increased customer satisfaction and reduced average call handling times. By using voice biometrics, Nuance Gatekeeper can quickly and accurately verify a customer’s identity, significantly reducing the risk of fraud and protecting customers’ personal data from unauthorized access.   

The integration of Nuance Gatekeeper into the Microsoft Digital Contact Center Platform (DCCP) is a significant step forward in improving the customer experience and enhancing contact center security. It allows businesses to provide fast, personalized service while maintaining the highest levels of security and compliance. 

We are announcing the opportunity to preview Nuance Gatekeeper services for all customers using the voice channel in their contact centers. Microsoft is inviting interested customers to explore the capabilities of this technology and see how it can improve their operations. Customers must nominate themselves at the Dynamics Insider portal to experience the benefits of this innovative solution firsthand. 

Biometric authentication

As part of DCCP and Dynamics 365 Customer Service, Nuance Gatekeeper uses strong biometric authentication to reduce friction and send customers on their way to a fast resolution. They won’t have to share sensitive personal data to authenticate themselves, and they can use voice authentication on an opt-in basis. Customers can opt out of biometric authentication at any time.  

Dynamics 365 Customer Service agents can enroll customers in biometric authentication and verify customer identity in each subsequent call. Nuance Gatekeeper alerts agents to suspected fraudsters and flags fraudulent conversations. Agents have insight into the customer’s full consent history.  

Contact center administrators can configure specific channels to use Nuance Gatekeeper services within their omnichannel environments and determine the parameters for authentication.

[Figure: Conversation with an authenticated customer in Dynamics 365 Customer Service]

Fraud prevention

Stop fraudsters in their tracks while kicking customer interactions into high gear. Nuance Gatekeeper can detect fraudulent calls before they reach the contact center agent while giving legitimate customers the green light for personalized support.

In addition to caller authentication, fraud detection capabilities prevent attackers from usurping your customers’ identities using layered methods, including a suspected-fraudster watchlist and synthetic speech and playback detectors. Additional signals such as device, network, and location help identify suspicious activity.

Nuance Gatekeeper checks caller voices against a watchlist to alert the contact center agent about known fraudsters. The agent can then take steps outlined in their organization’s fraud detection policy. If the caller isn’t flagged but the agent finds the caller suspicious, the agent can manually flag the conversation within their workspace in Dynamics 365 Customer Service.  

Fraud teams can use the information gathered by contact center agents to block fraudsters, analyze fraud patterns and trends, and gather data to assist law enforcement. 

[Figure: Conversation with fraud detected in Dynamics 365 Customer Service]

Try Nuance Gatekeeper

Sign up for the DCCP Biometric Authentication and Fraud Prevention preview and learn how you can provide next-level security for your customers.
