Measuring What Matters: Redefining Excellence for AI Agents in the Contact Center 


This article is contributed. See the original author and article here.

The contact center industry is at an inflection point. AI agent performance measurement is becoming essential as contact centers shift toward autonomous resolution. Gartner predicts that by 2029, AI agents will autonomously resolve 80% of common customer service issues. Yet, despite massive investment in conversational AI, most organizations lack a coherent way to measure whether their AI agents are any good. Traditional metrics like AHT and CSAT are important for tracking business results, but they are trailing signals: they don’t tell you whether an AI agent is competent, reliable, or, most importantly, improving.

This isn’t just a technical problem. It’s a business problem. Without rigorous measurement, companies can’t improve their agents, can’t demonstrate ROI, and can’t confidently deploy AI to handle their most valuable customer interactions. 

What Makes a Great Customer Service Agent? 

In 2017, Harvard Business Review published research that challenged everything the industry believed about customer service excellence. The study, based on data from over 1,400 service representatives and 100,000 customers worldwide, revealed a truth that contradicts many support manuals. Customers don’t want to be pampered during support interactions. They just want their problems solved with minimal effort and maximum speed. This research also highlights why strong AI agent performance measurement is required to benchmark these behavioral models.

The research team identified seven distinct personality profiles among customer service representatives. Two profiles stand out as particularly instructive for understanding AI agent design: 

Empathizers are the agents most managers would prefer to hire. They are natural listeners who prioritize emotional connection. They validate customer feelings, express genuine concern, and focus on making customers feel heard. When a frustrated customer calls about a billing error, an Empathizer responds with warmth: “I completely understand how frustrating that must be. Let me look into this for you and make sure we get it sorted out.” Empathizers excel at building rapport and defusing tension. Managers love them: 42% of surveyed managers said they’d preferentially hire this profile.

Controllers take a fundamentally different approach. They’re direct, confident problem-solvers who take charge of interactions. Rather than asking customers what they’d like to do, Controllers tell them what they should do. When that same frustrated customer calls about a billing error, a Controller responds differently: “I see the problem. There’s a duplicate charge from October 15th. I’m removing it now and crediting your account. You’ll see the adjustment within 24 hours. Is there anything else I can help you fix today?” Controllers are decisive, prescriptive, and focused on the fastest path to resolution.

Here’s what the HBR research revealed: Controllers dramatically outperform Empathizers on virtually every quality metric that matters: customer satisfaction, first-contact resolution, and especially customer effort scores. Yet only 2% of managers said they’d preferentially hire Controllers. This does not eliminate the need for empathetic agents but clarifies that empathy is necessary but not enough. 

This insight becomes even more important when we consider the context of modern customer service. Nearly a decade of investment in self-service technology means that by the time a customer reaches a human or an AI agent, they’ve already tried to solve the problem themselves. They’ve searched for the FAQ, attempted the chatbot, maybe even watched a YouTube tutorial. They’re not calling because they want to chat. They’re calling because they’re stuck, frustrated, and need someone to take charge and fix their problem. 

The HBR research quantified this: 96% of customers who have low-effort service experience intend to re-purchase from that company, directly translating into higher retention and recurring revenue. For high-effort experiences, that number drops to just 9%. Customer effort is four times more predictive of disloyalty than customer satisfaction. 

The AI Advantage: Dynamic Persona Adaptation 

Human agents are who they are. An Empathizer can learn Controller techniques, but their natural instincts will always pull toward emotional validation. A Controller can practice active listening, but they’ll always be most comfortable cutting to the chase. Training can shift behavior at the margins, but fundamental personality is remarkably stable.

AI agents can learn from the best human agents and adapt their style in real time based on conversation context. A well-designed agent can operate in Controller mode for straightforward technical issues, direct and prescriptive, and shift to Empathizer mode when a customer shares difficult news. It adapts mid-conversation based on sentiment, issue complexity, and customer preferences.

This isn’t about mimicking personality types. It’s about dynamically deploying the right approach for each moment of each interaction. The best AI agents don’t choose between being helpful and being efficient. They recognize that true helpfulness often means being efficient. They adapt their communication style to what each customer needs in each moment. 

But this flexibility adds to the fundamental measurement challenges for both human and AI agents’ evaluation. There is no single “best” conversation. All interactions are highly dynamic with no fixed reference for comparison, and the most important business metrics are trailing and hard to attribute at the conversation or agent level. As a result, no single metric can capture this complexity. We need a framework that evaluates agent capabilities across contexts. 

Defining Excellence: What the Best AI Agents Achieve 

Before introducing a measurement framework, let’s establish benchmarks that define world-class performance.

First-Contact Resolution (FCR) measures whether the customer’s issue was fully resolved without requiring a callback, transfer, or follow-up. Industry average sits around 70-75%.  This matters because FCR correlates directly with customer satisfaction: centers with high FCR see 30% higher satisfaction scores than those struggling with repeat contacts. 

Customer Satisfaction (CSAT) captures how customers feel about their interaction. The industry average, measured via post-call surveys, hovers around 78%. World-class performance means 85% or higher. Top performers in 2025 are pushing toward 90%. 

Response Latency is particularly critical for voice AI. Human conversation has a natural rhythm, roughly 500 milliseconds between when one person stops speaking, and another responds. AI agents that exceed this threshold feel unnatural. Research shows that customers hang up 40% more frequently when voice agents take longer than one second to respond. The target for production voice AI is 800 milliseconds or less, with leading implementations achieving sub-500ms latency. 

Average Handle Time (AHT) varies significantly by industry. Financial services averages 6-8 minutes, healthcare 8-12 minutes, technical support 12-18 minutes. The key insight is that AHT should be minimized without sacrificing resolution quality. Fast and wrong is worse than slow and right, but fast and right is the goal. 

These benchmarks provide targets, but they are trailing signals and don’t tell us how to build agents that achieve them. For that, we need to understand the three pillars of agent quality. 

The Three Pillars: Understand, Reason, Respond 

Every customer interaction, whether with a human or an AI, follows the same fundamental structure. The agent must understand what the customer is saying, reason about how to help, and deliver an effective answer. The key is that any weakness in any pillar undermines the entire interaction. LLM benchmarks are fragmented and do not provide a holistic and focused view into contact center scenarios. 

Pillar One: Understand 

The first challenge is accurately capturing and interpreting customer input. For voice agents, this means speech recognition that works in real-world conditions: background noise, accents, interruptions, and domain-specific terminology. For video or images, it means visual understanding that handles varying noise, object occlusion, and context-dependent interpretation. Classic benchmarks are misleading here. Models achieving 95% accuracy on clean test data often fall to 70% or below in production environments with crying babies, barking dogs, and customers calling from their cars. Additionally, interruptions and system latency are key challenges that impact understanding quality.

Beyond transcription, understanding requires intent determination. When a customer says, “I’m calling about my order. I think it was delivered to the wrong address,” the agent needs to identify both the topic (order delivery) and the specific issue (wrong address). It needs to detect that this is a complaint requiring resolution, not just an informational query. And ideally, it should pick up on emotional cues: frustration, urgency, and confusion should all influence how it responds.

Key metrics for this pillar include word error rate for transcription accuracy, intent recognition precision and recall, and latency from when the customer stops speaking to when the agent begins responding. Interruption rates also matter. Agents that talk over customers while they’re still speaking destroy the conversational experience. 
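Word error rate, for example, is conventionally computed from the word-level edit distance between a reference transcript and the ASR hypothesis: (substitutions + deletions + insertions) divided by the number of reference words. A minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table of edit distances between prefixes.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# "the duplicate charge was removed" vs. a hypothesis with one substitution
# and one deletion yields WER = 2/5.
```

Production pipelines typically add text normalization (casing, punctuation, number formats) before scoring, which this sketch omits.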

Pillar Two: Reason 

Understanding what the customer said is only the beginning. The agent must then determine the right course of action. This is where “intelligence” in artificial intelligence matters. 

Effective reasoning means connecting customer intent to appropriate actions. If the customer needs their address changed, the agent should access the order management system, verify customer identity, make the change, and confirm success. If the issue is more complex (say, the package was marked delivered but never arrived), the agent needs to pull tracking information, assess whether this looks like misdelivery, determine whether a replacement or refund is appropriate, and potentially flag the case for investigation.
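As an illustration only (the handler names and context fields below are hypothetical, not part of any product API), the intent-to-action connection described above can be sketched as a dispatch table that escalates rather than guesses when the intent is unknown:

```python
# Hypothetical handlers; a real agent would call order-management APIs here.
def change_address(ctx: dict) -> str:
    return f"Address on order {ctx['order_id']} updated to {ctx['new_address']}."

def investigate_missing_package(ctx: dict) -> str:
    return f"Order {ctx['order_id']} flagged for misdelivery investigation."

HANDLERS = {
    "order.change_address": change_address,
    "order.missing_package": investigate_missing_package,
}

def resolve(intent: str, ctx: dict) -> str:
    handler = HANDLERS.get(intent)
    if handler is None:
        # Unknown intent: admit uncertainty and escalate instead of fabricating.
        return "escalate_to_human"
    return handler(ctx)
```

The escalation branch matters as much as the happy path: a missing entry in the table should produce a hand-off, never an invented answer.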

This pillar also encompasses multi-turn context management. Customers don’t speak in complete, self-contained utterances. They reference previous statements, use pronouns, and assume the agent is tracking the conversation. “What about my other order?” only makes sense if the agent remembers discussing a first order. “Can you do that for my husband’s account too?” requires understanding what “that” refers to and what permissions are appropriate. 

Perhaps most critically, reasoning quality includes knowing what the agent doesn’t know. A well-designed agent admits uncertainty rather than fabricating answers. This is particularly challenging with LLMs, which are trained to produce an answer no matter what. There are two parts to this problem. First, the agent should reason about what it is missing and ask for additional data; in truly autonomous agents, such interactions should go beyond slot filling or a scripted interview and be dynamic, adaptive, and contextual. Second, when the agent is stuck, it should admit that and either ask a supervisor for help or escalate. In every case, responsible AI guardrails and validations are key to ensuring proper agent responses and guarded interactions.

Key metrics include intent resolution rate, task completion rate, context retention across turns, and hallucination frequency. 

Pillar Three: Respond 

The final pillar is delivering the response effectively. Even perfect understanding and flawless reasoning mean nothing if the agent can’t communicate the resolution clearly. 

Answer quality encompasses both content and delivery. The content must be accurate, complete, and actionable. Customers shouldn’t need to ask follow-up questions because the agent omitted critical information. They shouldn’t be confused by jargon or ambiguous phrasing. 

In a multi-channel, multi-modal agent world, AI agents must adapt how they deliver responses based on the channel and context. Effective delivery is about aligning the form, timing, and tone of responses to the interaction at hand. Emotional intelligence matters regardless of modality: when the tone, voice, or interaction feels mechanical, even correct content can lose its impact and undermine trust. Across channels, the objective remains consistent: ensure responses feel natural, clear, and trustworthy from the customer’s perspective.

The Controller research is relevant here. The best responses are often more direct than traditional customer service training suggests. Instead of “I’d be happy to help you with that. Let me take a look at your account and see what options might be available for addressing this situation,” top performers say “I see the problem. Here’s what I’m doing to fix it.” 

Key metrics include solution accuracy, response completeness, fluency ratings, and post-response customer sentiment. For voice, prosody and expressiveness scores capture delivery quality. 

To build AI agents that customers truly trust, organizations must move beyond fragmented metrics and isolated KPIs. Excellence in customer service is not the result of a single capability. It emerges from how well an agent performs across the three pillars. These pillars form the foundation of modern AI agent performance measurement.

A Composite Score as Unified Measure  

We believe the future of AI agent evaluation lies in a composite approach, the one that brings together these core capabilities into a unified measure of quality.  However, no single metric can tell you whether an AI agent truly works well with real customers. Individual measures tend to over-optimize narrow behaviors while hiding the trade-offs between speed, accuracy, reasoning quality, and customer experience.  
 

A composite score solves this problem by balancing multiple dimensions into one holistic view of agent performance. This approach reveals strengths and weaknesses at the system level rather than through isolated signals. Most importantly, a unified score enables consistent benchmarking and clearer progress tracking. It gives both executives and practitioners a metric they can confidently use to drive improvement. 

We are introducing a contact center evaluation guideline and a set of metrics designed to holistically assess AI agent performance across the dimensions that matter most in real customer interactions. Rather than optimizing isolated signals, this approach evaluates how effectively an agent understands customer intent, reasons through the problem space, and delivers clear, confident, and timely resolutions. 

These guidelines are intended to provide a practical foundation for teams building, deploying, and scaling AI agents in production. They enable consistent measurement, meaningful comparison, and continuous improvement over time.  

This framework is intended to be open and evaluable by anyone. For a deeper dive into the evaluation framework, recommended metrics, and examples of how this can be applied in practice, please refer to the detailed blog: Evaluating AI Agents in Contact Centers: Introducing the Multi-modal Agents Score 

The post Measuring What Matters: Redefining Excellence for AI Agents in the Contact Center  appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Evaluating AI Agents in Contact Centers: Introducing the Multi-modal Agents Score 



As self-service becomes the first stop in contact centers, AI agents now define the frontline customer experience. Modern customer interactions span voice, text, and visual channels, where meaning is shaped not only by what is said, but by how it’s said, when it’s said, and the context surrounding it.   

In customer service, this is even more pronounced. Customers reaching out for support don’t just convey information; they convey intent, sentiment, urgency, and emotion, often simultaneously across modalities. A pause or interruption on a voice call signals frustration, a blurred document image leads to downstream reasoning failures, and a flat or fragmented response erodes trust, even if the answer is correct. In our previous blog post, we reflected on the evolution of contact centers from scripted interactions to AI-driven experiences. As the contact center landscape continues to change, the way we evaluate AI agents must change with it. Traditional approaches fall short by focusing on isolated metrics or single modalities, rather than the end-to-end customer experience.

Contact centers struggle to reliably assess whether their AI agents are improving over time or across architectures, channels, and deployments. While cloud services rely on absolute measures like availability, reliability, and latency, AI agent evaluation today remains fragmented, relative, and modality-specific. What would be useful is an absolute, normalized measure of end-to-end conversational quality, one that reflects how customers actually experience interactions and answers the fundamental question: Is this agent good at handling real customer conversations?

Introducing the Multimodal Agent Score (MAS) 

MAS is built on the observation that every service interaction, whether human-to-human or human-to-agent, naturally progresses through three fundamental stages (explored in more detail in Measuring What Matters: Redefining Excellence for AI Agents in the Contact Center):

  1. Understanding the input – accurately capturing and interpreting what the customer is saying, including intent, context, and signals such as urgency or emotion. 
  2. Reasoning over that input – determining the appropriate actions, managing context across turns, and deciding how to resolve the issue responsibly. 
  3. Responding effectively – delivering clear, natural, and confident resolution in the right tone and format. 

The Multimodal Agent Score directly mirrors these stages. It is a weighted composite score (0-100) designed to assess end-to-end AI agent quality across modalities (voice, text, and visual), aligned to how real conversations naturally unfold.

MAS Dimensions and Parameters 

Conversation Stage | MAS Quality Dimension | What It Measures | Example Parameters
Understanding | Agent Understanding Quality (AUQ) | How well the agent hears and understands the user (e.g., latency, interruptions, speech recognition accuracy) | Intent determination, interruptions, missed response windows
Reasoning | Agent Reasoning Quality (ARQ) | How well the agent interprets intent and resolves the user’s request | Intent resolution, acknowledgement
Response | Agent Response Quality (AReQ) | How well the agent responds, including tone, sentiment, and expressiveness | CSAT, tone stability

Computing the MAS score:

MAS is computed as a weighted aggregation of the three quality dimensions stated in the table above:

MAS = 100 × Σj (αj · wj · Qj) / Σj (αj · wj)

where: 

  • Qj represents one of the three quality dimensions: Agent Understanding Quality (AUQ), Agent Reasoning Quality (ARQ), and Agent Response Quality (AReQ) 
  • wj represents the cost or weight of each dimension 
  • αj captures the a priori probability of the respective dimension  

Computing each MAS dimension: 

Computing each MAS dimension (AUQ, ARQ, AReQ) involves aggregating underlying parameters into a single weighted score. Raw measurements (such as interruption, intent determination, or tone stability) are first normalized into a 0–1 score before aggregating them at the dimension level. We apply a linear normalization function clipping each raw measurement at predefined thresholds suitable for the parameter being measured (for example, maximum allowed interruption or minimum required accuracy). This maintains the sensitivity of each parameter in the relevant effective range and avoids the negative impact of measurement outliers, making MAS an absolute measure of agent quality. 
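As a rough sketch of this computation (the thresholds and weights below are invented for illustration, not the values MAS actually uses), clip-and-normalize followed by weighted aggregation might look like:

```python
def normalize(raw: float, lo: float, hi: float) -> float:
    """Clip raw to the [lo, hi] threshold band, then map linearly to [0, 1].

    For lower-is-better parameters (e.g., interruption rate), pass the
    thresholds reversed (lo = worst allowed value, hi = best value).
    """
    clipped = max(min(raw, max(lo, hi)), min(lo, hi))
    return (clipped - lo) / (hi - lo)

def dimension_score(params: list[tuple[float, float]]) -> float:
    """Aggregate (normalized_score, weight) pairs into one 0-1 dimension score."""
    total = sum(w for _, w in params)
    return sum(s * w for s, w in params) / total

def mas(dimensions: list[tuple[float, float, float]]) -> float:
    """Aggregate (Q_j, w_j, alpha_j) triples into a composite 0-100 score."""
    denom = sum(w * a for _, w, a in dimensions)
    return 100 * sum(q * w * a for q, w, a in dimensions) / denom

# Illustrative: interruption rate of 0.045 scored against an assumed
# worst-case threshold of 0.2 (lower is better, so thresholds reversed).
auq_interruption = normalize(0.045, lo=0.2, hi=0.0)
```

Clipping at the thresholds is what keeps a single outlier measurement (one pathological call) from dragging the whole dimension score outside its effective range.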

MAS in Practice: Voice Agent Evaluation Example 

To ground MAS in real-world conditions, we evaluated ~2,000 synthetic voice conversations across two agent configurations using identical prompts and scenarios: 

  • Agent-1: Chained voice agent using a three-stage ASR–LLM–TTS pipeline 
  • Agent-2: Real-time voice agent using direct speech-to-speech architecture  

The evaluation dataset included noise, interruptions, accessibility effects, and vocal variability to simulate production environments.  

Shown below is a comparison of core MAS metrics, including dimension-level scores and the overall MAS score. 

Voice Evaluation Results (Excerpt) 

Dimension | Parameter | Agent-1 | Agent-2
AUQ | Interruption Rate (%) | 0.045 | 0.025
AUQ | Missed Response Windows | 0.00045 | 0.0015
ARQ | Intent Resolution | 0.13 | 0.08
ARQ | Acknowledgement Quality | 0.08 | 0.10
AReQ | CSAT | 0.128 | 0.126
AReQ | Tone Stability | 0.16 | 0.14

Key Observations  

MAS provides flexibility to surface quality insights at an aggregate level while enabling deeper analysis at the individual parameter level. To better understand performance outliers and anomalous behaviors, we went beyond composite scores and analyzed agent quality parameter by parameter. This deeper inspection allowed us to attribute observed degradations to specific factors. For example:

  1. Channel quality matters: Communication channels introduce multiple challenges such as latency, interruptions, compression, and loss of information, penalizing recognition and response quality. 
  2. Turn-taking quality is critical: Missed windows and interruptions strongly correlate with abandonment. 
  3. Tone and coherence matter: Cleaner audio and uninterrupted responses lead to higher acknowledgement and perceived empathy. 
  4. MAS reveals root causes: Differences in scores clearly distinguish understanding, reasoning, and response failures, something single metrics cannot do. 

Looking Forward 

We will continue to refine and evolve MAS as we validate it against real-world deployments and business outcomes. As the Dynamics 365 Contact Center team, we aim to establish MAS as our quality benchmark for evaluating AI agents across channels. Over time, we also intend to make MAS broadly available, extensible, and pluggable, enabling organizations to adapt it to evaluate their contact center agents across modalities. For readers interested in the underlying methodology and mathematical foundations, a detailed research paper will be published separately. 

The post Evaluating AI Agents in Contact Centers: Introducing the Multi-modal Agents Score  appeared first on Microsoft Dynamics 365 Blog.


Announcing General Availability of Proactive Voice Enhancements in Dynamics 365 Contact Center 



We’re excited to announce the general availability of proactive voice engagements in Dynamics 365 Contact Center, delivering enterprise-grade outbound calling for service scenarios. We want to thank everyone who participated in the preview and shared valuable feedback. This release introduces key capabilities customers asked for during preview, including Answering Machine Detection, SIP based call outcomes, and the predictive dial mode. These will enable organizations to operationalize proactive voice scenarios with greater accuracy, consistency, and reliability. 

Answering Machine Detection (AMD) 

Customers can enable AMD through the answering machine detection system topic in Copilot Studio. When a machine is detected, the system automatically follows the predefined flows, like playing a customized message or ending the call. This improves predictability across outbound engagements by helping teams avoid nonproductive connections.  

SIP Based Call Outcomes 

Proactive engagement now captures detailed call outcomes using SIP codes. This allows every outbound call to be classified with results such as LiveAnswer, AnsweringMachine, Busy, NoAnswer, InvalidAddress, and other states. These outcomes are logged automatically and provide clear insight into how each call concluded without requiring additional configuration. This classification supports more accurate reporting and helps teams determine appropriate next steps. 
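To illustrate the idea (the code-to-outcome mapping below is an assumption for illustration, not the service’s documented mapping), a classifier over standard SIP response codes, combined with AMD, might look like:

```python
# Illustrative mapping from standard SIP response codes to outcome labels.
# The actual mapping used by Dynamics 365 Contact Center is not shown here.
SIP_OUTCOMES = {
    200: "LiveAnswer",       # 200 OK: call answered
    486: "Busy",             # 486 Busy Here
    480: "NoAnswer",         # 480 Temporarily Unavailable
    404: "InvalidAddress",   # 404 Not Found: number does not exist
}

def classify_call(sip_code: int, amd_detected_machine: bool = False) -> str:
    outcome = SIP_OUTCOMES.get(sip_code, "Other")
    # An answered call may still be a machine; AMD refines the label.
    if outcome == "LiveAnswer" and amd_detected_machine:
        return "AnsweringMachine"
    return outcome
```

The key design point is that the SIP layer alone cannot distinguish a person from voicemail; that refinement comes from answering machine detection after the call connects.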

Predictive Dial Mode for Service Scenarios 

The predictive dial mode places calls ahead of CSR availability by estimating when CSRs will become free. By using metrics like abandonment rate and average wait time, it can pace how quickly calls are initiated. Organizations can manage higher-volume service operations efficiently by increasing the likelihood that a customer connects at the moment a CSR becomes available. This improves both throughput and customer experience.  
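A highly simplified sketch of the pacing idea (the formula and inputs here are illustrative assumptions, not the product’s actual algorithm): dial enough numbers that, given the historical answer rate, roughly one live answer arrives per CSR predicted to become free.

```python
def calls_to_place(available_soon: int, answer_rate: float, in_flight: int) -> int:
    """Pace outbound dials so a CSR is likely free when a customer answers.

    available_soon: CSRs predicted to become free within the dialing window
    answer_rate:    historical fraction of dials reaching a live customer
    in_flight:      dials already ringing
    """
    if answer_rate <= 0:
        return 0
    # Expected live answers per dial is answer_rate, so to produce
    # available_soon answers we need available_soon / answer_rate dials,
    # minus those already in progress.
    needed = available_soon / answer_rate - in_flight
    return max(0, round(needed))
```

Real predictive dialers also cap this number to keep the abandonment rate (calls answered with no CSR free) under a regulatory or configured limit, which this sketch omits.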

What’s Next 

As proactive engagement continues to mature, we are focused on expanding channel coverage and strengthening dialing performance. This will deliver more flexible options for connecting with customers at scale. 

  • Conversational SMS: Support for proactive engagement in the SMS channel is now in preview. Organizations can reach customers using their preferred medium while maintaining the same routing, outcome tracking, and compliance standards established for voice. 
  • Improvements to preview dialing: Preview dial mode enhancements give representatives more context prior to each call. Reviewing customer details and deciding when to initiate the connection gets simpler.

Learn more about proactive engagement

To learn more, read the documentation: Configure proactive engagement | Microsoft Learn

Try the preview and ensure your organization stays ahead of customer expectations. Send your feedback to pefeedback@microsoft.com.

The post Announcing General Availability of Proactive Voice Enhancements in Dynamics 365 Contact Center  appeared first on Microsoft Dynamics 365 Blog.


Agentic AI for inventory to deliver: From procurement to fulfillment



When customers place an order, they expect speed, accuracy, and reliability. Behind the scenes, inventory-to-deliver processes are what makes that promise possible, helping to ensure the right products are available at the right time to meet customer expectations while controlling costs. For operational professionals, inventory isn’t just a number on a spreadsheet, it’s the lifeline of the supply chain. It determines whether you can fulfill demand without delays, avoid costly stockouts, and keep working capital flowing. From procurement and production to fulfillment and customer satisfaction, inventory-to-deliver impacts every aspect of the supply chain.

In today’s fast-paced market, poor inventory visibility can lead to stockouts, excess holding costs, and missed revenue opportunities. Conversely, a well-orchestrated inventory strategy drives efficiency, reduces waste, and strengthens resilience against disruptions. It enables businesses to optimize working capital, improve cash flow, and deliver on promises consistently. So, how can an agent-ready enterprise resource planning (ERP) platform reinvent the inventory-to-deliver process?

[Embedded video: ERP process agents]

Microsoft Cloud and agent platform enables inventory to deliver transformation

Microsoft Dynamics 365 can transform inventory management from a reactive task into a strategic advantage with an agent-ready foundation that spans finance, supply chain, sales, and operations for a single source of truth that is both scalable and secure.

This same data foundation enables customers to buy, build, and customize agents to infuse across processes. For a refresher on understanding the agent landscape available today, visit Reinventing business process with AI: Agents in record to report where we explore the difference between first party, third party, and custom agents.

Automate vendor communication with a first party agent from Dynamics 365

The Supplier Communications Agent in Dynamics 365 Supply Chain Management is designed to automate routine procurement communications between purchasing teams and vendors. Traditionally, these interactions—such as following up on purchase orders or confirming changes—are manual, repetitive, and often handled via email, even in organizations using electronic data interchange (EDI). The Supplier Communications Agent can streamline these low-complexity tasks by automating vendor outreach and updates, freeing procurement professionals to focus on strategic activities. This not only seeks to improve efficiency but also reduces overall procurement costs by minimizing time spent on administrative work.

Explore partner agents to support the inventory to deliver process

Model Context Protocol (MCP) servers are configurable bridges between the business data within your line-of-business apps and the partner or custom-built agents you want to use. MCP serves as a universal intermediary, unlocking access to a unified platform and app data, modernizing how AI agents are interoperable with your apps. Let’s explore a few partner-built agents that will help you realize value across your supply chain today.

Warehouse Advisor Agent by MCA Connect

The Warehouse Advisor Agent leverages machine learning and predictive analytics to automate and improve key processes such as slotting, inventory consolidation, and cycle counting. By analyzing real-time data and historical trends, the agent delivers actionable insights that help warehouse teams make smarter, faster decisions.

This solution is ideal for warehouse managers, operations leaders, and supply chain professionals in distribution and manufacturing industries who are looking to reduce inefficiencies, improve inventory accuracy, and increase labor productivity. It integrates seamlessly with Dynamics 365’s Warehouse Management System (WMS), enabling users to deploy intelligent automation without disrupting existing workflows.

Inventory Acquisition and Re‑Balancing Agent from RSM

The Inventory Acquisition and Re‑Balancing Agent from RSM enables smarter inventory decisions by analyzing demand signals, supply availability, and stock imbalances in Dynamics 365. The agent can recommend rebalancing and acquisition actions to reduce stockouts, minimize excess inventory, and improve working capital efficiency.

Inbound Load Agent from Fellowmind

Fellowmind’s Inbound Load Agent can streamline inbound logistics by intelligently composing and optimizing loads based on demand, capacity, and operational constraints within Dynamics 365. The agent seeks to help logistics teams reduce transportation costs, improve warehouse utilization, and simplify complex inbound planning decisions.

Get started with agents for inventory-to-deliver processes

The Microsoft platform brings together secure, scalable cloud services with Dynamics 365’s unified ERP capabilities to streamline the entire inventory-to-delivery process. By leveraging real-time data and intelligent workflows, businesses gain supply chain agility to better meet customer expectations with precision. Partner-built agents, powered by MCP, amplify this value, enabling autonomous actions and predictive insights that transform operations from reactive to proactive. Together, these innovations create a resilient, future-ready foundation for delivering efficiency and growth at scale.

The post Agentic AI for inventory to deliver: From procurement to fulfillment appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Price Override in Project Operations | Part 2 [Understanding Change Amount effect]

Here’s what the Change Amount and Change Percentage fields mean when you perform price overrides in Project Operations. Be careful and double-check these values before proceeding.

The post Price Override in Project Operations | Part 2 [Understanding Change Amount effect] appeared first on D365 Demystified.


From manual work to meaningful selling: How Agentic AI is transforming Dynamics 365 Sales


Every seller knows how much time gets lost between selling moments. Information arrives in many forms—emails, screenshots, documents, handwritten notes—and turning that into structured CRM data often means manual copying, rework, or skipped fields altogether. At the same time, answering everyday questions like “Which leads should I follow up on?” or “How is my pipeline shaping up right now?” can require complex filters, multiple views, or exporting data just to get a clear answer.

Dynamics 365 Sales is evolving to address these challenges with agentic assistance. Instead of sellers adapting to rigid forms, grids, and filters, agentic AI in Dynamics 365 Sales now adapts to how sellers naturally work—by understanding unstructured inputs, interpreting intent, and assisting directly at the point of action. Two purpose-built agents bring this to life:

  • A Data Entry Agent that uses LLMs to understand pasted content and uploaded files, extract relevant details, and quickly populate CRM forms for faster lead and contact creation.
  • A Data Exploration Agent that turns natural language questions into filtered views and visual insights, helping sellers quickly understand trends across opportunities, leads, or accounts.

Together, these agents reduce two of the biggest productivity drains in sales—manual data entry and cumbersome data exploration—so sellers can spend less time managing CRM and more time engaging customers.

Let’s look at how these experiences use agentic AI in Dynamics 365 in real sales scenarios:

Capture sales data faster with the Data Entry Agent
Accurate customer data is critical, but sellers encounter information in many forms—emails, websites, documents, and business cards. The Data Entry Agent uses large language models to understand unstructured text and files, infer intent, and map extracted details to the right CRM fields, without requiring sellers to manually interpret or retype information.

Capture Lead and Contact details instantly with Smart Paste

When a seller receives an inbound email from a prospect, creating a lead often means manually copying names, email addresses, phone numbers, and company details into CRM.

You want to respond quickly, but first you need to log the lead.

With Smart Paste (Preview), sellers can copy the email content and paste it into the lead or contact form. The system analyzes the copied text, extracts key details such as name, company, email, and phone number, and suggests values inline for the relevant fields. Each suggestion includes an inline citation from the email, so sellers can clearly see the source of the information.

Sellers can review AI-generated field suggestions, view citations, accept what looks right, and save—enabling faster lead capture with greater confidence in data accuracy.

Similarly, a seller may be reviewing a prospect’s website or LinkedIn profile in separate tabs. Instead of manually re-entering details later, they can copy text from the company’s About Us page or the prospect’s LinkedIn profile and paste it directly into a CRM form. The agent analyzes the content and suggests values such as industry, company name, location, and job title, allowing the seller to review and apply the information immediately while the context is still fresh.
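To make the extraction step concrete, the toy sketch below approximates it with regular expressions. This is only a stand-in: the real agent uses a large language model rather than pattern matching, and the function name, heuristics, and sample text here are all illustrative.

```python
import re

def extract_lead_fields(text: str) -> dict:
    """Pull likely lead fields out of pasted free text. A toy stand-in
    for the LLM extraction behind Smart Paste."""
    fields = {}
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    if email:
        fields["email"] = email.group()
    phone = re.search(r"(\+?\d[\d\s().-]{7,}\d)", text)
    if phone:
        fields["phone"] = phone.group(1).strip()
    # Naive heuristic: the sender's name often follows a sign-off like "Best,".
    name = re.search(r"(?:Regards|Best|Thanks),?\s*\n\s*([A-Z][a-z]+ [A-Z][a-z]+)", text)
    if name:
        fields["name"] = name.group(1)
    return fields

signature = (
    "Happy to chat next week.\n"
    "Best,\n"
    "Jane Doe\n"
    "jane.doe@contoso.com | +1 425 555 0100"
)
print(extract_lead_fields(signature))
# {'email': 'jane.doe@contoso.com', 'phone': '+1 425 555 0100', 'name': 'Jane Doe'}
```

The gap between this sketch and the product is exactly why an LLM is used: free text rarely follows predictable patterns, and the model can also ground each suggested value in a citation back to the source passage.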

Convert Physical Documents into CRM Records with Files (Preview)

After trade shows, conferences, or in-person meetings, sellers often return with a stack of business cards or documents from dozens of conversations. Manually transcribing this information delays follow-up and increases the chance of errors.

With Files (Preview), sellers can upload images of business cards or documents (.txt, .docx, .csv, .pdf, .png, .jpg, .jpeg, or .bmp) directly into the form. The system analyzes the uploaded files and suggests values for relevant fields, including names, titles, company details, email addresses, and phone numbers. Sellers simply review and confirm the suggestions, turning what once took hours into minutes.

This enables faster post-event follow-up and more complete lead and contact records.

Find and understand sales data faster with the Data Exploration agent

Finding the right records and understanding trends is essential for sellers, but navigating views and filters can be time-consuming. Powered by natural language understanding, the Data Exploration Agent (Preview) translates seller questions into structured filters, letting users interact with CRM data in plain language instead of complex query logic. This makes it easier to plan, prioritize, and understand pipeline health directly within their views.

Find the right records faster using Natural Language in Views

Filtering records in CRM can be time-consuming, especially when multiple criteria are involved. Imagine planning your day and opening My Open Leads to focus on recent campaign responses. Instead of building complex filters, you simply type: “Leads from the Summer Campaign created last month.”

Or, when preparing for a forecast call, you search: “Opportunities from Technology accounts closing next quarter.”

The system interprets the request and automatically applies the appropriate filters to the view. Sellers can review and modify the filters if needed, giving them both speed and control. This simplifies daily planning, follow-ups, and pipeline reviews.
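The translation step can be sketched in miniature as below. Simple keyword matching stands in for the product's natural language understanding, and the filter field names are made up for illustration.

```python
def parse_view_query(question: str) -> dict:
    """Map a plain-language question onto structured view filters.
    A toy stand-in for the Data Exploration Agent's query translation."""
    q = question.lower()
    filters = {}
    if "summer campaign" in q:
        filters["source_campaign"] = "Summer Campaign"
    if "last month" in q:
        filters["created_on"] = "last-month"
    if "technology" in q:
        filters["industry"] = "Technology"
    if "next quarter" in q:
        filters["close_date"] = "next-quarter"
    return filters

print(parse_view_query("Leads from the Summer Campaign created last month"))
# {'source_campaign': 'Summer Campaign', 'created_on': 'last-month'}
```

Because the output is an ordinary set of filters rather than an opaque answer, the seller can inspect and adjust what was applied, which is the "speed and control" trade-off the feature is built around.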

Turn filtered views into charts with Visualize (Preview)

Understanding trends often requires more than scanning rows of data, but building dashboards or exporting reports isn’t practical for day-to-day sales work. With Visualize (Preview), sellers can turn the filtered data they’re already viewing into interactive charts with a single click—directly within the view and without breaking their flow.

Because the visualization is generated from the current view and visible columns, it automatically reflects the exact filters, segments, and scope the seller is working with. Sellers can hover to see detailed values, drill into specific segments, and switch chart types on the fly as new questions come up. This makes it easy to answer questions like “Where are most of my open opportunities concentrated?”, “Which lead sources are driving volume right now?”, or “How is my pipeline distributed across stages?”

Visualize is designed for quick, in-the-moment understanding, not deep reporting. It complements Power BI by giving sellers immediate visual insight at the point of work—without creating reports, navigating dashboards, or leaving CRM—so they can recognize patterns and act faster while staying in flow.
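The aggregation behind such a chart can be sketched as follows. Column and record names are hypothetical, and the real feature renders interactive charts rather than returning value pairs; the point is only that the series is derived from the rows of the current view, so it always mirrors the view's filters.

```python
from collections import Counter

def chart_series(rows, column):
    """Aggregate the rows of the current filtered view by one visible
    column into chart-ready (label, count) pairs, most frequent first."""
    return Counter(row[column] for row in rows).most_common()

# A tiny stand-in for a filtered "My Open Opportunities" view.
view = [
    {"name": "Contoso deal", "stage": "Qualify"},
    {"name": "Fabrikam deal", "stage": "Propose"},
    {"name": "Adventure Works deal", "stage": "Qualify"},
]
print(chart_series(view, "stage"))  # [('Qualify', 2), ('Propose', 1)]
```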

Enable these agentic capabilities in Power Platform Admin Center

  • To enable Data Entry agent capabilities, go to Power Platform Admin Center > Settings > Product > Features. Under AI form fill assistance, turn on:
    • Automatic suggestions
    • Smart paste and file suggestions
    • Form fill assist toolbar
    Changes apply to model-driven apps once saved.
  • To enable Data Exploration agent capabilities, go to Power Platform Admin Center > Settings > Product > Features.
    • Under Natural language grid and view search, set Enable this feature for to All users immediately.
    • Turn on Allow AI to generate charts to visualize the data in a view, and enable AI-generated chart styling for a consistent visual experience.

Focus More on Selling, Less on Administration

With agentic AI in Dynamics 365 Sales, the platform evolves from a system of record into a system that understands, assists, and adapts—helping sellers spend more time selling and less time managing CRM.


The post From manual work to meaningful selling: How Agentic AI is transforming Dynamics 365 Sales  appeared first on Microsoft Dynamics 365 Blog.


6 core capabilities to scale agent adoption in 2026


Learn six core capabilities organizations need to support agent adoption at scale in 2026, from governance and security to empowerment and operations.

The post 6 core capabilities to scale agent adoption in 2026 appeared first on Microsoft 365 Blog.



The 6 pillars that will define agent readiness in 2026


Learn six practical ways to build and scale agents with Microsoft Copilot Studio while supporting enterprise adoption and governance.

The post The 6 pillars that will define agent readiness in 2026 appeared first on Microsoft 365 Blog.



College Students now get 12 months of Microsoft 365 Premium and LinkedIn Premium Career on us 


We’ve put together something special just for students—a limited time offer that helps you study smarter, stand out in your job search, and turn your ambitions into reality.

The post College Students now get 12 months of Microsoft 365 Premium and LinkedIn Premium Career on us  appeared first on Microsoft 365 Blog.
