This article is contributed. See the original author and article here.
As customer service representatives juggle multiple cases, tabs, and applications throughout their day, interruptions—whether from browser crashes, accidental closures, or system reboots—can be costly. That’s why session restore, also known as app session restore, is a game-changer in Dynamics 365 Copilot Service workspace (CSw). This feature ensures service reps can pick up exactly where they left off, minimizing disruption and maximizing productivity.
Reclaiming lost time
Before session restore, service reps who lost their session due to a crash or logout had to manually reopen each case, retrace their steps, and rebuild their workspace. This not only wasted time but also increased the risk of errors and customer dissatisfaction.
Now, with session restore, CSw automatically saves the state of a service rep’s workspace, including open sessions, tabs, and navigation context. Then, it restores the workspace when the service rep logs in again. Whether a service rep was mid-case or toggling between multiple records, their environment is rehydrated with precision.
Saving your space
Session restore is enabled by default in CSw and works seamlessly behind the scenes. Here’s what it captures:
Open sessions: All active sessions, including customer cases and conversations.
Tab state: The exact session tabs and sub-tabs open within each session.
Navigation context: The specific records and views the service rep was working on.
Productivity tools: The specific productivity tool that was in focus.
When the service rep logs back in, Dynamics 365 reconstructs the workspace as it was, no clicks required.
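Conceptually, the capture-and-rehydrate cycle described above amounts to serializing the workspace state and replaying it at the next sign-in. The sketch below is purely illustrative; the field names and functions are hypothetical, not the actual Dynamics 365 implementation.

```python
import json

def capture_workspace(sessions):
    """Serialize open sessions, their tabs, and navigation context."""
    return json.dumps({"sessions": sessions})

def restore_workspace(snapshot):
    """Rebuild the workspace state from a saved snapshot."""
    return json.loads(snapshot)["sessions"]

# Hypothetical workspace: two open sessions with tab state and focus.
sessions = [
    {"id": "case-1042", "tabs": ["Summary", "Timeline"], "focus": "Timeline"},
    {"id": "chat-77", "tabs": ["Conversation"], "focus": "Conversation"},
]
snapshot = capture_workspace(sessions)            # saved as the rep works
restored = restore_workspace(snapshot)            # replayed at next login
assert restored == sessions
```

The essential property is that the restore is a faithful round trip: everything captured at save time comes back identically, so the rep sees the workspace exactly as they left it.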
Real-world impact
Organizations using session restore report significant improvements in efficiency and satisfaction. For example:
Service reps returning from a break or system update can resume work instantly.
Supervisors see fewer escalations due to lost progress.
IT teams spend less time troubleshooting session-related issues.
This feature is especially valuable in high-volume contact centers where every second counts.
Across industries, organizations are transforming how they deliver value. From engineering to IT services to internal PMOs, leaders are rethinking their operating models to become more service-centric—where success depends on how efficiently projects are delivered, how accurately time is tracked, and how transparently work connects to financial outcomes.
For these organizations, projects are not side activities—they are the business. That shift brings new demands on visibility, control, and integration. It’s no longer enough to simply manage schedules and tasks. Modern project management must connect delivery with the systems that run the enterprise: finance, resource planning, HR, and analytics.
With Microsoft Project Online retiring in September 2026, many organizations are now considering what comes next. The decision isn’t just about replacing a familiar tool—it’s about selecting a solution that supports your business model for the future. For organizations managing service-based operations, Microsoft Dynamics 365 Project Operations delivers the capabilities, flexibility, and innovation to support that evolution.
Why organizations are choosing Dynamics 365 Project Operations
As organizations look beyond Project Online and older products such as Project Server 2016 and 2019, they often share five common needs that shape their evaluation:
Plan and track resources and time in one place
Connect project data with finance and operations systems
Configure and extend workflows through the Power Platform
Support project-based service delivery from proposal to profit
Adopt a solution built for the future of AI-driven work
These needs define where Dynamics 365 Project Operations, and the broader Microsoft Cloud, deliver the solutions required to manage the end-to-end project-to-profit business process.
Unified resource and time management
Resource planning and time capture are the foundation of effective project management—and the areas where Project Operations offers immediate benefit over legacy tools. Project Operations brings project planning, resource allocation, and time tracking together in a single application that connects delivery teams with finance and leadership.
Project managers can assign resources based on skills, availability, and utilization targets. Team members can record time and expenses directly against tasks, ensuring that data flows seamlessly into cost and billing calculations. With configurable approval workflows built using Microsoft Power Automate, organizations can align governance to their unique policies—such as multi-level approval chains or project-based exceptions—without needing costly customization or development.
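As a rough illustration of the multi-level approval chains mentioned above, the following sketch routes a time entry to an escalating list of approvers. In practice this logic would be configured as a Power Automate flow rather than written as code; the thresholds and role names here are made-up.

```python
def approval_chain(hours, is_billable):
    """Return the ordered list of approvers for a time entry (illustrative)."""
    approvers = ["project_manager"]        # first-level approval is always required
    if hours > 40:                         # large entries escalate to resourcing
        approvers.append("resource_manager")
    if is_billable:                        # billable time also needs finance sign-off
        approvers.append("finance_approver")
    return approvers

# A small weekly entry needs only the project manager's approval;
# a large billable entry walks the full chain.
assert approval_chain(8, False) == ["project_manager"]
assert approval_chain(45, True) == [
    "project_manager", "resource_manager", "finance_approver"
]
```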
For organizations that previously relied on custom scripts or spreadsheets to track resource data, this connected model reduces administrative overhead and gives leaders a single, accurate view of project health, capacity, and cost at any time.
Connected to finance, operations, and ERP systems
While Project Operations is not an ERP system, it is designed to interoperate seamlessly with ERP applications—including both Microsoft Dynamics 365 and third-party systems such as SAP, Oracle, and Workday. This flexibility is a core strength of Microsoft’s platform approach.
At the heart of this interoperability is Microsoft Dataverse, a secured and extensible data platform that connects applications across Microsoft and third-party ecosystems. Through Dataverse, organizations can share project, customer, and financial data across systems while maintaining a single source of truth.
For customers already using Dynamics 365 Finance, the connection is native. Time entries, expenses, and project costs automatically flow into Finance for accounting, invoicing, and reporting, eliminating the need for custom middleware or manual data synchronization. This native link between operational execution and financial management gives leaders a unified view of profitability and compliance.
Customers using third-party ERP systems can achieve the same connected experience through Power Platform connectors and Azure integration services. Whether reporting data into a corporate data warehouse, syncing budgets to SAP, or sharing timesheet data with Workday, Project Operations offers the flexibility to operate within your existing ecosystem.
Configurable workflows through the Microsoft Power Platform
No two organizations manage projects in exactly the same way. Some rely on centralized PMOs with strict approval processes, while others empower teams to adapt workflows dynamically.
Project Operations supports both models through its foundation on the Microsoft Power Platform—giving organizations the ability to configure without code.
Using Power Apps, project leaders can design tailored forms and dashboards for project creation, approvals, or time capture. Power Automate enables process automation such as notifying managers when utilization thresholds are reached or routing expense reports for multi-level approval.
Meanwhile, Power BI brings advanced analytics to every role—from dashboards that track project margins and resource utilization to executive reports summarizing business performance across portfolios.
Because Project Operations shares the same platform as other Dynamics 365 applications, these extensions inherit enterprise-grade security, compliance, and data governance controls. The result is a solution that can evolve with your organization—without the risk or cost of maintaining custom code.
Purpose-built for project-based services
For professional services organizations—consulting firms, engineering companies, digital agencies, architecture practices, and IT service providers—Project Operations delivers a complete Professional Services Automation (PSA) solution. It aligns sales, delivery, and finance on a single connected platform, providing end-to-end visibility from proposal to profit.
Through integration with Dynamics 365 Sales, organizations can manage the sales lifecycle for their project-based opportunities. Teams can develop project quotations, proposals and estimates directly within Sales, convert them into active contracted engagements and projects, and continue through delivery, billing, and revenue recognition without switching systems. Key PSA capabilities include:
Project estimation and proposal management
Task planning and resource scheduling
Time and expense capture with mobile access
Budgeting, cost control, and forecasting
Invoicing and revenue recognition
Earned value tracking and performance analytics
Beyond professional services organizations, Project Operations also scales to other project-based verticals, including manufacturing, engineering, and construction. With key ISV and industry-specific partner solutions and extensions, it also supports internal portfolio planning and prioritization.
By connecting delivery with sales and finance, Project Operations ensures project engagements are executed profitably, billed accurately, and managed transparently. This level of integration is especially valuable for organizations that operate globally, where visibility into utilization, cost, and margin is critical for growth and compliance.
Ready for AI and continuous innovation
Project Operations is part of Microsoft’s broader service-centric application strategy, so it benefits from continuous innovation across Dynamics 365, the Power Platform, and Azure AI. Recent investments redefine project management execution with intelligent agents, predictive analytics, and AI-assisted decision-making.
New agentic capabilities automate routine administrative tasks such as time, expense, and approval submissions, freeing teams to focus on higher-value work. AI-powered what-if scenario modeling helps project managers understand the financial or resource impact of potential changes before they happen. And change order management features ensure logging of adjustments to scope, schedule, or cost with an auditable history.
These capabilities bring Microsoft’s vision for agentic ERP and service management to life. Systems don’t just record data but actively assist in executing, analyzing, and improving business processes.
Because Project Operations is built within the Microsoft Cloud, it also benefits from shared capabilities such as Microsoft Copilot, Power BI advanced analytics, and Azure security and compliance frameworks. This ensures our customers gain access to new functionality as the platform evolves—without disruption or costly upgrades.
Built for how your business operates
Whether you’re managing internal projects, professional services engagements, or large-scale programs, Project Operations is designed to scale. Organizations can start small, managing schedules and work or tracking time and resources within Dataverse, and then expand over time to include other processes and AI-driven insights. Because the underlying processes and data live in a single place, new experiences can be lit up without large-scale data migrations, and usage patterns can evolve to cover other parts of the project cycle across the organization.
This unified approach helps eliminate silos between teams and systems, ensures that data flows consistently from project creation and scheduling to billing and performance reporting, and positions organizations to take advantage of ongoing Microsoft innovation in automation, analytics, and AI.
Are you ready for what’s next?
The retirement of Project Online marks a turning point, not just for project management at Microsoft but for how organizations connect people, processes, and financials around project-based work. For leaders seeking to modernize their project operations environment, Microsoft Dynamics 365 Project Operations offers a clear path forward.
It combines the structure and scalability of enterprise-grade project management with the flexibility of the Power Platform and the intelligence of the Microsoft Cloud.
If your organization is evolving toward a service-centric model—where accurate time capture, resource optimization, and financial visibility drive success—Dynamics 365 Project Operations provides the foundation to unify your project lifecycle, connect your data, and prepare for the next generation of intelligent, AI-powered project delivery.
If your service-centric organization needs a Professional Services Automation (PSA) solution, now is the time to evaluate Microsoft Dynamics 365 Project Operations. It is more than a replacement for Project Online: it is the next step in driving profitability from prospect to project to profit.
In today’s fast-paced digital world, customers expect more than just plain text when interacting with businesses. Traditional text-based conversations can be inefficient. This is especially true when customers need to exchange detailed information, explore options, or make quick decisions. That’s where rich messaging comes in.
Rich messaging introduces interactive elements, such as forms, carousels, and suggested replies, directly within the conversation. This enables businesses to create conversations that are not only more engaging but also visually intuitive. Subsequently, customers understand the choices faster and act with confidence.
Now, you can preview rich media messaging across both live chat and WhatsApp. With rich media messaging, businesses can deliver enhanced experiences on the channels customers use most. This reduces typing, speeds up resolution, and improves overall satisfaction for both customers and agents.
Rich media message types
Rich messaging is already available for Apple Messages for Business, and now extends to live chat and WhatsApp:
Forms: supported in live chat
Suggested replies: supported in live chat and WhatsApp
Cards and carousels: supported in live chat
For scenarios where these options don’t fully meet a business’s live chat requirements, organizations can use Microsoft’s Adaptive Card technology to create fully customized JSON-based messages.
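For reference, an Adaptive Card is a declarative JSON payload. Below is a minimal, hypothetical example of such a card (a simple intake form), assembled in Python for clarity; the field names and prompt text are illustrative only.

```python
import json

# A minimal Adaptive Card: one text prompt, one input field, one submit action.
# Identifiers ("issue") and wording are invented for this example.
card = {
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "type": "AdaptiveCard",
    "version": "1.5",
    "body": [
        {"type": "TextBlock", "text": "How can we help?", "weight": "Bolder"},
        {"type": "Input.Text", "id": "issue", "placeholder": "Describe your issue"},
    ],
    "actions": [{"type": "Action.Submit", "title": "Send"}],
}

payload = json.dumps(card)  # the JSON string sent over the chat channel
```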
Key capabilities
One template, multiple channels
Create rich message templates once and use them across both live chat and WhatsApp. There’s no need to redesign for each channel.
Preview pane
Instantly preview how your rich media message will appear to customers while designing, ensuring accuracy and a great user experience.
Create messages for both WhatsApp and live chat in one template (left) and preview the rich media message design (right) in template designer
Seamless bot integration
Reuse certain rich media templates, such as live chat forms and WhatsApp suggested replies, directly in Copilot Studio—eliminating the need to recreate templates for bots.
Service reps can customize templates
Customer service representatives can easily customize admin-designed templates by editing fields before sending them to customers, enabling personalized interactions.
Customer service representative editing rich media message form on the right before sending to customer
Enhanced customer experience
Rich messages are more visually engaging and make it easier for customers to share relevant information. This boosts customer satisfaction and reduces resolution times.
The customer’s view of a live chat form
Get started today
To get started, navigate to the Copilot Service admin center, select Productivity in Support experience, and then select Manage for Rich messages. Here you can start designing rich message templates for customer service representatives and bots.
Raising the bar for Enterprise AI
The Sales Research Agent in Dynamics 365 Sales automatically connects to live CRM data and can connect to additional data stored elsewhere, such as budgets and targets. It reasons over complex, customized schemas with deep domain expertise, and presents novel, decision-ready insights through text-based narratives and rich data visualizations tailored to the business question at hand.
For sales leaders, this means the ability to self-serve rich research journeys that span CRM and other domains, work that previously took many people days or weeks to compile, with AI-powered access to deeper insights on pipeline, revenue attainment, and other critical topics.
But the market is crowded with offerings that may or may not deliver acceptable levels of quality to support business decisions. How can business leaders know what’s truly enterprise ready? To help make sure customers do not have to rely on anecdotal evidence or “gut feel”, any vendor providing AI solutions must earn trust through clear, repeatable metrics that demonstrate quality, showing where the AI excels, where it needs improvement, and how it stacks up against alternatives.
Figure 1. The Sales Research Agent in the Dynamics 365 Sales Hub.
This post introduces the architecture and evaluation methodology and results behind Microsoft’s Sales Research Agent. Its technical innovations distinguish the Sales Research Agent from other available offerings, from multi-agent orchestration and multi-model support to advanced techniques for schema intelligence, self-correction and validation. In determining how best to evaluate the Sales Research Agent, Microsoft reviewed existing AI benchmarks and ultimately decided to create the Sales Research Bench, a new benchmark purpose-built to measure the quality of AI-powered Sales Research on business data, in alignment with the business questions, needs, and priorities of sales leaders. In head-to-head evaluations completed on October 19, 2025, the Sales Research Agent outperformed Claude Sonnet 4.5 by 13 points and ChatGPT-5 by 24.1 points on a 100-point scale.
Figure 2. Sales Research Bench Composite Score Results.
1. Results reflect testing completed on October 19, 2025, applying the Sales Research Bench methodology to evaluate Microsoft’s Sales Research Agent (part of Dynamics 365 Sales), ChatGPT by OpenAI using a ChatGPT Pro license with GPT-5 in Auto mode, and Claude Sonnet 4.5 by Anthropic using a Claude Max license.
Methodology and evaluation dimensions: Sales Research Bench includes 200 business research questions relevant to sales leaders that were run on a sample customized data schema. Each AI solution was given access to the sample dataset using different access mechanisms that aligned with its architecture. Each AI solution’s responses to each business question, including text and data visualizations, were judged by LLM judges. We evaluated quality based on 8 dimensions, weighting each according to qualitative input from customers on what they value most in AI tools for sales research: Text Groundedness (25%), Chart Groundedness (25%), Text Relevance (13%), Explainability (12%), Schema Accuracy (10%), Chart Relevance (5%), Chart Fit (5%), and Chart Clarity (5%). Each of these dimensions received a score from an LLM judge, from 20 as the worst rating to 100 as the best. For example, the LLM judge would give a score of 100 for chart clarity if the chart is crisp and well labeled, and a score of 20 if the chart is unreadable or misleading. Text Groundedness and Text Relevance used Azure Foundry’s out-of-box LLM evaluators, while judging for the other six dimensions leveraged OpenAI’s GPT-4.1 model with specific guidance. A total composite score was calculated as a weighted average of the 8 dimension-specific scores. More details on the methodology can be found in the rest of this post.
Microsoft will continue to use the evals in Sales Research Bench to drive continuous improvement of the Sales Research Agent, and Microsoft intends to publish the full evaluation package in the coming months, so others can run it to verify published results or benchmark the agents they use (example evals from the benchmark are included in this post).
Sales Research Agent architecture
The architecture of the Sales Research Agent sets it apart from other offerings, delivering both technical innovation and business value.
Multi-Agent Orchestration: The Sales Research Agent uses a dynamic multi-agent infrastructure that orchestrates the development of research blueprints: the text-based narratives and data visualizations, accompanied by an explanation of the agent’s work. Specialized agents are invoked at each step in the journey to deliver domain-optimized insights for user questions, taking organizational data as well as business and user context into account.
Multi-Model Support: This multi-agent infrastructure enables each specialized agent to use the model that is best suited to the task at hand. Microsoft tests how each specialized agent performs with different models. Models are easily swapped out to continue optimizing the Sales Research Agent’s quality as the models available evolve over time.
Support for Business Language: There is a difference between business language (how business users naturally communicate) and natural language (any language that is not code). The Sales Research Agent can give quality answers to prompts in business language, because it breaks down the prompt into multiple sub-questions, building a research plan and using multi-step reasoning over connected data sources. Additionally, the Sales Research Agent is infused with knowledge of the Sales domain, so it can correctly interpret terminology and context that is only implicit to the user’s prompt.
Schema Intelligence: The Sales Research Agent can handle both out-of-the-box and customized enterprise schemas, adapting to complex, real-world environments. It has sophisticated techniques and heuristics built in to recognize the tables and columns that are relevant to the user query.
Self-Correction and Validation: The Sales Research Agent incorporates advanced auto-correction mechanisms for its generated responses. Whether producing SQL or Python code, the agent leverages sophisticated code correctors capable of iterative refinement—reviewing, validating, and amending outputs as needed. The correction loop begins with a fast, non-reasoning model to identify and fix straightforward issues. If errors persist, the system escalates to a reasoning model and, if required, a more powerful model to ensure deeper contextual understanding and precise correction. This dynamic, multi-model process helps to ensure that the final code is both accurate and reliable, enhancing the overall quality and trustworthiness of the agent’s insights and recommendations.
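The escalating correction loop described above can be sketched as follows. The validator and corrector functions here are toy stand-ins, not the agent’s actual implementation; the point is the tiered escalation from a cheap, fast fix attempt to progressively deeper ones.

```python
def correct_code(code, validate, correctors):
    """Try each corrector tier in order until the output validates.

    `correctors` is ordered from cheapest (fast, non-reasoning) to most
    powerful (reasoning); escalation only happens while validation fails.
    """
    for corrector in correctors:
        if validate(code):
            return code            # already valid: stop escalating
        code = corrector(code)     # otherwise, let this tier amend it
    return code if validate(code) else None

# Toy usage: "code" is valid when it ends with a semicolon.
validate = lambda c: c.endswith(";")
fast_tier = lambda c: c.strip()        # cheap fix: trim whitespace
reasoning_tier = lambda c: c + ";"     # deeper fix: add the missing terminator

assert correct_code("x = 1 ", validate, [fast_tier, reasoning_tier]) == "x = 1;"
```

Ordering the tiers by cost means most fixes are resolved by the cheapest model, and the expensive tiers are only invoked when the simpler ones fail, which mirrors the escalation behavior described above.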
Explainability: The system tracks every agent interaction and decision, as well as the SQL query and Python code generated to produce the research blueprint. The Sales Research Agent uses this information to help users quickly verify its accuracy and trace its reasoning. Each blueprint includes Show Work, an explanation in simple language for business users, with an advanced view of SQL queries and more details for technical users.
Figure 3. A high-level diagram of Sales Research Agent’s architecture and how it connects to business workflows
Why Enterprise Sales Requires a New Evaluation Framework
In traditional software, unit tests give repeatable proof that core behaviors work and keep working. For AI solutions, evaluations (evals) are needed to demonstrate quality and track continuous improvement over time.
Enterprises deserve evaluations that are purpose-built for their needs. While there is a wide range of pioneering work on AI evaluation, existing benchmarks miss key attributes that are needed for an AI solution to guide critical business decisions:
The benchmark must reflect the strategic, multi-faceted business questions of sales leaders using their business language.
The benchmark must measure schema accuracy: whether the system correctly handles tables, columns, and joins on system of record schemas that can be highly customized.
The benchmark should assess insights across both text-based narratives and data visualizations, reflecting the outputs with which leaders make decisions.
Introducing Sales Research Bench for AI-powered Sales Research
To meet these demands, Microsoft developed the Sales Research Bench, a composite quality score built to evaluate AI-powered Sales Research solutions in close alignment with customers’ actual questions, environments, and priorities. From engagements with customer sales teams across industries and geographies, Microsoft identified the critical dimensions of quality and created real-world business questions in the language sales leaders use. The data schema on which the evaluations take place is customized to reflect the complexities of customers’ enterprise environments, with their layered business logic and nuanced operational realities. The result is a rigorous benchmark presenting a composite score based on 8 weighted dimensions, as well as dimension-specific scores to reveal where agents excel or need improvement.
Benchmark Methodology
The evaluation infrastructure for Sales Research Bench includes:
Eval Datasets: 200 business questions in the language of sales leaders, each associated with its own set of ground-truth answers for validation.
Sample enterprise dataset: Eval questions run on a customized schema, reflecting the complexities of enterprise environments.
Evaluators: LLM-judge-based evaluation, tailored for each of the 8 quality dimensions described below. Azure Foundry out-of-box evaluators are used for Text Groundedness and Text Relevance. For the other 6 dimensions, OpenAI’s GPT-4.1 model is used with specific guidelines on how to score answers, which are provided in the appendix.
Here are 3 of the 200 evaluation questions informed by real sales leader questions:
Looking at closed opportunities, which sellers have the largest gap between Total Actual Sales and Est Value First Year in the ‘Corporate Offices’ Business Segment?
Are our sales efforts concentrated on specific industries or spread evenly across industries?
Compared to my headcount on paper (30), how many people are actually in seat and generating pipeline?
Dimensions of Quality
The Sales Research Bench aggregates eight dimensions of quality, weighting them as shown in the parentheses below to reflect what we have heard customers say they value most in AI tools for sales research during their engagements with Microsoft.
Text Groundedness (25%): Ensures narratives are accurate, faithful to the sample enterprise data, and applying correct business definitions.
Chart Groundedness (25%): Validates that charts accurately represent the underlying data from the same enterprise dataset.
Text Relevance (13%): Measures how relevant the insights in the text-based narrative are to the business question.
Explainability (12%): Ensures the AI solution accurately and clearly explains how it arrived at its responses.
Schema Accuracy (10%): Verifies the correct selection of tables and columns by evaluating whether the generated SQL query is consistent with the tables, joins, and columns in the ground-truth answers. (Business applications typically consist of approximately 1,000 tables, many featuring around 200 columns, all of which can be highly customized by customers.)
Chart Relevance (5%): Validates whether the data and analysis shown in the chart are relevant to the business question.
Chart Fit (5%): Evaluates if the chosen visualization matches the analytical need (e.g., line for trends, bar for comparisons).
Chart Clarity (5%): Assesses readability, labeling, accessibility, and chart hygiene.
Each of these dimensions received a score from an LLM judge, from 20 as the worst rating to 100 as the best. For example, the LLM judge would give a score of 100 for chart clarity if the chart is crisp and well labeled, and a score of 20 if the chart is unreadable or misleading.
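Given the weights above, the composite score is a straightforward weighted average of the 8 dimension scores. A minimal sketch, where the weights come from the article and the per-dimension scores are invented for illustration:

```python
# Dimension weights as published for Sales Research Bench (sum to 1.0).
WEIGHTS = {
    "text_groundedness": 0.25, "chart_groundedness": 0.25,
    "text_relevance": 0.13, "explainability": 0.12,
    "schema_accuracy": 0.10, "chart_relevance": 0.05,
    "chart_fit": 0.05, "chart_clarity": 0.05,
}

def composite(scores):
    """Weighted average of the 8 dimension scores (each 20-100)."""
    return sum(WEIGHTS[d] * s for d, s in scores.items())

# Hypothetical solution scoring 80 on every dimension: since the weights
# sum to 1.0, the composite is also 80.
example = {d: 80 for d in WEIGHTS}
assert abs(composite(example) - 80.0) < 1e-9
```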
Sample Enterprise Dataset
Evaluation needs representative conditions to be useful. Through customer engagements, Microsoft identified numerous edge cases from highly customized schemas, complex joins and filters, and nuanced business logic (like pipeline coverage and attainment calculations).
For instance, most customers customize their schemas with custom tables and columns, such as replacing an industry column with an industry table, and linking it to the customer object, or adding market and business segment instead of using an existing segment field. As a result, their environments often contain both the out-of-box tables and columns as well as customized tables and fields, all with similar names. By systematically incorporating these edge cases into the sample custom schema, Sales Research Bench evaluates how agents perform outside of the “happy path” to assess enterprise readiness.
Figure 4. Example evaluation case (see the Appendix for more examples)
Evaluating Sales Research Agent and Other Solutions
In addition to the Sales Research Agent, Microsoft evaluated ChatGPT by OpenAI using a Pro license with GPT-5 in Auto mode and Claude Sonnet 4.5 by Anthropic using a Max license. The licenses were chosen to optimize for quality: ChatGPT’s pricing page describes Pro as “full access to the best of ChatGPT,” while Claude’s pricing page recommends Max to “get the most out of Claude.”[1] Similarly, ChatGPT’s evaluation was run using Auto mode, a setting that allows ChatGPT’s system to determine the best-suited model variant for each prompt.
Microsoft implemented a controlled evaluation environment in which all systems (Sales Research Agent, ChatGPT-5, and Claude Sonnet 4.5) worked with identical questions and data, but through different access mechanisms aligned with their respective architectures.
The Sales Research Agent has a native multi-agent orchestration layer that connects directly to Dynamics 365 Sales data. This allows it to autonomously discover schema relationships and entity dependencies, and to perform natural-language-to-query reasoning natively within its own orchestration stack.
Since ChatGPT and Claude do not support relational line-of-business source systems out of box, Microsoft enabled access to the same dataset by mirroring it into an Azure SQL instance. Mirroring was done to preserve all the data types, primary keys, foreign keys, and relationships between tables from Dataverse to Azure SQL. This Azure SQL copy was exposed through the MCP SQL connector, ensuring that ChatGPT and Claude retrieved the exact same data but through a standardized external interface. Once responses were captured, they were evaluated using the same evaluators against the exact same evaluation rubrics.
Finally, prompts to ChatGPT and Claude included instructions to create charts and to explain how they got to their answers (Sales Research Agent has this functionality out of box.)
In a test of 200 evals on the customized schema, Sales Research Agent earned a composite score of 78.2 on a 100-point scale, while Claude Sonnet 4.5 earned 65.2 and ChatGPT-5 earned 54.1.
The chart below presents the Sales Research Bench composite scores, with scores for each dimension overlaid on the bars within the stacked bar chart.
Figure 5. Sales Research Bench Composite Scores with Dimension-specific Scores.
Breaking this down, the Sales Research Agent outperformed other solutions on all 8 dimensions, with the biggest deltas in chart-related dimensions (groundedness, fit, clarity, and relevance), and the smallest deltas in schema accuracy and text groundedness. Claude Sonnet 4.5 outperformed ChatGPT-5 on all 8 dimensions, with the biggest delta in chart clarity and the smallest delta in chart relevance.
Figure 6. Sales Research Bench Scores by Dimension.
Looking Ahead
The Sales Research Agent introduces a new generation of AI-first business applications that transform how sales leaders approach and solve complex business questions. The Sales Research Bench was created in parallel to represent a new standard for enterprise AI evaluation: rigorous, comprehensive, and aligned with the needs and priorities of sales leaders.
Upcoming plans for the Sales Research Bench include using the benchmark for continuous improvement of the Sales Research Agent, running further comparisons against a wider range of competitive offerings, and publishing the eval package so customers can run it themselves to verify the published results and benchmark the agents they use. Evaluation is not a one-time event. Scores can be tracked across releases, ensuring that AI solutions evolve to meet customer needs.
Looking beyond the Sales Research Bench, Microsoft plans to develop eval frameworks and benchmarks for more business functions and agentic solutions in customer service, finance, and beyond. The goal is to set a new standard for trust and transparency in enterprise AI.
Appendix:
Scoring Guidelines provided to LLM Judges
Text Groundedness and Text Relevance used Azure Foundry’s out-of-box LLM evaluators. Below are the guidelines provided to the LLM judges for the other six quality dimensions. These judges leverage OpenAI’s GPT-4.1 model.
Schema accuracy:
100: Perfect match – all golden tables and columns are present (extra columns OK, Dynamics equivalents OK)
80: Very good – minor missing columns or one missing table
60: Good – some important columns or tables missing but core schema is there
40: Fair – significant schema differences but some overlap
20: Poor – major schema mismatch or completely different tables
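The schema-accuracy judging in the benchmark is performed by an LLM following the bands above. To make the rubric concrete, here is a minimal deterministic stand-in, a sketch only: the threshold logic below is an illustrative assumption, not the judge’s actual reasoning.

```python
# Hypothetical deterministic stand-in for the schema-accuracy rubric above.
# The real judge is an LLM; the band thresholds here are illustrative only.
def schema_accuracy(golden: dict, generated: dict) -> int:
    """Score a generated schema against the golden schema.
    Both arguments map table name -> set of column names.
    Extra tables/columns in `generated` never hurt (per the rubric)."""
    missing_tables = sum(1 for t in golden if t not in generated)
    missing_cols = sum(
        len(golden[t] - generated[t]) for t in golden if t in generated
    )
    if missing_tables == 0 and missing_cols == 0:
        return 100  # perfect match; extra columns OK
    if missing_tables <= 1 and missing_cols <= 2:
        return 80   # one missing table or a few missing columns
    if missing_tables < len(golden) / 2:
        return 60   # core schema present, some pieces missing
    if missing_tables < len(golden):
        return 40   # significant differences, some overlap
    return 20       # completely different tables

golden = {"opportunity": {"ownerid", "actualvalue_base"},
          "systemuser": {"fullname"}}
print(schema_accuracy(golden, golden))  # 100: all golden tables and columns present
```

Note how the "extras OK" rule from the guideline falls out naturally: only golden tables and columns that are absent from the generated schema reduce the score.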
Explainability:
100 (Excellent): Explanation is highly detailed, perfectly describes what the generated SQL does, technically accurate, and provides clear business context
80 (Good): Explanation is sufficiently detailed and mostly accurate with minor gaps in describing the SQL operations
60 (Fair): Explanation provides adequate detail but misses some important SQL operations or has minor inaccuracies
40 (Poor): Explanation lacks sufficient detail to understand the SQL operations or has significant inaccuracies
20 (Very Poor): Explanation is too vague, mostly incorrect, or provides insufficient detail about the generated SQL
Chart Groundedness:
100: Data accurately matches ground truth OR both ground truth & chart empty
80: Minor data inaccuracies
60: Some data inaccuracies
40: Major data inaccuracies
20: Data completely mismatches ground truth
Chart Relevance:
100: Question and chart strongly reinforce each other OR both ground truth & chart empty
60: Question and chart loosely align but with some disconnect
20: Question and chart do not align at all
Chart Fit:
100: Optimal chart choice for the task OR both ground truth & chart empty (appropriate emptiness)
60: Acceptable chart choice but not optimal for the task
20: Inappropriate or confusing chart type
Chart Clarity:
100: Chart is crisp and well-labeled OR both ground truth & chart empty
60: Chart readable but missing labels/clarity elements
20: Chart unreadable, misleading
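Taken together, the eight dimension scores roll up into the composite via the weights stated elsewhere in this article (Text Groundedness 25%, Chart Groundedness 25%, Text Relevance 13%, Explainability 12%, Schema Accuracy 10%, Chart Relevance 5%, Chart Fit 5%, Chart Clarity 5%). A minimal Python sketch of that weighted average; the example scores are made up for illustration:

```python
# Weighted composite from the eight dimension scores, using the weights
# stated in the article. Example scores below are hypothetical.
WEIGHTS = {
    "text_groundedness": 0.25,
    "chart_groundedness": 0.25,
    "text_relevance": 0.13,
    "explainability": 0.12,
    "schema_accuracy": 0.10,
    "chart_relevance": 0.05,
    "chart_fit": 0.05,
    "chart_clarity": 0.05,
}

def composite(scores: dict) -> float:
    """Each dimension score is on the 20-100 scale; result is the weighted mean."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%
    return round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 1)

example = {d: 80 for d in WEIGHTS}  # hypothetical: 80 on every dimension...
example["chart_clarity"] = 60       # ...except a weaker chart-clarity score
print(composite(example))  # 79.0: the dip on a 5%-weighted dimension costs one point
```

Because the heavily weighted groundedness dimensions dominate the average, a solution cannot compensate for hallucinated text or chart data with polished presentation alone.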
Examples of Evaluation Datasets:
Below are some of the evaluation datasets that we have used to benchmark the performance of Sales Research Agent against all the evaluation rubrics mentioned above. These same questions were also evaluated against the competitive offerings.
Evaluation Dataset One
{ "question": "Looking at closed opportunities, which sellers have the largest gap between Total Actual Sales and Est Value First Year in the 'Corporate Offices' Business Segment?", "difficulty": "hard", "sql": [ "SELECT su.[fullname] AS [seller_name],", " COUNT(*) AS [closed_deals],", " SUM(CAST(COALESCE(o.[sop_totalactualsales], o.[actualvalue_base]) AS DECIMAL(38,2))) AS [total_actual_sales],", " SUM(CAST(o.[sop_estvaluefirstyear_base] AS DECIMAL(38,2))) AS [total_est_value_first_year],", " SUM(CAST(COALESCE(o.[sop_totalactualsales], o.[actualvalue_base]) AS DECIMAL(38,2)))", " - SUM(CAST(o.[sop_estvaluefirstyear_base] AS DECIMAL(38,2))) AS [sales_gap]", "FROM [dbo].[opportunity] AS o", "JOIN [dbo].[systemuser] AS su ON CAST(o.[ownerid] AS NVARCHAR(36)) = CAST(su.[systemuserid] AS NVARCHAR(36))", "JOIN [dbo].[sop_businesssegment] AS bs ON CAST(o.[sop_businesssegment] AS NVARCHAR(36)) = CAST(bs.[sop_businesssegmentid] AS NVARCHAR(36))", "WHERE o.[statecodename] = 'Won' AND bs.[sop_name] = 'Corporate Offices' AND su.[fullname] IS NOT NULL AND o.[sop_estvaluefirstyear_base] IS NOT NULL", "GROUP BY su.[fullname]", "HAVING SUM(CAST(COALESCE(o.[sop_totalactualsales], o.[actualvalue_base]) AS DECIMAL(38,2))) IS NOT NULL", "ORDER BY [sales_gap] DESC;" ], "tags": [ "seller-performance", "variance", "actuals-vs-estimate" ], "ground_truth": { "structured": [ { "columns": [ "seller_name", "closed_deals", "total_actual_sales", "total_est_value_first_year", "sales_gap" ], "rows": [ [ "Jenny Chambers", 3, 44501.69, 16010.15, 28491.54 ], [ "Heather Rogers", 1, 21501.05, 4190.57, 17310.48 ], [ "Grace Rice", 1, 21223.33, 6789.20, 14434.13 ], [ "Ann Rice", 1, 3243.23, 7267.77, -4024.54 ] ] } ], "unstructuredtext": "Largest positive gaps: Jenny Chambers (+$28.49K), Heather Rogers (+$17.31K), and Grace Rice (+$14.43K). Ann Rice under-shot estimate (-$4.02K).", "evaluationNotes": "Gap = Total Actual Sales - Est First Year; Corporate Offices segment only; closed (Won) opps." } }
Evaluation Dataset Two
{ "question": "Are our sales efforts concentrated on specific industries or spread evenly across industries?", "difficulty": "medium", "sql": [ "SELECT ", " [sop_industry].[sop_name] AS [industry_name],", " COUNT([opportunity].[opportunityid]) AS [total_opportunity_count],", " COUNT(CASE ", " WHEN [opportunity].[statecodename] NOT IN ('Won','Lost','Canceled') ", " THEN 1 ", " END) AS [open_opportunity_count]", "FROM ", " [opportunity]", "INNER JOIN ", " [account] ON CAST([opportunity].[parentaccountid] AS NVARCHAR(36)) = CAST([account].[accountid] AS NVARCHAR(36))", "INNER JOIN ", " [sop_industry] ON CAST([account].[sop_industry] AS NVARCHAR(36)) = CAST([sop_industry].[sop_industryid] AS NVARCHAR(36))", "GROUP BY ", " [sop_industry].[sop_name]", "HAVING ", " COUNT([opportunity].[opportunityid]) > 0", "ORDER BY ", " [open_opportunity_count] DESC;" ], "tags": [ "industry", "concentration", "open-vs-total" ], "ground_truth": { "structured": [ { "columns": [ "industry_name", "total_opportunity_count", "open_opportunity_count" ], "rows": [ [ "Legal Services", 1352, 240 ], [ "Insurance", 1210, 212 ], [ "Non-Durable Merchandise Retail", 946, 177 ], [ "Inbound Repair and Services", 695, 126 ], [ "Outbound Consumer Service", 740, 124 ], [ "Design, Direction and Creative Management", 719, 119 ], [ "Building Supply Retail", 633, 118 ], [ "Durable Manufacturing", 569, 111 ], [ "Business Services", 597, 108 ], [ "Broadcasting Printing and Publishing", 597, 104 ], [ "Accounting", 551, 104 ], [ "Distributors, Dispatchers and Processors", 562, 104 ], [ "Financial", 606, 102 ], [ "Consulting", 532, 100 ], [ "Agriculture and Non-petrol Natural Resource Extraction", 586, 95 ], [ "Doctor's Offices and Clinics", 497, 90 ], [ "Brokers", 579, 90 ], [ "Food and Tobacco Processing", 489, 86 ], [ "Consumer Services", 451, 81 ], [ "Eating and Drinking Places", 448, 76 ], [ "Equipment Rental and Leasing", 425, 74 ], [ "Entertainment Retail", 429, 73 ], [ "Inbound Capital Intensive Processing", 419, 71 ] ] } ], "unstructuredtext": "Effort is broad but skewed: Legal Services and Insurance have the most total opps, while several industries maintain 70-120 open opps.", "evaluationNotes": "Counts total vs open opps per industry; ordered by open count." } }
Evaluation Dataset Three
{ "question": "Compared to my headcount on paper (30), how many people are actually in seat and generating pipeline?", "difficulty": "medium", "sql": [ "WITH open_opps AS (", " SELECT o.*", " FROM opportunity o", " WHERE o.statecodename NOT IN ('Won','Lost','Canceled')", ")", "SELECT", " CAST(30 AS INT) AS headcount_on_paper,", " COUNT(DISTINCT open_opps.ownerid) AS active_pipeline_users,", " (30 - COUNT(DISTINCT open_opps.ownerid)) AS delta_needed,", " (SELECT COUNT(*) FROM opportunity) AS total_opportunities,", " (SELECT COUNT(*) FROM open_opps) AS open_opportunities,", " (SELECT SUM(CAST(o2.estimatedvalue_base AS DECIMAL(38,2))) FROM open_opps o2) AS open_pipeline_value;" ], "tags": [ "capacity", "headcount", "pipeline" ], "ground_truth": { "structured": [ { "columns": [ "headcount_on_paper", "active_pipeline_users", "delta_needed", "total_opportunities", "open_opportunities", "open_pipeline_value" ], "rows": [ [ 30, 7, 23, 14860, 2662, 16047760.29 ] ] } ], "unstructuredtext": "Only 7 sellers have active pipeline against a plan of 30 (shortfall of 23). Open pipeline totals $16.05M across 2,662 opps.", "evaluationNotes": "Active sellers counted as distinct owners on current pipeline." } }
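Each record above pairs a question with golden SQL, tags, and structured ground truth. A minimal sketch of checking a solution's returned rows against a record's ground-truth rows; the record is abridged from Evaluation Dataset Three and the field names follow the JSON above, but the order-insensitive comparison is an illustrative assumption, not the benchmark's actual groundedness evaluator (which is an LLM judge):

```python
# Abridged eval record from Evaluation Dataset Three (field names mirror the
# JSON above). The matching logic below is an illustrative sketch only.
record = {
    "question": "Compared to my headcount on paper (30), how many people are "
                "actually in seat and generating pipeline?",
    "ground_truth": {
        "structured": [
            {
                "columns": ["headcount_on_paper", "active_pipeline_users",
                            "delta_needed"],
                "rows": [[30, 7, 23]],
            }
        ],
    },
}

def rows_match(expected, actual):
    """Order-insensitive comparison of two row sets."""
    canon = lambda rows: sorted(tuple(r) for r in rows)
    return canon(expected) == canon(actual)

golden_rows = record["ground_truth"]["structured"][0]["rows"]
print(rows_match(golden_rows, [[30, 7, 23]]))  # True: answer reproduces ground truth
```

A real evaluator would also tolerate rounding differences, column aliasing, and extra columns, which is part of why the benchmark uses LLM judges rather than exact matching.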
In today’s hyper-competitive business landscape, sales leaders face a relentless challenge: how to drive growth, outpace competitors, and make smarter decisions faster in a resource-constrained environment. Thankfully, the promise of AI in sales is no longer theoretical. With the advent of agentic solutions embedded in Microsoft Dynamics 365 Sales, including the Sales Research Agent, organizations are witnessing a transformation in how business decisions are made and teams are empowered. But how do you know if these breakthrough technologies have reached a level of quality where you can trust them to support business-critical decisions?
Today, I’m excited to share an update on the Sales Research Agent, in public preview as of October 1, as well as a new evaluation benchmark, the Microsoft Sales Research Bench, created to assess how AI solutions respond to the strategic, multi-faceted questions that sales leaders have about their business and operational performance. We intend to publish the full evaluation package behind the Sales Research Bench in the coming months so that others can run these evals on different AI solutions themselves.
The New Frontier: AI Research Agents in Sales
Sales Research Agent in Dynamics 365 Sales empowers business leaders to explore complex business questions through natural language conversations with their data. It leverages a multi-modal, multi-model, and multi-agent architecture to reason over intricate, customized schemas with deep sales domain expertise. The agent delivers novel, decision-ready insights through narrative explanations and rich visualizations tailored to the specific business context.
For sales leaders, this means they can self-serve real-time, trustworthy analysis spanning CRM and other domains, analysis that previously took many people days or weeks to compile, with access to deeper AI-powered insights on pipeline, revenue attainment, and other critical topics.
Image: Screenshot of the Sales Research Agent in Dynamics 365 Sales
“As a product manager in the sales domain, balancing deep data analysis with timely insights is a constant challenge. The pace of changing market dynamics demands a new way to think about go-to-market tactics. With the Sales Research Agent, we’re excited to bridge the gap between traditional and time-intensive reporting and real-time, AI-assisted analysis — complementing our existing tools and setting a new standard for understanding sales data.”
Kris Kuty, EY LLP Clients & Industries — Digital Engagement, Account, and Sales Excellence Lead
What makes the Sales Research Agent so unique?
Its turnkey experience goes beyond the standard AI chat interface to provide a complete user experience with text narratives and data visualizations tailored for business research and compatible with a sales leader’s natural business language.
As part of Dynamics 365 Sales, it automatically connects to your CRM data and applies schema intelligence to your customizations, with the deep understanding of your business logic and the sales domain that you’d expect a business application to have.
Its multi-agent, multi-model architecture enables the Sales Research Agent to build out a dedicated research plan and to delegate each task to specialized agents, using the model best suited for the task at hand.
Before the agent shares its business assessment and analysis, it critiques its work for quality. If the output does not meet the agent’s own quality bar, it will revise its work.
The agent explains how it arrived at its answers using simple language for business users and showing SQL queries for technical users, enabling customers to quickly verify its accuracy.
Why Verifiable Quality Matters
Seemingly every day a new AI tool shows up. The market is crowded with offerings that may or may not deliver acceptable levels of quality to support business decisions. How do you know what’s truly enterprise ready? So that business leaders do not have to rely on anecdotal evidence or “gut feel”, any vendor providing AI solutions needs to earn trust through clear, repeatable metrics that demonstrate quality, showing where the AI excels, where it needs improvement, and how it stacks up against alternatives.
While there is a wide range of pioneering work on AI evaluation, enterprises deserve benchmarks that are purpose-built for their needs. Existing benchmarks don’t reflect 1) the strategic, multi-faceted questions of sales leaders using their natural business language; 2) the importance of schema accuracy; or 3) the value of quality across text and visualizations. That is why we are introducing the Sales Research Bench.
Introducing Sales Research Bench: The Benchmark for AI-powered Sales Research
Inspired by groundbreaking work in AI Benchmarks such as TBFact and RadFact, Microsoft developed the Sales Research Bench to assess how AI solutions respond to the business research questions that sales leaders have about their business data.[1]
Read this blog post for a detailed explanation of the Sales Research Bench methodology as well as the Sales Research Agent’s architecture.
This benchmark is based on our customers’ real-life experiences and priorities. From engagements with customer sales teams across industries and around the world, Microsoft created 200 real-world business questions in the language sales leaders use and identified 8 critical dimensions of quality spanning accuracy, relevance, clarity, and explainability. The data schema on which the evaluations take place is customized to reflect the complexities of our customers’ enterprise environments, with their layered business logic and nuanced operational realities.
To illustrate, here are 3 of our 200 evaluation questions informed by real sales leader questions:
Looking at closed opportunities, which sellers have the largest gap between Total Actual Sales and Est Value First Year in the ‘Corporate Offices’ Business Segment?
Are our sales efforts concentrated on specific industries or spread evenly across industries?
Compared to my headcount on paper (30), how many people are actually in seat and generating pipeline?
Judging is handled by LLM evaluators that rate an AI solution’s responses (text and data visualizations) against each quality dimension on a 100-point scale based on specific guidelines (e.g., a score of 100 for chart clarity if the chart is crisp and well labeled, or 20 if it is unreadable or misleading). These dimension-specific scores are then weighted to produce a composite quality score, with the weights defined based on qualitative input from customers about what they value most. The result is a rigorous benchmark presenting a composite score and dimension-specific scores to reveal where agents excel or need improvement.[2]
[2] Sales Research Bench uses Azure Foundry’s out-of-box LLM evaluators for the dimensions of Text Groundedness and Text Relevance. The other 6 dimensions each have a custom LLM evaluator that leverages OpenAI’s GPT-4.1 model. The 100-point scale has 100 as the highest score and 20 as the lowest. More details on the benchmark methodology are provided here
Running Sales Research Bench on AI solutions
Here’s how we applied the Sales Research Bench to run evaluations on the Sales Research Agent, ChatGPT by OpenAI, and Claude by Anthropic.
License: Microsoft evaluated ChatGPT by OpenAI using a Pro license with GPT-5 in Auto mode and Claude Sonnet 4.5 by Anthropic using a Max license. The licenses were chosen to optimize for quality: ChatGPT’s pricing page describes Pro as “full access to the best of ChatGPT,” while Claude’s pricing page recommends Max to “get the most out of Claude.”[3] Similarly, ChatGPT’s evaluation was run using Auto mode, a setting that allows ChatGPT’s system to determine the best-suited model variant for each prompt.
Questions: All agents were given the same 200 business questions.
Instructions: ChatGPT and Claude were given explicit instructions to create charts and to explain how they got to their answers. (Equivalent instructions are included in the Sales Research Agent out of box.)
Data: ChatGPT and Claude accessed the sample dataset in an Azure SQL instance exposed through the MCP SQL connector. The Sales Research Agent connects to the sample dataset in Dynamics 365 Sales out of box.
Results are in: Sales Research Agent vs. alternative offerings
In head-to-head evaluations completed on October 19, 2025 using the Sales Research Bench framework, the Sales Research Agent outperformed Claude Sonnet 4.5 by 13 points and ChatGPT-5 by 24.1 points on a 100-point scale.
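As a quick arithmetic cross-check, these point gaps follow directly from the composite scores reported earlier in this post (78.2 for the Sales Research Agent, 65.2 for Claude Sonnet 4.5, and 54.1 for ChatGPT-5):

```python
# Composite scores on the Sales Research Bench 100-point scale,
# as reported in the article.
sra, claude, chatgpt = 78.2, 65.2, 54.1

print(round(sra - claude, 1))   # 13.0 points ahead of Claude Sonnet 4.5
print(round(sra - chatgpt, 1))  # 24.1 points ahead of ChatGPT-5
```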
Image: Sales Research Agent – Evaluation Results
[1] Results: Results reflect testing completed on October 19, 2025, applying the Sales Research Bench methodology to evaluate Microsoft’s Sales Research Agent (part of Dynamics 365 Sales), ChatGPT by OpenAI using a ChatGPT Pro license with GPT-5 in Auto mode, and Claude Sonnet 4.5 by Anthropic using a Claude Max license.
Methodology and Evaluation dimensions: Sales Research Bench includes 200 business research questions relevant to sales leaders that were run on a sample customized data schema. Each AI solution was given access to the sample dataset using different access mechanisms that aligned with their architecture. Each AI solution was judged by LLM judges for the responses the solution generated to each business question, including text and data visualizations.
We evaluated quality based on 8 dimensions, weighting each according to qualitative input from customers about what they value most in AI tools for sales research: Text Groundedness (25%), Chart Groundedness (25%), Text Relevance (13%), Explainability (12%), Schema Accuracy (10%), Chart Relevance (5%), Chart Fit (5%), and Chart Clarity (5%). Each of these dimensions received a score from an LLM judge, from 20 as the worst rating to 100 as the best. For example, the LLM judge would give a score of 100 for chart clarity if the chart is crisp and well labeled, or a score of 20 if the chart is unreadable or misleading. Text Groundedness and Text Relevance used Azure Foundry’s out-of-box LLM evaluators, while judging for the other six dimensions leveraged OpenAI’s GPT-4.1 model with specific guidance. A total composite score was calculated as a weighted average from the 8 dimension-specific scores. More details on the methodology can be found in this blog.
The Sales Research Agent outperformed these solutions on each of the 8 quality dimensions.
Image: Evaluation Scores for Each of the Eight Dimensions
The Road Ahead: Investing in Benchmarks
Upcoming plans for the Sales Research Bench include using the benchmark for continuous improvement of the Sales Research Agent, running comparisons against a wider range of competitive offerings, and publishing the full evaluation package including all 200 questions and the sample dataset in the coming months, so that others can run it themselves to verify the published results and benchmark the agents they use. Evaluation is not a one-time event. Scores can be tracked across releases, domains, and datasets, driving targeted quality improvements and ensuring the AI evolves with your business.
Sales Research Bench is just the beginning. Microsoft plans to develop eval frameworks and benchmarks for more business functions and agentic solutions—in customer service, finance, and beyond. The goal is to set a new standard for trust and transparency in enterprise AI.
Why This Matters for Sales Leaders
For business decision makers, the implications are profound:
Accelerated Decision-Making: AI-driven insights you can trust, when delivered in real time, enable faster, more confident decisions.
Continuous Improvement: Thanks to evals, developers can quickly identify the areas of highest measurable impact and focus improvement efforts there.
Trust and Transparency: Rigorous evaluation means you can rely on the outputs, knowing they’ve been tested against the scenarios that matter most to your business.
The future of sales is agentic, data-driven, and relentlessly focused on quality. With Microsoft’s Sales Research Agent and the Sales Research Bench evaluation framework, sales leaders can move beyond hype and make decisions grounded in demonstrated quality. It’s not just about having the smartest AI—it’s about having a trustworthy partner for your business transformation.
Business leaders are facing a new reality. AI and agents are transforming traditional systems of record into systems of action, becoming applications that not only store data but use it to drive decisions and outcomes.
In this new model, the user experience becomes almost invisible. What matters most is the foundation: structured data, clear governance, and business logic that allows agents to operate effectively.
These are agentic business applications. They can help teams scale up capacity, lower operational costs, grow topline revenue, and surface key insights on an ongoing basis for smarter, faster decisions.
But technology alone isn’t enough. Business transformation requires functional leaders to align processes with these new capabilities. That means rethinking how work gets done. Agents can operate in the background, continuously monitoring, analyzing, and acting. They surface insights and take action, helping leaders stay focused on outcomes.
Early adopters—what we call Frontier Firms—are building the right foundations now. They are investing in agentic customer relationship management (CRM), enterprise resource planning (ERP), and contact center as a service (CCaaS) solutions, as well as rethinking how to align business processes with agents. They realize there must be a fundamental shift in how work gets done.
Microsoft agentic business applications: Toolkit for the frontier
To help organizations move to the Frontier, Microsoft offers a suite of agentic business applications with Dynamics 365—bringing enterprise-grade AI and Microsoft Copilot experiences across CRM, ERP, and CCaaS. Organizations can extend Dynamics 365 with Microsoft Power Platform and Microsoft Copilot Studio to build custom AI-powered applications and agents tailored to unique business needs.
At the core of every agentic business application are three components:
Agents that transform business processes.
Copilot that empowers every employee to maximize productivity.
A unified, secure data platform that connects insights across the enterprise.
Let’s take a look at each of the components of the stack.
Expanding Dynamics 365 agents in key business functions
Over the last year, we have launched more than a dozen business process agents in Dynamics 365, giving organizations a starting point to transform sales, service, finance, and supply chain. We’re continuing to expand our agent portfolio to deliver proactive and growth-oriented outcomes.
In Dynamics 365 Sales, the new Sales Close Agent (in public preview beginning October 25, 2025) helps sellers prioritize high-value opportunities, proactively identify and mitigate risks for deals in the pipeline, and close simple transactions, accelerating deal velocity and improving win rates.
Also in Dynamics 365 Sales, agents are moving to public preview and general availability, including Sales Research Agent (public preview began on October 1, 2025) and Sales Qualification Agent (with general availability beginning October 25, 2025).
In Dynamics 365 Customer Service and Dynamics 365 Contact Center, the new Quality Evaluation Agent (general availability beginning October 24, 2025) gives supervisors and service teams a real-time pulse on service quality across both human and AI-led interactions. Unlike traditional, manual approaches that review a small fraction of engagements, this agent uses the speed and scale of AI to evaluate the majority of cases and conversations, uncover actionable insights, and assess AI-handled interactions. It monitors quality metrics, detects anomalies, and initiates corrective actions, enabling broader, faster, and more consistent quality management.
In addition, service agents moving to general availability beginning October 24, 2025, include Case Management Agent in Dynamics 365 Customer Service, plus Customer Knowledge Management Agent and Customer Intent Agent in Dynamics 365 Customer Service and Contact Center. In Dynamics 365 Field Service, the Scheduling Operations Agent, in public preview, keeps schedules agile and service running smoothly.
“By adopting agents in Dynamics 365 service solutions, we’re making every interaction faster and more empathetic. In a service where demand exceeds capacity, this can be a game changer.
Agents help gather information, route contacts based on need, and streamline resolution—enabling counselors to focus on direct support to young people.
In our fundraising unit, we’re also exploring how agents can manage inbound calls to reduce abandonment rates from 20 to 30% to under 5%—directly lifting revenue streams that fund vital services.”
—Helen Vahdat, Chief Information Officer, yourtown (Kids Helpline)
In our ERP portfolio, customers can use Account Reconciliation Agent in Dynamics 365 Finance and the Supplier Communications Agent in Dynamics 365 Supply Chain Management to complete reconciliation faster and process inbound supplier emails autonomously.
“The Account Reconciliation Agent pilot sharpened our team’s understanding of AI in practice and paved the way for a confident move toward the Supplier Communication Agent where we see clear potential to drive efficiency and enhance collaboration.”
—Wolfgang Bauer, ERP Team Lead, Haas Baumanagement GmbH
Additionally, customers can access Sales Order Agent and Payables Agent in Dynamics 365 Business Central and Time and Expense Agent and Activity Approvals Agent in Dynamics 365 Project Operations.
To further support organizations on their journey to the frontier, we’re making it easier to get started with agents. Beginning in late November 2025, Dynamics 365 Premium SKUs—including Dynamics 365 Sales Premium, Customer Service Premium, Supply Chain Management Premium, and Finance Premium—will include 1,000 Copilot Credits per user, per month, pooled at the tenant level. New and existing customers can use these credits to run agents in the scenarios most meaningful to their business. When the included capacity is exhausted, customers can add more capacity with additional Copilot Credits as needed.
Benchmarks—The Sales Research Bench
As organizations begin using agents to transform core processes, the next priority is ensuring these solutions deliver measurable value so that leaders can make confident, high-impact decisions. Microsoft is meeting this need through benchmarks that provide a standardized evaluation framework to continuously measure the quality of output from AI solutions. The most recent example is the Sales Research Bench, which uses a 100-point scale to measure what sales leaders have told us matters most to them: accuracy, relevance, clarity, and transparency. More specifically, the Sales Research Bench evaluates how AI solutions generate text and data visualizations in response to the strategic, multi-faceted questions that sales leaders have about their business data.
The Sales Research Bench runs 200 business research questions typical of enterprise sales leaders on a sample customized data schema that reflects the complexities of enterprise environments. It assesses performance across 8 quality dimensions with scoring by large language models (Azure Foundry out-of-box evaluators for two dimensions and OpenAI’s GPT-4.1 model with specific instructions for the other six dimensions). Dimension-specific scores are weighted to create a composite quality score.
In evaluations executed by Microsoft using the Sales Research Bench framework, the Sales Research Agent in Dynamics 365 outperforms both ChatGPT-5 and Claude Sonnet 4.5. More details on the benchmark methodology and results are available here. We intend to publish the full evaluation package including the 200 benchmark questions and sample dataset in the coming months, so others can run these evaluations themselves.
With this approach, we’re creating purpose-built agent benchmarks aligned to the priorities of business leaders. Our intent is to demonstrate a new standard for trust and transparency, providing clear insight into the quality and performance of agents in a specific business function. We also plan to publish agent performance regularly to reduce friction and help leaders make confident, data-driven decisions.
Results: Results reflect testing completed on October 19, 2025, applying the Sales Research Bench methodology to evaluate Microsoft’s Sales Research Agent (part of Dynamics 365 Sales), ChatGPT by OpenAI using a ChatGPT Pro license with GPT-5 in Auto mode, and Claude Sonnet 4.5 by Anthropic using a Claude Max license.[1]
Empowering everyone with Microsoft Copilot
The next critical layer in agentic transformation is Microsoft Copilot, which is embedded across Dynamics 365, enhancing sales, customer service, and finance. By automating routine tasks, such as summarizing key opportunities, drafting email responses to customer queries, and predicting and acting on supply chain disruptions, Microsoft Copilot frees employees to focus on strategic work to drive more impact.
With Copilot in Dynamics 365 Sales, sellers can spend less time in their CRM, and more time nurturing customer relationships. For example, Copilot can provide quick summaries of sales opportunities and leads, meeting preparations, and account-related news.
Grand & Toy uses Copilot’s real-time insights, dashboards, and time-saving features like chat summarization, email creation, and sentiment analysis to deliver exceptional customer service.
Connecting businesses on a unified, trusted platform
Lastly, there is the data layer—the foundation of agentic transformation. When unified, it can connect every interaction, insight, and action. With integration between Dynamics 365 and Microsoft 365, organizations can unify data and workflows, so teams can stay focused and make faster decisions.
Built on Microsoft Dataverse, Dynamics 365 agents deliver real-time insights across departments like sales, service, and finance, breaking down silos and enabling faster, more collaborative decision-making.
Banco PAN is a strong example of this transformation, using Dataverse as a core part of their Dynamics 365 solution to enable real-time integration across systems.
“Our operators now have immediate access to the customer’s history and can resolve issues more quickly.”
—Tulio Prado, Service Superintendent at Banco PAN
Dynamics 365 seamlessly connects with Power Platform and Copilot Studio, creating a unified foundation for apps, agents, and AI. This deep integration empowers everyone—not just professional developers—to build, customize, and deploy intelligent solutions that adapt to business needs. By bringing low-code innovation and enterprise-grade security together, organizations can streamline processes and workflows, reduce costs, and unlock new ways to work smarter.
Explore more
With today’s business applications varying widely in capability and impact, organizations face critical choices. Agentic business applications are the path forward. Discover how leading companies are moving along that path with Dynamics 365, going beyond static systems of record to intelligent systems of action that drive real-time insights, automation, and growth.
Tune in to the Business Applications Launch Event, streaming October 23, 2025 on YouTube, to see real-world solutions built on Microsoft agentic business applications.
Join us at Microsoft Ignite 2025 in San Francisco, California, from November 18 to 21, 2025. Connect with industry leaders, explore hands-on demos, and hear the latest product announcements. Attend Innovation Sessions that delve deeper into how agentic business applications are reshaping the future of work, and take away actionable strategies for leadership.
1Methodology and Evaluation dimensions: Sales Research Bench includes 200 business research questions relevant to sales leaders, run on a sample customized data schema. Each AI solution was given access to the sample dataset using an access mechanism aligned with its architecture. Each AI solution was judged by large language model judges on the responses it generated to each business question, including text and data visualizations. We evaluated quality on 8 dimensions, weighting each according to qualitative input from customers about what they value most in AI tools for sales research: Text Groundedness (25%), Chart Groundedness (25%), Text Relevance (13%), Explainability (12%), Schema Accuracy (10%), Chart Relevance (5%), Chart Fit (5%), and Chart Clarity (5%). Each dimension received a score from a large language model judge, from 20 (worst) to 100 (best). For example, the judge would give a score of 100 for Chart Clarity if the chart is crisp and well labeled, and a score of 20 if the chart is unreadable or misleading. Text Groundedness and Text Relevance used Azure AI Foundry’s out-of-the-box large language model evaluators, while the other six dimensions were judged by OpenAI’s GPT-4.1 model with specific guidance. A total composite score was calculated as a weighted average of the 8 dimension-specific scores. More details on the methodology can be found in this blog: The Sales Research Agent and Sales Research Bench.
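For readers who want to see how the weighted composite works, the sketch below reproduces the scoring arithmetic described in the methodology. The dimension names and weights come from the footnote; the sample scores are hypothetical placeholders, not real benchmark results.

```python
# Illustrative sketch of the Sales Research Bench composite score.
# Weights are taken from the methodology note; sample scores are hypothetical.

WEIGHTS = {
    "text_groundedness": 0.25,
    "chart_groundedness": 0.25,
    "text_relevance": 0.13,
    "explainability": 0.12,
    "schema_accuracy": 0.10,
    "chart_relevance": 0.05,
    "chart_fit": 0.05,
    "chart_clarity": 0.05,
}

def composite_score(scores: dict) -> float:
    """Weighted average of per-dimension judge scores (each from 20 to 100)."""
    # Weights must sum to 1 so the composite stays on the 20-100 scale.
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    for dim, s in scores.items():
        if not 20 <= s <= 100:
            raise ValueError(f"{dim} score {s} is outside the judge range 20-100")
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# Hypothetical example: a solution scoring 80 on every dimension
# yields a composite of 80, since the weights sum to 1.
sample = {dim: 80 for dim in WEIGHTS}
print(round(composite_score(sample), 2))
```

Because the weights sum to 1, the composite is a true weighted mean: uniform per-dimension scores pass through unchanged, while the 25% groundedness weights dominate when dimensions disagree.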