Microsoft Sovereign Cloud adds governance, productivity, and support for large AI models securely running even when completely disconnected

This article is contributed. See the original author and article here.

Microsoft Sovereign Cloud’s expanded capabilities include Azure Local disconnected operations, Microsoft 365 Local disconnected operations, and the addition of large-model and modern infrastructure capabilities to Microsoft Foundry.

The post Microsoft Sovereign Cloud adds governance, productivity, and support for large AI models securely running even when completely disconnected appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Announcing End of Support for Dynamics 365 Project Service Automation (PSA) on the U.S. Government cloud

On March 19, 2024, we announced that support for Dynamics 365 Project Service Automation on the commercial cloud will end on March 31, 2025. As planned, end of support went into effect on that date. 

Today we are announcing the end of support for Dynamics 365 Project Service Automation on the U.S. Government cloud (GCC) beginning March 31, 2027. Dynamics 365 Project Operations has been available in the U.S. Government cloud (GCC) since December 2025.

Beginning March 31, 2027, Microsoft will no longer support PSA on GCC environments. There will be no feature enhancements, bug fixes, or other updates to this offering. Any support ticket logged for the PSA application on GCC will be closed with instructions to upgrade to Project Operations.

We strongly encourage all PSA customers on GCC environments to start planning your upgrade process as soon as possible so you can take advantage of many new Project Operations features such as:   

  • Integration with Planner capabilities on Dataverse with many new advanced scheduling features
  • Project Budgeting and Time-phased forecasting
  • Date Effective price overrides
  • Revision and Activation on Quotes
  • Material usage recording in projects and tasks
  • Subcontract Management
  • Advances and Retainer-based contracts
  • Contract not-to-exceed
  • Task and Progress based billing
  • Multi-Customer contracts
  • AI and Copilot based experiences

For Project Service Automation customers on GCC High or DoD, we will have a future announcement regarding the availability of Dynamics 365 Project Operations.

Learn more about Dynamics 365 Project Operations

Project Operations was first released in October 2020 as a comprehensive product for managing projects from inception to close, bringing together the strengths of Dataverse, Microsoft Dynamics 365 Finance, Microsoft Dynamics 365 Supply Chain Management, and Microsoft Planner.

Want to learn more about Project Operations? Check this link and navigate to our detailed documentation!

Want to try Project Operations? Click here and sign up for a 30-day trial!

The post Announcing End of Support for Dynamics 365 Project Service Automation (PSA) on the U.S. Government cloud appeared first on Microsoft Dynamics 365 Blog.

The ultimate Microsoft 365 community event returns—your front‑row seat to the future of intelligent work

This event is your front-row seat to everything new and next across Microsoft 365, with hundreds of opportunities to learn directly from product makers and connect with the best community in tech.

The post The ultimate Microsoft 365 community event returns—your front‑row seat to the future of intelligent work appeared first on Microsoft 365 Blog.

General Availability of Quality Evaluation Agent’s conversation capabilities

Quality Evaluation Agent (QEA) in Dynamics 365 Customer Service and Dynamics 365 Contact Center is an AI-led evaluation framework that empowers teams to deliver consistent, scalable quality oversight and automate quality evaluations across customer interactions.

Beginning February 6, QEA conversation capabilities become generally available, joining case evaluation as a GA feature, as previously announced in this blog. This milestone expands QEA’s coverage and impact across customer support scenarios.

Looking forward:

QEA continues to evolve with key upcoming enhancements across the evaluation framework, including multilanguage support, criteria versioning, the ability to flag critical questions, simulation capabilities, knowledge source adherence, and more.

Get started today by enabling QEA in your Dynamics 365 Customer Service and Dynamics 365 Contact Center environment.

Learn more

Watch a quick video introduction.

For configuration steps, feature updates, and best practices, see Manage Quality Evaluation Agent | Microsoft Learn.

The post General Availability of Quality Evaluation Agent’s conversation capabilities appeared first on Microsoft Dynamics 365 Blog.

The Key Facets of AI Evaluation in the Contact Center

As AI becomes central to contact center operations–powering every customer engagement channel–evaluation is no longer a back-office technical exercise. Evaluation is a critical business capability that directly impacts customer experience, operational effectiveness, and business outcomes.

But evaluation is not one-dimensional. Organizations must think about when, how, by whom, and on what data AI systems are evaluated. This blog explores the key facets of AI evaluation and how they apply specifically to contact center environments.

1: Development-Stage vs. Production-Stage Evaluation

Definition: Evaluation at development time refers to tests and assessments performed during the AI software creation phase. Production-time evaluation occurs after deployment, when the AI software is live and serving real users.

Implications: Development-time evaluation provides a controlled environment to identify and address issues before AI reaches customers. This helps reduce risk and prevent costly failures. However, it cannot fully capture real-world complexity. Production-time evaluation reflects actual customer behavior and operating conditions. While it offers critical insight into true performance and experience, it must be managed carefully to avoid customer impact when issues surface.

How contact centers should think about this (example):
• Use development-time evaluation to prevent issues such as incorrect intent detection, poor prompt behavior, broken escalation flows, non-compliant responses, or unacceptable latency before they ever reach customers.
• Use production-time evaluation to detect and measure real customer impact, like drops in containment, rising transfers to human agents, customer frustration, regional or channel-specific issues, and performance degradation caused by real traffic patterns.
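The split between the two stages can be sketched in a few lines. This is a hypothetical illustration, not product code: `detect_intent`, the labeled test set, and the alarm threshold are all invented for the example.

```python
# Hypothetical sketch: the same accuracy check used as a hard release gate
# at development time and as a monitored, alarmed metric at production time.

def detect_intent(utterance: str) -> str:
    """Stand-in intent classifier; a real system would call a model."""
    if "refund" in utterance.lower():
        return "refund_request"
    return "general_inquiry"

# Development time: a labeled test set gates the release.
DEV_TEST_SET = [
    ("I want my money back, please issue a refund", "refund_request"),
    ("What are your opening hours?", "general_inquiry"),
]

def dev_time_accuracy() -> float:
    correct = sum(1 for text, label in DEV_TEST_SET
                  if detect_intent(text) == label)
    return correct / len(DEV_TEST_SET)

# Production time: the same metric is computed over live traffic and
# alarmed on, rather than blocking a deploy.
def production_alarm(observed_accuracy: float, threshold: float = 0.9) -> bool:
    """Return True when degraded accuracy should page an operator."""
    return observed_accuracy < threshold
```

The point of the sketch is that the metric is shared; only its role changes, from a pre-release gate to a live alarm.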

2: Manual vs. Automated Evaluation

Definition: Manual execution involves running evaluation tasks at human command. Automated evaluation runs at predetermined times or in response to triggers.

Implications: Manual evaluation brings human judgment, context, and nuance that automation alone cannot capture, making it especially valuable when evaluation needs are unpredictable, when environmental changes are not captured by automated triggers, or when automated runs would be too costly. Automated evaluation complements this, providing consistent, scalable coverage as AI systems evolve and reliably reevaluating systems after known changes, such as releases or configuration updates through CI/CD pipelines.

How contact centers should think about this:

• Use automation for baseline quality, regression, and continuous monitoring
• Use manual evaluation for exceptions, deep dives, and human judgment
• The best strategy combines both. Automation ensures assessment of critical changes, while human evaluators interpret results, investigate anomalies, and adapt to unexpected conditions.
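One way to picture the combination is automated triggers and manual runs sharing a single evaluation suite. The event names and the `run_evaluation` stub below are assumptions for illustration, not any product's API.

```python
from typing import Optional

# Assumed sketch: known change events (e.g. from a CI/CD pipeline) trigger
# the suite automatically; anything else is left to human judgment.

AUTOMATED_TRIGGERS = {"release", "config_change", "nightly"}

def run_evaluation(reason: str) -> dict:
    """Stand-in for a full evaluation suite; returns a result record."""
    return {"reason": reason, "passed": True}

def on_event(event: str) -> Optional[dict]:
    """Run the suite automatically only for recognized triggers."""
    if event in AUTOMATED_TRIGGERS:
        return run_evaluation(reason=f"automated:{event}")
    return None  # unrecognized events are left to manual judgment

def manual_run(analyst_note: str) -> dict:
    """Human-initiated run, e.g. to investigate an unexplained anomaly."""
    return run_evaluation(reason=f"manual:{analyst_note}")
```

Tagging each run with its reason keeps automated regression coverage and ad-hoc human investigations comparable in one result stream.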

3: Evaluations Run by the Platform vs. by Customers

Definition: Evaluations can be conducted internally by the developer organization (such as Microsoft) or externally by customers using the software in their own environments.

Implications: Developer-run and customer-run evaluations each provide distinct and necessary value. Internal evaluations establish a consistent baseline for quality, safety, and compliance. Customer-led evaluations surface real-world behaviors, operational constraints, and usage patterns that cannot be fully anticipated during development. Relying on only one limits visibility and can leave gaps in reliability or usability.

How contact centers should think about this:

• Rely on platform evaluations to establish a trusted baseline. This ensures core capabilities—such as accuracy, safety and compliance, latency, escalation behavior, and failure handling—meet enterprise standards before features are rolled out broadly.
• Platform providers should partner closely with customers, enabling them to run their own evaluations and deeply understand AI performance within their specific domains, workflows, and operating environments. This collaboration helps surface both expected and edge-case behaviors across positive and negative scenarios.

4: Synthetic Data vs. Production Traffic

Definition: Synthetic data refers to artificially generated datasets designed to simulate specific scenarios. Production traffic comprises actual user interactions and data generated during live operation.

Implications: Synthetic data enables safe, repeatable evaluation without exposing sensitive information or impacting real users; however, it may lack the complexity and unpredictability of production data. Production traffic delivers high-fidelity insights but carries risks of data leakage, performance degradation, or user impact. In terms of relevance, synthetic data is valuable for early-stage, edge-case, or privacy-sensitive evaluations, while production traffic is essential for verifying AI system behavior under real-world conditions.

How contact centers should think about this:

• Begin with synthetic data to evaluate safely and iterate quickly, especially when testing new scenarios, edge cases, or changes.
• Leverage production data to validate performance at scale, ensuring AI behaves as expected under real customer traffic and operating conditions.
• Treat production evaluation as a continuous monitoring and learning loop, focused on measuring impact and improving quality—rather than experimenting on live customers.
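A minimal sketch of the synthetic side: template-filling produces safe, repeatable test traffic with no customer data involved. The templates and product names below are invented for the example; real generators would be far richer.

```python
import random

# Assumed sketch: template-based synthetic customer utterances for safe,
# repeatable early-stage evaluation. No real customer data is involved.

TEMPLATES = [
    "I was charged twice for {product}, can you help?",
    "How do I reset my {product} password?",
]
PRODUCTS = ["Contoso Plus", "Contoso Basic"]

def synthesize(n: int, seed: int = 0) -> list:
    """Return n synthetic utterances; the seed makes runs repeatable."""
    rng = random.Random(seed)
    return [
        rng.choice(TEMPLATES).format(product=rng.choice(PRODUCTS))
        for _ in range(n)
    ]
```

Seeding the generator matters: a repeatable dataset lets two system versions be compared on identical inputs, which production traffic can never guarantee.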

5: Evaluation After vs. During Execution

Definition: Post-execution evaluation analyzes the results after a process or test run finishes, while in-execution (real-time) evaluation monitors and assesses behavior as it unfolds.

Implications: Post-execution evaluation enables deep analysis and long-term improvement, while in-execution evaluation allows faster detection and mitigation of issues. Using both helps contact centers balance insight with real-time protection of customer experience.

How contact centers should think about this:

• Post-conversation evaluation can provide a large amount of information about correctness, groundedness, and resolution effectiveness across completed AI interactions.
• Real-time evaluation of empathy and sentiment enables timely intervention, such as escalating to a human agent or allowing supervisor guidance during the interaction.
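The real-time side can be pictured as a per-turn check over the live transcript. This is a toy sketch: the word-list scorer stands in for a real sentiment model, and the escalation rule is invented for illustration.

```python
# Toy sketch of in-execution evaluation: escalate to a human agent when
# the most recent customer turns all score negative.

NEGATIVE_WORDS = {"angry", "frustrated", "terrible", "useless"}

def sentiment_score(turn: str) -> float:
    """Crude score in [-1, 1]; a real system would use a sentiment model."""
    words = set(turn.lower().split())
    return -1.0 if words & NEGATIVE_WORDS else 0.5

def should_escalate(turns: list, window: int = 2) -> bool:
    """Escalate when the last `window` customer turns are all negative."""
    recent = turns[-window:]
    return len(recent) == window and all(sentiment_score(t) < 0 for t in recent)
```

Windowing over recent turns, rather than reacting to a single message, is what lets the check run during the conversation without over-triggering on one-off complaints.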

Together, these approaches form a core part of AI evaluation in the contact center, helping organizations balance deep analysis with real‑time protections.

Final Thoughts: A Modern Evaluation Mindset

There is no single “right” way to evaluate AI systems. Instead, evaluation should be viewed as a multi-dimensional strategy that evolves alongside your AI systems.

By thoughtfully strategizing across evaluation dimensions, organizations can build AI systems that are not only intelligent, but also trustworthy, resilient, and customer-first. Evaluation is no longer optional: it is how modern organizations ensure AI delivers on its promise, every day.

Get more details:

Measuring What Matters: Redefining Excellence for AI Agents in the Contact Center

Evaluating AI Agents in Contact Centers: Introducing the Multi-modal Agents Score

The post The Key Facets of AI Evaluation in the Contact Center appeared first on Microsoft Dynamics 365 Blog.