This article is contributed. See the original author and article here.
Introduction:
The start of the new year has brought a wave of exciting enhancements to the Demand Planning module in Dynamics 365 Supply Chain Management. We’re thrilled to introduce you to five groundbreaking features that will redefine the way you approach demand planning. In this blog post, we’ll look at each feature, highlighting its benefits and showcasing live demos hosted by expert Anders Girke.
Feature 1: Edit on Total Level
The first feature in our January release is the new “Edit on Total Level” functionality, which empowers planners to expedite their planning workflows through efficient edits on a broader scale. Let’s explore the advantages:
✨ Edit on Total Level: Accelerate planning with efficient edits on a larger scale.
✨ Date Filters: Navigate and analyze data effortlessly.
✨ Distribute Proportional Over Time: Streamline workflows with proportional changes.
✨ Allocate Proportional Amongst Dimensions: Optimize precision in planning. (A sketch of the underlying idea follows this list.)
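Conceptually, the two proportional options work like the sketch below: when a planner edits a total, the change is spread across the underlying buckets in proportion to their current shares. This is only an illustration of the general idea, not Dynamics 365 code; the function name and the zero-total fallback are assumptions.

```python
# Hypothetical illustration of proportional distribution/allocation:
# scale each bucket so the buckets sum to new_total, preserving shares.
def distribute_proportionally(values: list[float], new_total: float) -> list[float]:
    current_total = sum(values)
    if current_total == 0:
        # Fallback assumption: split evenly when there is no existing share.
        return [new_total / len(values)] * len(values)
    return [v * new_total / current_total for v in values]

# Monthly forecast 100/200/300 edited to a total of 900 -> 150/300/450.
print(distribute_proportionally([100, 200, 300], 900))
```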
Feature 2: Filter in Transformation
The second feature in our January release series is “Filter in Transformation.” This powerful tool allows precise data transformation for enhanced what-if analysis and forecasting on a focused dataset. Here are the key benefits:
✨ Perform what-if forecasts on a filtered sub-set of data
✨ Filter staging data prior to transformation
✨ Ensure secure performance
✨ Experiment with dimensions to refine your planning
With these capabilities in hand, your demand planning just got a whole lot smarter!
Feature 3: Comments
The third installment of our January release series introduces “Comments.” This feature is set to transform collaboration and communication within the demand planning application. Key highlights include:
✨ Enhanced Communication: Provide detailed explanations for changes, fostering transparency.
✨ Real-time Collaboration: Facilitate consensus-building among team members.
Feature 4: System Administrator Role for Demand Planning
In this release, we introduce the pivotal role of the System Administrator for Demand Planning. This role is responsible for installing the app, assigning roles, managing teams, and overseeing critical operations. Highlights include:
✨ Role Level Access for Contributors: Empower limited users with the ability to view shared worksheets, create personalized views, and edit data within their permissions.
✨ Row Level Access Rules: Define conditions for specific tables, columns, and operators for unparalleled flexibility.
✨ Editing Demand Plans with Flexibility: Highlighting the power of role level access, added experience, and disaggregation in editing demand plans.
Get a sneak peek into the upcoming February release, emphasizing the balance between limiting filters for optimal performance and ensuring an exceptional user experience.
In conclusion, the January release of Dynamics 365 Supply Chain Management Demand Planning brings a wave of transformative features, including “Edit on Total Level,” “Filter in Transformation,” and “Comments,” giving planners tools that enhance efficiency and collaboration. The new System Administrator role, role level access for contributors, row level access rules, and advanced security features position the platform as a robust and secure solution for demand planning. With increased flexibility in editing demand plans and promising additions coming in the February release, Dynamics 365 is shaping a more streamlined and user-friendly demand planning experience. Stay tuned for ongoing updates and enhancements that will continue to elevate your planning processes!
North America Demand Planning Workshop
Join us at the forthcoming Demand Planning Workshop, hosted at Microsoft’s campus in Redmond, WA (98052). This event is tailored to introduce the innovative Demand Planning application to both our valued Customers and Partners.
This article is contributed. See the original author and article here.
We are updating our Microsoft Copilot product line-up with a new Copilot Pro subscription for individuals; expanding Copilot for Microsoft 365 availability to small and medium-sized businesses; and announcing no seat minimum for commercial plans.
This article is contributed. See the original author and article here.
Starting January 19, 2024, Microsoft Copilot in Dynamics 365 Customer Service will be automatically installed and enabled in your Dynamics 365 Customer Service environment. This update will install the case summarization and conversation summarization features. Case summarization is available to all users with a Dynamics 365 Customer Service Enterprise license; conversation summarization also requires a digital messaging or Voice add-on license.
If your organization has already enabled Copilot in Customer Service, there will be no change to your environment.
Key dates
Disclosure date: December 2023. Administrators received a notification about the change in the Microsoft 365 admin center and Power Platform admin center.
Installation date: January 19 – February 2, 2024. Copilot in Customer Service is installed and enabled by default.
Please note that specific dates for messages and auto-installation will vary based on the geography of your organization. The date applicable to your organization is in the messages in Microsoft 365 admin center and Power Platform admin center. Copilot auto-installation will occur only if your organization is in a geography where all Copilot data handling occurs “in geo.” These regions are currently Australia, United Kingdom, and United States. Organizations where Copilot data handling does not occur “in geo” must opt in to cross-geo data transmission to receive these capabilities.
What is Copilot in Dynamics 365 Customer Service?
Copilot in Customer Service is a key part of the Dynamics 365 Customer Service experience. Copilot provides real-time, AI-powered assistance to help customer support agents solve issues faster. By relieving them from mundane tasks such as searching and note-taking, Copilot gives them time for more high-value interactions with customers. Contact center managers can also use Copilot analytics to view Copilot usage and better understand how it impacts the business.
Why is Microsoft deploying this update?
We believe this update presents a significant opportunity to fundamentally alter the way your organization approaches service by quickly improving and enhancing the agent experience. The feedback we have received from customers who are already using Copilot has been overwhelmingly positive. Generative AI-based service capabilities have a profound impact on efficiency and customer experience, leading to improved customer satisfaction. This update applies only to the Copilot summarization capabilities, which integrate with service workflows and require minimal change management.
Learn more about Copilot in Dynamics 365 Customer Service
This article is contributed. See the original author and article here.
Problem:
===========
Assume that you have tables with identity columns declared as data type INT, and that you are using automatic identity range management for those articles in a merge publication.
The publication has one or more Subscribers, and you tried to re-initialize one Subscriber using a new snapshot.
The Merge Agent fails with this error:
>> Source: Merge Replication Provider
Number: -2147199417
Message: The Publisher failed to allocate a new set of identity ranges for the subscription. This can occur when a Publisher or a republishing Subscriber has run out of identity ranges to allocate to its own Subscribers or when an identity column data type does not support an additional identity range allocation. If a republishing Subscriber has run out of identity ranges, synchronize the republishing Subscriber to obtain more identity ranges before restarting the synchronization. If a Publisher runs out of identit
Cause:
============
The identity range that the Merge Agent is trying to allocate exceeds the maximum value that the INT data type can hold.
Resolution
=================
Assume that the publisher database has only one merge publication with two Subscribers, and examine your merge articles' definitions. (The article metadata screenshot is not reproduced here.)
As you can see there, the diff_pub_range_end_max_used column, which represents the remaining INT headroom above the last allocated range, is zero for tblCity.
When the Merge Agent runs, it has to allocate two ranges for each server involved.
In the example above we have one Publisher and two Subscribers, and @identity_range is 1000, so ranges must be allocated for three servers: 3 * (2 * 1000) = 6000.
diff_pub_range_end_max_used must therefore be greater than 6000; only then can a new range be allocated for all the servers.
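To make the arithmetic concrete, here is a tiny illustrative sketch (not product code) of the headroom check described above, treating diff_pub_range_end_max_used as the remaining INT headroom:

```python
# Illustration only: the Merge Agent must carve (servers * 2) identity
# ranges out of the remaining headroom (diff_pub_range_end_max_used).

def required_headroom(server_count: int, identity_range: int) -> int:
    """Each server (Publisher + Subscribers) gets a primary and a secondary range."""
    return server_count * 2 * identity_range

def can_allocate(diff_pub_range_end_max_used: int,
                 server_count: int, identity_range: int) -> bool:
    return diff_pub_range_end_max_used > required_headroom(server_count, identity_range)

# One Publisher + two Subscribers with @identity_range = 1000 -> need > 6000.
print(required_headroom(3, 1000))  # 6000
print(can_allocate(0, 3, 1000))    # False: tblCity has zero headroom left
```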
To resolve the issue:
1. Remove the tblCity table from the publication.
2. Change the data type of the identity column from int to bigint and add the table back to the publication.
3. Generate a new snapshot. Snapshots are generated for all articles, but only this one table is re-added to the existing Subscribers.
An illustrative sketch of these steps follows.
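Below is a minimal sketch, run from Python via pyodbc, of what these steps could look like. The publication name (MyMergePub), table and column names (tblCity, CityID), range sizes, and connection string are all placeholder assumptions, and constraint handling around the ALTER COLUMN is omitted.

```python
# Illustrative sketch only, not official guidance. If the identity column
# is part of a primary key or index, those constraints must be dropped and
# recreated around the ALTER COLUMN step (omitted here).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=PubServer;"
    "DATABASE=PubDb;Trusted_Connection=yes;",
    autocommit=True,  # replication procs and DDL are simpler outside a transaction
)
cur = conn.cursor()

# Step 1: remove the article from the merge publication.
cur.execute(
    "EXEC sp_dropmergearticle @publication = N'MyMergePub', "
    "@article = N'tblCity', @force_invalidate_snapshot = 1;"
)

# Step 2: widen the identity column from INT to BIGINT.
cur.execute("ALTER TABLE dbo.tblCity ALTER COLUMN CityID bigint NOT NULL;")

# Step 3: add the table back with automatic identity range management.
cur.execute(
    "EXEC sp_addmergearticle @publication = N'MyMergePub', "
    "@article = N'tblCity', @source_object = N'tblCity', "
    "@identityrangemanagementoption = N'auto', "
    "@identity_range = 1000, @pub_identity_range = 10000, "
    "@force_invalidate_snapshot = 1;"
)

# Finally, regenerate the snapshot by starting the publication's
# Snapshot Agent job (for example from SQL Server Agent).
```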
This article is contributed. See the original author and article here.
TL;DR: This post navigates the intricate world of AI model upgrades, with a spotlight on Azure OpenAI’s embedding models such as text-embedding-ada-002. We emphasize the critical importance of consistent model versioning in ensuring accuracy and validity in AI applications. The post also addresses the challenges and strategies essential for effectively managing model upgrades, focusing on compatibility and performance testing.
Introduction
What are Embeddings?
Embeddings in machine learning are more than just data transformations. They are the cornerstone of how AI interprets the nuances of language, context, and semantics. By converting text into numerical vectors, embeddings allow AI models to measure similarities and differences in meaning, paving the way for advanced applications in various fields.
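As a concrete illustration of measuring similarity in meaning, the sketch below embeds two sentences and compares them with cosine similarity. It assumes an Azure OpenAI resource with a text-embedding-ada-002 deployment named "ada-002"; the deployment name, endpoint, and key are placeholders.

```python
# Minimal sketch: turn text into vectors and compare meanings numerically.
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-KEY",                                       # placeholder
    api_version="2023-05-15",
)

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="ada-002", input=text)
    return np.array(resp.data[0].embedding)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related sentences score higher than unrelated ones.
print(cosine_similarity(embed("The cat sat on the mat."), embed("A kitten rests on a rug.")))
print(cosine_similarity(embed("The cat sat on the mat."), embed("Quarterly revenue grew 8%.")))
```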
Importance of Embeddings
In the complex world of data science and machine learning, embeddings are crucial for handling intricate data types like natural language and images. They transform these data into structured, vectorized forms, making them more manageable for computational analysis. This transformation isn’t just about simplifying data; it’s about retaining and emphasizing the essential features and relationships in the original data, which are vital for precise analysis and decision-making.
Embeddings significantly enhance data processing efficiency. They allow algorithms to swiftly navigate through large datasets, identifying patterns and nuances that are difficult to detect in raw data. This is particularly transformative in natural language processing, where comprehending context, sentiment, and semantic meaning is complex. By streamlining these tasks, embeddings enable deeper, more sophisticated analysis, thus boosting the effectiveness of machine learning models.
Implications of Model Version Mismatches in Embeddings
Let’s discuss the potential impacts and challenges that arise when different versions of embedding models are used within the same domain, specifically focusing on Azure OpenAI embeddings. When embeddings generated by one version of a model are applied or compared with data processed by a different version, various issues can arise. These issues are not only technical but also have practical implications for the efficiency, accuracy, and overall performance of AI-driven applications.
Compatibility and Consistency Issues
Vector Space Misalignment: Different versions of embedding models might organize their vector spaces differently. This misalignment can lead to inaccurate comparisons or analyses when embeddings from different model versions are used together.
Semantic Drift: Over time, models might be trained on new data or with updated techniques, causing shifts in how they interpret and represent language (semantic drift). This drift can cause inconsistencies when integrating new embeddings with those generated by older versions, as illustrated in the sketch below.
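To see why this matters in practice, the sketch below embeds the same sentence with two deployments pinned to different model versions and measures how far apart the resulting vectors are. The deployment names "ada-002-v1" and "ada-002-v2" are hypothetical; endpoint and key are placeholders.

```python
# Hedged illustration of vector-space misalignment: the same sentence
# embedded by two model versions yields vectors that are NOT directly
# comparable, even though each is valid within its own vector space.
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
                     api_key="YOUR-KEY", api_version="2023-05-15")

def embed(deployment: str, text: str) -> np.ndarray:
    resp = client.embeddings.create(model=deployment, input=text)
    return np.array(resp.data[0].embedding)

text = "Order #1234 was delayed by two days."
v_old = embed("ada-002-v1", text)  # hypothetical deployment on the old version
v_new = embed("ada-002-v2", text)  # hypothetical deployment on the new version

cos = float(v_old @ v_new / (np.linalg.norm(v_old) * np.linalg.norm(v_new)))
# Cross-version similarity of the *same* text can fall well below 1.0,
# so mixing versions in one index silently degrades search quality.
print(f"cross-version similarity: {cos:.3f}")
```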
Impact on Performance
Reduced Accuracy: Inaccuracies in semantic understanding or context interpretation can occur when different model versions process the same text, leading to reduced accuracy in tasks like search, recommendation, or sentiment analysis.
Inefficiency in Data Processing: Mismatches in model versions can require additional computational resources to reconcile or adjust the differing embeddings, leading to inefficiencies in data processing and increased operational costs.
Best Practices for Upgrading Embedding Models
Upgrading Embedding – Overview
Now let’s move on to the process of upgrading an embedding model, focusing on the steps you should take before making a change, important questions to consider, and key areas for testing.
Pre-Upgrade Considerations
Assessing the Need for Upgrade:
Why is the upgrade necessary?
What specific improvements or new features does the new model version offer?
How will these changes impact the current system or process?
Understanding Model Changes:
What are the major differences between the current and new model versions?
How might these differences affect data processing and results?
Data Backup and Version Control:
Ensure that current data and model versions are backed up.
Implement version control to maintain a record of changes (a sketch of this idea follows below).
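One lightweight way to apply version control to embeddings, sketched below with assumed type and field names, is to stamp every stored vector with the model and version that produced it and refuse writes that do not match the store's expected version:

```python
# Minimal sketch: a plain dict stands in for a vector database; the point
# is the version stamp carried alongside every vector.
from dataclasses import dataclass, field

@dataclass
class StoredEmbedding:
    vector: list[float]
    model: str          # e.g. "text-embedding-ada-002"
    model_version: str  # e.g. "2"

@dataclass
class VersionedStore:
    expected_version: str
    items: dict = field(default_factory=dict)

    def put(self, key: str, emb: StoredEmbedding) -> None:
        if emb.model_version != self.expected_version:
            raise ValueError(f"{key}: embedding from version {emb.model_version}, "
                             f"store expects {self.expected_version}; re-embed first")
        self.items[key] = emb

store = VersionedStore(expected_version="2")
store.put("doc-1", StoredEmbedding([0.1, 0.2], "text-embedding-ada-002", "2"))
```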
Questions to Ask Before Upgrading
Compatibility with Existing Systems:
Is the new model version compatible with existing data formats and infrastructure?
What adjustments, if any, will be needed to integrate the new model?
Cost-Benefit Analysis:
What are the anticipated costs (monetary, time, resources) of the upgrade?
How do these costs compare to the expected benefits?
Long-Term Support and Updates:
Does the new model version have a roadmap for future updates and support?
How will these future changes impact the system?
Key Areas for Testing
Performance Testing:
Test the new model version for performance improvements or regressions.
Compare accuracy, speed, and resource usage against the current version.
Compatibility Testing:
Ensure that the new model works seamlessly with existing data and systems.
Test for any integration issues or data format mismatches.
Fallback Strategies:
Develop and test fallback strategies in case the new model does not perform as expected.
Ensure the ability to revert to the previous model version if necessary.
Post-Upgrade Best Practices
Monitoring and Evaluation:
Continuously monitor the system’s performance post-upgrade.
Evaluate whether the upgrade meets the anticipated goals and objectives.
Feedback Loop:
Establish a feedback loop to collect user and system performance data.
Use this data to make informed decisions about future upgrades or changes.
Upgrading Embedding – Conclusion
Upgrading an embedding model involves careful consideration, planning, and testing. By following these guidelines, customers can ensure a smooth transition to the new model version, minimizing potential risks and maximizing the benefits of the upgrade.
Use Cases in Azure OpenAI and Beyond
Embeddings can significantly enhance the performance of various AI applications by enabling more efficient data handling and processing. Here’s a list of use cases where embeddings can be effectively utilized:
Enhanced Document Retrieval and Analysis: By first performing embeddings on paragraphs or sections of documents, you can store these vector representations in a vector database. This allows for rapid retrieval of semantically similar sections, streamlining the process of analyzing large volumes of text. When integrated with models like GPT, this method can reduce the computational load and improve the efficiency of generating relevant responses or insights.
Semantic Search in Large Datasets: Embeddings can transform vast datasets into searchable vector spaces. In applications like eCommerce or content platforms, this can significantly improve search functionality, allowing users to find products or content based not just on keywords, but on the underlying semantic meaning of their queries. (A small sketch of this pattern follows this list.)
Recommendation Systems: In recommendation engines, embeddings can be used to understand user preferences and content characteristics. By embedding user profiles and product or content descriptions, systems can more accurately match users with recommendations that are relevant to their interests and past behavior.
Sentiment Analysis and Customer Feedback Interpretation: Embeddings can process customer reviews or feedback by capturing the sentiment and nuanced meanings within the text. This provides businesses with deeper insights into customer sentiment, enabling them to tailor their services or products more effectively.
Language Translation and Localization: Embeddings can enhance machine translation services by understanding the context and nuances of different languages. This is particularly useful in translating idiomatic expressions or culturally specific references, thereby improving the accuracy and relevancy of translations.
Automated Content Moderation: By using embeddings to understand the context and nuance of user-generated content, AI models can more effectively identify and filter out inappropriate or harmful content, maintaining a safe and positive environment on digital platforms.
Personalized Chatbots and Virtual Assistants: Embeddings can improve how virtual assistants or chatbots understand user queries, leading to more accurate and contextually appropriate responses and thus enhancing user experience. With similar logic, they could help route natural language to specific APIs; see the CompactVectorSearch repository as an example.
Predictive Analytics in Healthcare: In healthcare data analysis, embeddings can help in interpreting patient data, medical notes, and research papers to predict trends, treatment outcomes, and patient needs more accurately.
In all these use cases, the key advantage of using embeddings is their ability to process and interpret large and complex datasets more efficiently. This not only improves the performance of AI applications but also reduces the computational resources required, especially for high-cost models like GPT. This approach can lead to significant improvements in both the effectiveness and efficiency of AI-driven systems.
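To make the semantic-search use case above concrete, here is a small self-contained sketch that embeds a tiny corpus once and ranks it against a query by cosine similarity. The deployment name, endpoint, and key are placeholder assumptions.

```python
# Sketch of the semantic-search pattern: embed a corpus once, store the
# vectors, then rank documents against a query vector by cosine similarity.
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
                     api_key="YOUR-KEY", api_version="2023-05-15")

def embed(text: str) -> np.ndarray:
    return np.array(client.embeddings.create(model="ada-002", input=text).data[0].embedding)

docs = {
    "faq-1": "How do I reset my password?",
    "faq-2": "What is your refund policy?",
    "faq-3": "How can I track my shipment?",
}
index = {doc_id: embed(text) for doc_id, text in docs.items()}  # embed once, reuse

def search(query: str, top_k: int = 2) -> list[tuple[str, float]]:
    q = embed(query)
    scored = [(doc_id, float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))))
              for doc_id, v in index.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

print(search("where is my package"))  # likely ranks faq-3 first
```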
Specific Considerations for Azure OpenAI
Model Update Frequency: Understanding how frequently Azure OpenAI updates its models and the nature of these updates (e.g., major vs. minor changes) is crucial.
Backward Compatibility: Assessing whether newer versions of Azure OpenAI’s embedding models maintain backward compatibility with previous versions is key to managing version mismatches.
Version-Specific Features: Identifying features or improvements specific to certain versions of the model helps in understanding the potential impact of using mixed-version embeddings.
Strategies for Mitigation
Version Control in Data Storage: Implementing strict version control for stored embeddings ensures that data remains consistent and compatible with the model version used for its generation.
Compatibility Layers: Developing compatibility layers or conversion tools to adapt older embeddings to newer model formats can help mitigate the effects of version differences.
Baseline Tests: Create a few simple baseline tests that would identify any drift in the embeddings; a sketch of such a test follows below.
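A baseline drift test can be as simple as the sketch below: keep a handful of reference sentences and their stored vectors, re-embed them after any model change, and flag items whose similarity to the baseline falls below a threshold. The JSON file format and the 0.95 threshold are assumptions to tune per workload.

```python
# Sketch of a baseline drift check; embed_fn is any text -> vector function,
# such as the embed() helper from the earlier sketches.
import json
import numpy as np

def cosine(a, b) -> float:
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_drift(baseline_path: str, embed_fn, threshold: float = 0.95) -> list[str]:
    """Return the baseline sentences whose fresh embedding drifted too far."""
    with open(baseline_path) as f:
        baseline = json.load(f)  # assumed format: [{"text": ..., "vector": [...]}, ...]
    return [item["text"] for item in baseline
            if cosine(item["vector"], embed_fn(item["text"])) < threshold]

# Example usage after an upgrade:
# drifted = check_drift("baseline_embeddings.json", embed)
# assert not drifted, f"Embedding drift detected for: {drifted}"
```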
Azure OpenAI Model Versioning: Understanding the Process
Azure OpenAI provides a systematic approach to model versioning, applicable to models like text-embedding-ada-002:
Regular Model Releases:
New models are released periodically with improvements and new features.
Conclusion
Model version mismatches in embeddings, particularly in the context of Azure OpenAI, pose significant challenges that can impact the effectiveness of AI applications. Understanding these challenges and implementing strategies to mitigate their effects is crucial for maintaining the integrity and efficiency of AI-driven systems.
References
“Learn about Azure OpenAI Model Version Upgrades.” Microsoft Tech Community.