Introduction
In fast-paced, complex supply chain environments, ensuring product quality throughout the journey from supplier to customer is more critical than ever. We’re excited to address this with a powerful new feature in the Microsoft Dynamics 365 Supply Chain Management Landed Cost module that enables quality control for goods in-transit orders.
Addressing a Critical Gap in Supply Chain Management
Traditionally, quality control measures in supply chain management focus on initial stages of production and receipt of purchase orders at their final destination. We see a growing need for more comprehensive quality assurance processes covering all phases, including the in-transit phase. Now, businesses can conduct quality checks on goods while they are in transit. This new feature ensures product integrity throughout the entire supply chain journey.
How it works
The quality control for goods in-transit feature is seamlessly integrated into the Dynamics 365 SCM framework. Here’s how it enhances the supply chain process:
Set up goods in-transit orders in quality associations: Businesses can now define the goods in-transit order as a new quality association type with a pre-defined event-blocking approach. This proactive measure ensures any potential quality issues can be identified and addressed before the goods reach their final destination.
Automatic quality order creation: During the goods in-transit order registration/receive operation, depending on the configuration from the previous step, the corresponding quality order is created automatically to reflect the quality control.
Order control and release: Depending on the configuration, an unfinished or failed quality order blocks the downstream business operation. This control makes it easy for businesses to adopt and implement the feature without significant changes to their current quality control processes for goods in-transit orders.
Benefits of Quality Control for Goods In-Transit
Implementing quality control for goods in-transit offers several significant advantages:
Enhanced Supply Chain Reliability: By ensuring quality at every stage, businesses can significantly reduce the risk of receiving defective or non-compliant goods.
Cost Efficiency: Early detection of quality issues minimizes the need for costly rework or returns, leading to substantial cost savings.
Regulatory Compliance: The feature supports compliance with various regulatory standards, ensuring that products meet all necessary legal requirements.
Improved Customer Satisfaction: Delivering high-quality products consistently enhances customer trust and satisfaction, ultimately driving business growth.
Conclusion
The introduction of quality control for goods in-transit orders in Microsoft Dynamics 365 SCM represents a significant advancement in supply chain management. It empowers businesses to ensure product quality at every stage of the supply chain, from production to final delivery. By adopting this feature, companies can enhance their supply chain integrity, reduce costs, comply with regulatory standards, and deliver superior products to their customers.
Stay tuned for more updates as we continue to innovate and expand the capabilities of Microsoft Dynamics 365 SCM to meet the evolving needs of the global supply chain.
Kick off your journey with SharePoint Embedded. At the SharePoint Embedded for Enterprise Apps events, you’ll explore best practices for your projects, glimpse the future of SharePoint Embedded, and learn to integrate Copilot into document-centric apps. We’re eager for your feedback and experiences; your creations shape ours.
The SharePoint Embedded product team is coming to New York City and London in September! Come join us for an all-day event to learn how SharePoint Embedded can deliver Copilot, Collaboration, Compliance, and Core Enterprise Storage for your document-centric apps. Specifically, you’ll have the opportunity to do the following:
Learn about SharePoint Embedded, a new way to build file- and document-centric apps.
Get hands-on coding experience with this new technology and learn how to build your own custom app.
Take a deep dive into critical features, like compliance, collaboration, and Copilot.
Hear from others who have implemented SharePoint Embedded solutions.
Get insight into the SharePoint Embedded roadmap.
New York City, US
Date: Thursday, September 12th, 9AM-7PM (times are approximate, including social hour)
Where: Microsoft Offices NYC Times Square
London, UK
Date: Thursday, September 26th, 9AM-7PM (times are approximate, including social hour)
Where: Central London, UK (exact location TBD)
RSVP details (please note that this event is only open to certain countries, and registrations from the following will not be accepted: Russia, Belarus):
21+, free event, no registration fees
First come, first served (limited seats)
1 RSVP = 1 person
NDA required (if your company does not have an NDA on record, one will be sent)
NDA must be signed to attend event
Event will be IN PERSON ONLY and will not be recorded
Bring your own device for coding portions (tablets and smartphones will not work)
Leaving the schedule board today can be cumbersome because you have to re-enter your preferred settings every time you come back. You may also find it frustrating that your admin has the power to override your choices and reset the board to the default settings. Wouldn’t it be nice if you could save your personal preferences and have them ready when you need them? The schedule board now boasts improved navigation patterns to help YOU manage your schedules more efficiently!
Remember my board
The schedule board now works with your computer’s local cache to reload with the last accessed parameters as chosen by you, no configuration necessary! That means you can leave the schedule board to check on your resources, update requirements, or even grab a hot cuppa, all while your board stays the way you left it.
The cache will save and reload the following parameters automatically:
Last accessed tab: Save time by not having to reload the tab, its relevant resources, and bookings
Map panel open/closed: The map remains in the state that you left it in
View type: Gantt or list view of the schedule board
View mode: hourly/daily/weekly
Board start date: Continue with the last accessed date range; resets to today’s date after 15 minutes
Column width: The zoom level of the board stays the way you want it
Many of our users have told us about their struggles trying to return to today’s date when switching between date ranges. We’ve thus added a new “Today” button next to the date range control, that helps you quickly return to today’s date range, wherever you may be.
Shareable links, as Easy as 1 2 3
What if you want to share your settings with others or add a bookmark of your settings to your browser? We’ve added a new one-click button that helps you generate a URL link that captures all the following schedule board parameters:
Last accessed tab
Map panel open/closed
View type: Gantt or list view
View mode: hourly/daily/weekly
Column width: zoom level of the board
Saving and sharing your favorite board setup has never been easier!
Step 1: Click on the “…” more button at the top right of the schedule board
Step 2: Click on “Copy link” button
Step 3: The generated link has been saved to your clipboard.
The use cases are numerous, for example:
Add the copied link to a bookmark in your browser. Whenever you click on this bookmarked link, the browser will launch the board with your preferred parameters.
Share the link with your colleagues/team to share a setup that works for you, and help them optimize their workflow.
This month, we have launched a redesigned Microsoft Purview eDiscovery product experience in public preview. This improved user experience revolutionizes your data search, review and export tasks within eDiscovery. Our new user-friendly and feature-rich eDiscovery experience is not just about finding and preserving data, it’s about doing it with unprecedented efficiency and ease. The modern user experience of eDiscovery addresses some long-standing customer requests, such as enhanced search capabilities with MessageID, Sensitive Information Types (SITs) and sensitivity labels. It also introduces innovative features like draft query with Copilot and search using audit log. These changes, driven by customer feedback and our commitment to innovation, offer tangible value by saving time and reducing costs in the eDiscovery process.
The new eDiscovery experience is exclusively available in the Microsoft Purview portal. The new Microsoft Purview portal is a unified platform that streamlines data governance, data security, and data compliance across your entire data estate. It offers a more intuitive experience, allowing users to easily navigate and manage their compliance needs.
Unified experience
One of the benefits of the new, improved eDiscovery is a unified, consistent, and intuitive experience across different licensing tiers. Whether your license includes eDiscovery standard or premium, you can use the same workflow to create cases, conduct searches, apply holds, and export data. This simplifies the training and education process for organizations that upgrade their license and want to access premium eDiscovery features. Unlike the previous experience, where Content Search, eDiscovery (Standard), and eDiscovery (Premium) had different workflows and behaviors, the new experience lets you access eDiscovery capabilities seamlessly regardless of your license level. E5 license holders have the option to use premium features such as exporting cloud attachments and Teams conversation threading at the appropriate steps in the workflow. Moreover, users still have access to all existing Content Searches and both Standard and Premium eDiscovery cases on the unified eDiscovery case list page in the Microsoft Purview portal.
The new experience also strengthens the security controls for Content Search by placing them in an eDiscovery case. This allows eDiscovery administrators to control who can access and use existing Content Searches and generated exports. Administrators can add or remove users from the Content Search case as needed. This way, they can prevent unauthorized access to sensitive search data and stop Content Search when it is no longer required. Moreover, this helps maintain the integrity and confidentiality of the investigation process. The new security controls ensure that only authorized personnel can access sensitive data, reducing the risk of data breaches and complying with legal and regulatory standards.
Enhanced data source management
Efficient litigation and investigation workflows hinge on the ability to precisely select data sources and locations in the eDiscovery process. This enables legal teams to swiftly preserve relevant information and minimize the risk of missing critical evidence. The improved data source picking capability allows for a more targeted and effective search, which is essential in responding to legal matters or internal investigations. It enables users to apply holds and conduct searches with greater accuracy, ensuring that all pertinent information is captured without unnecessary data proliferation. This improvement not only enhances the quality of the review, but also reduces the overall costs associated with data storage and management.
The new eDiscovery experience makes data source location mapping and management better as well. You can now perform a user or group search with different identifiers and see their data hierarchy tree, including their mailbox and OneDrive. For example, eDiscovery users can use any of the following identifiers: Name, user principal name (UPN), SMTP address, or OneDrive URL. The data source picker streamlines the eDiscovery workflow by displaying all potential matches and their locations, along with related sources such as frequent collaborators, group memberships, and direct reports. This allows for the addition of these sources to search or hold scope without relying on external teams for information on collaboration patterns, Teams/Group memberships, or organizational hierarchies.
Figure 1: New data source view with the ability to associate a person’s mailbox and OneDrive, explore a person’s frequent collaborators, and query data source updates.
The “sync” capability in the new data source management flow is a significant addition that ensures eDiscovery users are always informed about the latest changes in data locations. With this feature, users can now query whether a specific data source has newly provisioned data locations or if any have been removed. For example, if a private channel is created for a Teams group, this feature alerts eDiscovery users to the new site’s existence, allowing them to quickly and easily include it in their search scope, ensuring no new data slips through the cracks. This real-time update capability empowers users to make informed decisions about including or excluding additional data locations in their investigations. This capability ensures that their eDiscovery process remains accurate and up-to-date with the latest data landscape changes. It is a proactive approach to data management that enhances the efficiency and effectiveness of eDiscovery operations, providing users with the agility to adapt to changes swiftly.
Improved integration with Microsoft Information Protection
The new eDiscovery experience now supports querying by Sensitive Information Types (SITs) and sensitivity labels. Labeling, classifying, and encrypting your organization’s data is a best practice that serves multiple essential purposes. It helps to ensure that sensitive information is handled appropriately, reducing the risk of unauthorized access and data breaches. By classifying data, organizations can apply the right level of protection to different types of information, which is crucial for compliance with various regulations and standards. Moreover, encryption adds a layer of security that keeps data safe even if it falls into the wrong hands. It ensures that only authorized users can access and read the information, protecting it from external threats and internal leaks.
The new eDiscovery search functionality supports searches for emails and documents classified by SITs or specific sensitivity labels, facilitating the collection and review of data aligned with its classification for thorough investigations. This capability compresses the volume of evidence required for review, significantly reducing both the time and cost of the process. The support of efficient document location and management by targeting specific sensitivity labels unlocks the ability for organizations to validate and understand how sensitivity labels are utilized. This is exemplified by the ability to conduct collections across locations or the entire tenant for a particular label, using the review set to assess label application. Additionally, combining this with SIT searches helps verify correct data classification. For example, it ensures that all credit card data is appropriately labeled as highly confidential by reviewing items containing credit card data that are not marked as such, thereby streamlining compliance and adherence to security policies.
Figures 2 and 3: Better integration with Microsoft Information Protection means the ability to search labeled and protected data by SIT and sensitivity label.
Enhanced investigation capabilities
The new eDiscovery experience introduces a powerful capability to expedite security investigations, particularly in scenarios involving a potentially compromised account. By leveraging the ability to search by audit log, investigators can swiftly assess the account’s activities, pinpointing impacted files. As part of the investigative feature, eDiscovery search can also make use of an evidence file as search input. It enables a rapid analysis of file content patterns or signatures. This feature is crucial for identifying similar or related content, providing a streamlined approach to discover if sensitive files have been copied or moved, thereby enhancing the efficiency and effectiveness of the security response.
The enhanced search capability by identifier in the new eDiscovery UX is a game-changer for customers, offering a direct route to the exact message or file needed. With the ability to search using a messageID for mailbox items or a path for SharePoint items, users can quickly locate and retrieve the specific item they require. This precision not only streamlines evidence collection but also accelerates the process of purging leaked data for spillage cleanup. It’s a significant time-saver that simplifies the workflow, allowing customers to focus on what matters most – securing and managing their digital environment efficiently, while targeting relevant data.
Building on the data spillage scenario, our search and purge tool for mailbox items, including Teams messages, also received a significant 10x enhancement. Where previously administrators could only purge 10 items per mailbox location, they can now purge up to 100 items per mailbox location. This enhancement is a benefit for administrators tasked with responding to data spills or needing to remediate data within Teams or Exchange, allowing for a more comprehensive and efficient purge process. With all these investigative capability updates, now the security operations team is ready to embrace the expanded functionality and take their eDiscovery operations to the next level.
Microsoft Security Copilot capabilities
The recently released Microsoft Security Copilot’s capabilities in eDiscovery are transformative, particularly in generating KeyQL from natural language and providing contextual summarization and answering abilities in review sets. These features significantly lower the learning curve for KeyQL, enabling users to construct complex queries with ease. Instead of mastering the intricacies of KeyQL, users can simply describe what they are looking for using natural language, and Copilot translates that into a precise KeyQL statement. This not only saves time but also makes the power of eDiscovery accessible to a broader range of users, regardless of their technical expertise.
Figure 4: Draft query faster with Copilot’s N2KeyQL capability.
Moreover, Copilot’s summarization skills streamline the review process by distilling key insights from extensive datasets. Users can quickly grasp the essence of large volumes of data, which accelerates the review process and aids in identifying the most pertinent information. This is particularly beneficial in legal and compliance contexts, where time is often of the essence, and the ability to rapidly process and understand information can have significant implications.
Figure 5: Copilot summarization skill in Review Set helps reviewers assess content by providing a summary of the item, even when the conversation is not in English.
Additional export options
The new eDiscovery experience introduces a highly anticipated suite of export setting enhancements. The contextual conversation setting is now distinct from the conversation transcript setting, offering greater flexibility in how Teams conversations are exported. The ability to export into a single PST allows for the consolidation of files/items from multiple locations, simplifying the post-export workflow. Export can now give friendly names to each item, eliminating the need for users to decipher item GUIDs, and making identification straightforward. Truncation in export addresses the challenges of zip file path character limits. Additionally, the expanded versioning options empower users to include all versions or select the latest 10 or 100, providing tailored control over the data. These improvements not only meet user expectations but also significantly benefit customers by streamlining the eDiscovery process and enhancing overall efficiency.
Additional enhancements
As part of the new experience, we are introducing the review set query report, which generates a hit-by-term report based on a KQL query. This query report allows users to quickly see the count and volume of items hit on a particular keyword or a list of compound queries, and can be optionally downloaded. By providing a detailed breakdown of where and how often each term appears, it streamlines the review by focusing on the most relevant documents, reducing the volume of data that needs to be manually reviewed, and offers a better understanding of which terms may be too broad or too narrow.
As part of the improved user experience, all long-running processes now show a transparent and informative progress bar. This progress bar provides users with real-time visibility into the status of their searches and exports, allowing eDiscovery practitioners to better plan their workflow and manage their time effectively. This feature is particularly beneficial in the context of legal investigations, where timing is often critical, and users need to anticipate when they can proceed to the next steps. This level of process transparency allows users to stay informed and make decisions accordingly.
Figure 6: Transparent progress bar for all long-running processes detailing scope of the process and estimated time to complete.
In addition to progress transparency, all processes in the new eDiscovery experience will include a full report detailing the information related to completed processes. The defensibility of eDiscovery cases and investigations is paramount. The full reporting capabilities for processes such as exports, searches, and holds provide critical transparency. For example, it allows for a comprehensive audit of what was searched or exported, the specific timing, and the settings used. For customers, this means a significant increase in trust and defensibility of the eDiscovery process. This enhancement not only bolsters the integrity of the eDiscovery process but also reinforces the commitment to delivering customer-centric solutions that meet the rigorous demands of legal compliance and data management.
Hold policy detail view also received an upgrade as part of this new eDiscovery release. Customers now can access the hold policy view with detailed information on all locations and their respective hold status. This detailed view is instrumental in providing a transparent audit of what location is on hold, ensuring that all relevant data is preserved, and that no inadvertent destruction of evidence occurs during the process. Customers can download and analyze the full detailed hold location report, ensuring that all necessary content is accounted for and that legal obligations are met.
As we conclude this exploration of the modernized Microsoft Purview eDiscovery (preview) experience, it’s clear that the transformative enhancements are set to redefine the landscape of legal compliance and security investigations. The new experience, with its intuitive design and comprehensive set of new capabilities, streamlines the eDiscovery process, making it more efficient and accessible than ever before. The new eDiscovery experience is currently in public preview and is expected to be Generally Available by the end of 2024.
Thank you for joining us on this journey through the latest advancements in eDiscovery. We hope these enhancements improve your day-to-day experience and ultimately streamline the eDiscovery process, making it more efficient and accessible than ever before.
Learn more
We are excited to see how these changes will empower legal and compliance teams to achieve new levels of efficiency and effectiveness in their important work. Check out our interactive guide at https://aka.ms/eDiscoverynewUX to better understand the changes in eDiscovery. As always, we are eager to hear your feedback and continue innovating to improve your experience. We welcome your thoughts via the Microsoft Purview portal’s feedback button.
To learn more about eDiscovery, visit our Microsoft documentation at http://aka.ms/eDiscoveryPremium, or our “Become an eDiscovery Ninja” page at https://aka.ms/ediscoveryninja. If you have yet to try Microsoft Purview solutions, we are happy to share that there is an easy way for eligible customers to begin a free trial within the Microsoft Purview compliance portal. By enabling the trial in the compliance portal, you can quickly start using all capabilities of Microsoft Purview, including Insider Risk Management, Records Management, Audit, eDiscovery, Communication Compliance, Information Protection, Data Lifecycle Management, Data Loss Prevention, and Compliance Manager.
In this article, we are going to provide detailed steps to create a scheduled Azure SQL Database backup to storage account using automation. This is a useful technique for maintaining regular backups of your database and storing them in a secure and accessible location. You will get an actual backup of Azure SQL Database stored in a storage account in .bacpac format, which you can restore or migrate as needed. The automation process involves creating an automation account that triggers a PowerShell script through a runbook to run the backup command and save the output to a blob container.
Prerequisites
Azure Storage account: The storage account is needed to host the database backups. You are required to set up a container within this account to store the backups.
Azure SQL Database and Server: This is the database you will back up, along with its hosting server. Ensure that you grant Azure services permission to access the server where this database is hosted by selecting “Allow Azure services and resources to access this server” within the networking section, as illustrated in the following documentation: Network Access Controls – Azure SQL Database & Azure Synapse Analytics | Microsoft Learn
Azure Automation account and PowerShell Workflow runbook: These components are utilized to set up automatic backups and their scheduling. Make sure to use PowerShell version 7.2 or above when creating the runbook. To learn more about setting up a PowerShell Workflow runbook in Azure Automation, please check the following documentation: Tutorial – Create a PowerShell Workflow runbook in Azure Automation | Microsoft Learn
Setup
Once all the necessary prerequisites are in place, you should navigate to the runbook and click on “Edit.” Then choose the “Edit in portal” option as illustrated below:
The editor interface for the runbook will launch, displaying the runbook you created earlier. Enter the following code into the editor area:
# Connect to Azure with system-assigned managed identity
Connect-AzAccount -Identity
# set and store context
$AzureContext = Set-AzContext -SubscriptionId "*****"
# Resource group name
$resourceGroup = "*****"
# Storage account name that will have the backups
$storageAccountName = "*****"
# Storage account access key that will have the backups
$storageKey = "*****"
# Container name that will have the backups
$containerName = "*****"
# storage blob uri with the datetime
$storageUri = "https://*****.blob.core.windows.net/*****/db-$(Get-Date -UFormat "%Y-%m-%d_%H-%M-%S").bacpac"
#Storage access key type
$storageKeyType = "StorageAccessKey"
#SQL server name
$server_name ="*****"
#Database name to be exported
$SQL_db = "*****"
#SQL Auth Username
$SQL_username = "*****"
#SQL Auth Password
$SQL_secure_secret = ConvertTo-SecureString -String "*****" -AsPlainText -Force
# Run the Export job with the required parameters
New-AzSqlDatabaseExport -ResourceGroupName $resourceGroup -ServerName $server_name -DatabaseName $SQL_db -StorageKeyType $storageKeyType -StorageKey $storageKey -StorageUri $storageUri -AdministratorLogin $SQL_username -AdministratorLoginPassword $SQL_secure_secret
Ensure you replace all the ***** placeholders with the appropriate values for the solution you’ve designed, then press the “Save” button located at the top left corner. The parameters given relate to details on the subscription and resource group, as well as information on the SQL database, server, and storage account. You’ll find explanations for each parameter in the code snippet provided above.
If you have additional security requirements, you can store the secrets used in the script within an Azure Key Vault and retrieve them as needed within the PowerShell script. This approach ensures that sensitive information is securely managed and reduces the risk of exposure.
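For example, a minimal sketch of retrieving the SQL credential from Key Vault inside the runbook could look like the following; the vault name "myKeyVault" and secret name "SqlAdminPassword" are placeholders, and it assumes the automation account’s managed identity has been granted permission to read secrets:
# Retrieve the SQL admin password from Azure Key Vault instead of hard-coding it
# "myKeyVault" and "SqlAdminPassword" are placeholder names - replace with your own
$SQL_secure_secret = (Get-AzKeyVaultSecret -VaultName "myKeyVault" -Name "SqlAdminPassword").SecretValue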
Select the test button on the toolbar, which will display the test pane allowing you to test the script prior to scheduling. Hit the start button and monitor the requests; if the run is successful, you should observe an outcome like this:
You can monitor the progress of backups and access the export history through the SQL server in the “Import/Export history” section, as illustrated below:
After the backup finishes, it will appear in the storage account with the datetime suffix provided by the script:
After verifying the script operates correctly, you may go ahead with publishing the runbook and linking it to a schedule based on your requirements, according to the following documentation:
You can initiate the automation by pressing the start button, schedule it to run automatically, or set a webhook to trigger the process. Remember that you can only run one export job at a time. If you attempt to run two jobs simultaneously, one will not succeed and you’ll receive an error message stating that another export job is already in progress.
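For illustration, a minimal sketch of publishing the runbook and attaching a daily schedule with Az PowerShell might look like the following; the resource group, automation account, runbook, and schedule names are all placeholders:
# Publish the runbook, create a daily schedule, and link the two (placeholder names throughout)
Publish-AzAutomationRunbook -ResourceGroupName "myRG" -AutomationAccountName "myAutomationAccount" -Name "SqlBackupRunbook"
New-AzAutomationSchedule -ResourceGroupName "myRG" -AutomationAccountName "myAutomationAccount" -Name "DailySqlBackup" -StartTime (Get-Date).AddHours(1) -DayInterval 1
Register-AzAutomationScheduledRunbook -ResourceGroupName "myRG" -AutomationAccountName "myAutomationAccount" -RunbookName "SqlBackupRunbook" -ScheduleName "DailySqlBackup"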
Recommendations
Establishing a routine backup schedule for a large database over an undefined timeframe can lead to substantial storage consumption and potentially significant costs. It’s important to regularly monitor your storage account and remove unnecessary backups, or consider relocating them to more cost-effective storage tiers, such as cold or archive.
It might be beneficial to consider implementing a storage lifecycle management policy to manage data within the storage account and decrease the costs associated with storing database backups. Lifecycle management can help you create an automated schedule to delete blobs or transition them to a less expensive tier, such as cold or archive, based on creation date. For additional details on storage lifecycle management and instructions for configuration, please consult the provided documentation:
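As an illustration, a minimal sketch of such a policy using Az PowerShell might look like the following; the resource names, blob prefix, and day thresholds are placeholders to adjust to your own retention needs:
# Illustrative lifecycle rule: archive .bacpac backups after 30 days and delete them after 180 days
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToArchive -DaysAfterModificationGreaterThan 30
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action -BaseBlobAction Delete -DaysAfterModificationGreaterThan 180
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "backups/db-" -BlobType blockBlob
$rule = New-AzStorageAccountManagementPolicyRule -Name "expire-sql-backups" -Action $action -Filter $filter
Set-AzStorageAccountManagementPolicy -ResourceGroupName "myRG" -StorageAccountName "mystorageaccount" -Rule $rule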
If you have soft delete enabled on the storage account, make sure that the retention period is set appropriately to avoid incurring additional charges for retaining soft-deleted data over an extended period.
The article discusses a problem where numerous messages end up in the dead letter queue (DLQ) when the JMS service bus consumer connects to the Azure Service Bus using Qpid jars. The reason for the messages being dead-lettered is that they have reached the maximum delivery count.
The fundamental issue stems from Apache Qpid’s message handling. Qpid utilizes a local buffer to prefetch messages from the Azure Service Bus, storing them prior to delivery to the consumer. The complication occurs when Qpid prefetches an excessive number of messages that the consumer is unable to process within the lock duration. Consequently, the consumer is unable to acknowledge or finalize the processing of these messages before the lock expires, leading to an accumulation of messages in the Dead Letter Queue (DLQ).
To address this problem, it is crucial to either turn off Qpid’s local buffer or modify the prefetch count. Disabling prefetching is achievable by setting jms.prefetchPolicy.all=0 in the JMS client. This configuration allows the JMS client to directly consume messages from the Azure Service Bus, circumventing Qpid’s local buffer. Consequently, the consumer can process messages at a suitable pace, guaranteeing smooth processing and issue-free completion.
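For reference, with the Qpid JMS client the prefetch policy is typically supplied as an option on the connection URI; a minimal, illustrative example (the namespace is a placeholder) could look like this:
amqps://<your-namespace>.servicebus.windows.net?jms.prefetchPolicy.all=0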
The modern rich text editor is now our advanced editor for an end-to-end enhanced authoring experience. As part of this advancement, we are phasing out the current rich text editor and integrating its capabilities into the modern rich text editor.
Key dates
Disclosure date: August 9, 2024. The modern rich text editor for non-customized controls was delivered in April. By October, the modern rich text editor for customized controls will also be delivered.
End of support: October 31, 2024. After this date, no new enhancements will be made to the current rich text editor, and the modern rich text editor will be generally available.
End of life: April 30, 2025. After this date, the current rich text editor will be taken out of service.
Next Steps
We strongly encourage customers to leverage the modern rich text editor, which will be enriched with all the editor experiences. The modern rich text editor is designed to align with the familiar and intuitive interfaces of Microsoft applications such as Outlook, Word, and OneNote. This update introduces a modern design, dark mode and new Copilot features to enhance your text editing capabilities. This ensures reliability and alignment with our commitment to cost-efficiency and user-centric innovation. Learn more about the modern rich text editor.
Please contact your Success Manager, FastTrack representative, or Microsoft Support if you have any additional questions.
“Teams meetings have evolved significantly over the past few years, with the end of live Team events, the introduction of Town Halls, and the strengthening of Teams Premium features. It’s not always easy to understand what is and isn’t included in Teams Premium licences, or to explain the benefits of purchasing this new plan. This documentation and its comparison tables make my job a lot easier today.”
“Using Azure Pipelines for CI/CD in a closed network environment requires the use of self-hosted agents, and managing these images was a very labor-intensive task. Even with automation, updates took 5-6 hours and had to be done once or twice a month. It was probably a challenge for everyone.
In this context, the announcement of the Managed DevOps Pools on this blog was very welcome news. It’s not just me; it’s likely the solution everyone was hoping for, and I am very much looking forward to it.”
Azure Monitor managed service for Prometheus provides a production-grade solution for monitoring without the hassle of installation and maintenance. By leveraging these managed services, you can focus on extracting insights from your metrics and logs rather than managing the underlying infrastructure.
The integration of essential GPU metrics—such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies—into Azure Managed Prometheus and Grafana enhances the visualization of actionable insights. This integration facilitates a comprehensive understanding of GPU consumption patterns, enabling more informed decisions regarding optimization and resource allocation.
Azure Managed Prometheus recently announced general availability of Operator and CRD support, which will enable customers to customize metrics collection and add scraping of metrics from workloads and applications using Service and Pod Monitors, similar to the OSS Prometheus Operator.
This blog will demonstrate how we leveraged the CRD/Operator support in Azure Managed Prometheus and used the Nvidia DCGM Exporter and Grafana to enable GPU monitoring.
GPU monitoring
As the use of GPUs has skyrocketed for deploying large language models (LLMs) for both inference and fine-tuning, monitoring these resources becomes critical to ensure optimal performance and utilization. Prometheus, an open-source monitoring and alerting toolkit, coupled with Grafana, a powerful dashboarding and visualization tool, provides an excellent solution for collecting, visualizing, and acting on these metrics.
Essential metrics such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies serve as fundamental indicators of GPU consumption, offering invaluable insights into the performance and efficiency of graphics processing units, and thereby enabling us to reduce our COGs and improve operations.
Using Nvidia’s DCGM Exporter with Azure Managed Prometheus
The DCGM Exporter is a tool developed by Nvidia to collect and export GPU metrics. It runs as a pod on Kubernetes clusters and gathers various metrics from Nvidia GPUs, such as utilization, memory usage, temperature, and power consumption. These metrics are crucial for monitoring and managing the performance of GPUs.
You can integrate this exporter with Azure Managed Prometheus. The section below describes the steps and changes needed to deploy the DCGM Exporter successfully.
Prerequisites
Before we jump straight to the installation, ensure your AKS cluster meets the following requirements:
GPU Node Pool: Add a node pool with the required VM SKU that includes GPU support (see the example command after this list).
Enable Azure Managed Prometheus and Azure Managed Grafana on your AKS cluster.
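For reference, a GPU node pool can be added with the Azure CLI along these lines; the resource group, cluster, and node pool names are placeholders, and the VM size and taint are only examples, so pick a GPU SKU available in your region:
# Illustrative only - replace the names and choose an appropriate GPU VM size
az aks nodepool add --resource-group myRG --cluster-name myAKSCluster --name gpunp --node-count 1 --node-vm-size Standard_NC6s_v3 --node-taints sku=gpu:NoSchedule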
Refactoring Nvidia DCGM Exporter for AKS: Code Changes and Deployment Guide
Updating API Versions and Configurations for Seamless Integration
As per the official documentation, the best way to get started with the DCGM Exporter is to install it using Helm. When installing on AKS with Managed Prometheus, you might encounter the following error:
Error: Installation Failed: Unable to build Kubernetes objects from release manifest: resource mapping not found for name: "dcgm-exporter-xxxxx" namespace: "default" from "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1". Ensure CRDs are installed first.
To resolve this, follow these steps to make necessary changes in the DCGM code:
1. Clone the Project: Go to the GitHub repository of the DCGM Exporter and clone the project or download it to your local machine.
2. Navigate to the Template Folder: The code used to deploy the DCGM Exporter is located in the template folder within the deployment folder.
3. Modify the service-monitor.yaml File: Find the file service-monitor.yaml. The apiVersion key in this file needs to be updated from monitoring.coreos.com/v1 to azmonitoring.coreos.com/v1. This change allows the DCGM Exporter to use the Azure Managed Prometheus CRD.
apiVersion: azmonitoring.coreos.com/v1
4. Handle Node Selectors and Tolerations: GPU node pools often have tolerations and node selector tags. Modify the values.yaml file in the deployment folder to handle these configurations:
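For example, an illustrative set of values.yaml overrides might look like the following; the node label and taint shown here are assumptions and should be adjusted to match how your GPU node pool is actually labeled and tainted:
# Illustrative values.yaml overrides - adjust to your GPU node pool's labels and taints
nodeSelector:
  kubernetes.azure.com/accelerator: nvidia
tolerations:
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"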
Helm: Packaging, Pushing, and Installation on Azure Container Registry
We followed the MS Learn documentation for pushing and installing the package through Helm on Azure Container Registry. For a comprehensive understanding, you can refer to the documentation. Here are the quick steps for installation:
After making all the necessary changes in the deployment folder of the source code, stay in that directory to package the code, and log in to your registry to proceed further.
1. Package the Helm chart and login to your container registry:
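The packaging, login, and push (step 2) commands could look roughly like this, assuming Helm 3.8+ and an Azure Container Registry named <registry> (all placeholders to replace with your own values):
# 1. Package the chart from the chart directory and log in to the registry
helm package .
helm registry login <registry>.azurecr.io --username <username> --password <password>
# 2. Push the packaged chart to the registry as an OCI artifact
helm push dcgm-exporter-<version>.tgz oci://<registry>.azurecr.io/helm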
3. Verify that the package has been pushed to the registry on Azure portal.
4. Install the chart and verify the installation:
helm install dcgm-nvidia oci:///helm/dcgm-exporter -n gpu-resources
#Check the installation on your AKS cluster by running:
helm list -n gpu-resources
#Verify the DCGM Exporter:
kubectl get po -n gpu-resources
kubectl get ds -n gpu-resources
You can now check that the DCGM Exporter is running on the GPU nodes as a DaemonSet.
Exporting GPU Metrics and Configuring Azure Managed Grafana Dashboard
Once the DCGM Exporter DaemonSet is running across all GPU node pools, you need to export the GPU metrics generated by this workload to Azure Managed Prometheus. This is accomplished by deploying a PodMonitor resource. Follow these steps:
Deploy the PodMonitor: Apply the following YAML configuration to deploy the PodMonitor:
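A minimal, illustrative PodMonitor might look like the following; the namespace, label selector, and port name are assumptions that must match your DCGM Exporter deployment:
# Illustrative PodMonitor - match the selector and port to your dcgm-exporter pods
apiVersion: azmonitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: nvidia-dcgm-exporter
  namespace: gpu-resources
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: dcgm-exporter
  podMetricsEndpoints:
    - port: metrics
      interval: 30s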
2. Check if the PodMonitor is deployed and running by executing:
kubectl get podmonitor -n
3. Verify Metrics export: Ensure that the metrics are being exported to Azure Managed Prometheus on the portal by navigating to the “Metrics” page on your Azure Monitor Workspace.
Create the DCGM Dashboard on Azure Managed Grafana
Cloud computing is revolutionizing many areas of technology, including programming, data, artificial intelligence (AI), and security. To help professionals specialize in this constantly evolving field, Microsoft is launching the Azure Infra Girls initiative. The program offers a series of four free, live classes in Spanish, taking place from September 3 to September 24 at 12:30 PM (GMT-6, Mexico City).
During these sessions, participants will have the opportunity to deepen their knowledge of cloud computing and prepare for the AZ-900 (Azure Fundamentals) certification through a learning path of certified courses on Microsoft Learn.
All sessions start according to the Mexico City time zone.
Session: Cloud computing concepts with Microsoft Azure
When: September 3, 12:30 PM Mexico City (GMT-6)
Description: In the first episode of Azure Infra Girls, you will learn the basic concepts of cloud computing. You will understand what public, private, and hybrid clouds are, their benefits, and the types of services, such as IaaS, PaaS, SaaS, and serverless, and how to use Azure to build your applications.
Session: Azure architecture and services
When: September 11, 12:30 PM Mexico City (GMT-6)
Description: In the second session, we will dive deeper into Azure architecture and services, with some hands-on exercises to create an Azure Virtual Desktop and host a resource in Azure.
Session: Azure management and governance
When: September 17, 12:30 PM Mexico City (GMT-6)
Description: In the third session, we will cover Azure management and governance services, looking at cost control, governance features and tools, compliance, and monitoring.
Session: Azure Fundamentals AZ-900 practice exam
When: September 24, 12:30 PM Mexico City (GMT-6)
Description: In this session, we will run a practice exam with questions on the topics covered: cloud concepts, Azure architecture and services, and Azure management and governance.
For those who want to go deeper into cloud computing and prepare for the AZ-900 Azure Fundamentals certification, a complete learning path is available on Microsoft Learn. This path covers the main areas of the certification, letting you study for free and at your own pace. In addition, when you complete the courses, you can earn certificates that can be added to your LinkedIn profile, highlighting your new skills and knowledge.
The Microsoft Learn modules for the AZ-900 certification include:
Knowledge assessments: these assessments give you an overview of the style, wording, and difficulty of the questions you are likely to see on the exam. Through these assessments, you can gauge your readiness, determine where you need additional preparation, and fill knowledge gaps to increase your likelihood of passing the exam.
Exam experience demo: here you can see what the exam will look like before taking it. You will be able to interact with different question types in the same user interface you will use during the exam.
Register and participate
Don’t miss this great opportunity to keep learning and advance your technology career. Join Azure Infra Girls and begin your journey toward specializing in cloud computing with Microsoft. We hope to see you in our live sessions and to help you reach your professional goals in the world of technology.