This article is contributed. See the original author and article here.
At Microsoft, trust is the foundation of everything we do. As more organizations adopt Copilot in Dynamics 365 and Power Platform, we are committed to helping everyone use AI responsibly. We do this by ensuring our AI products deliver the highest levels of security, compliance, and privacy in accordance with our Responsible AI Standard—our framework for the safe deployment of AI technologies.
Take a moment to review the latest steps we are taking to help your organization securely deploy Copilot guided by our principles of safety, security, and trust.
Copilot architecture and responsible AI principles in action
Let’s start with an overview of how Copilot works, how it keeps your business data secure and adheres to privacy requirements, and how it uses generative AI responsibly.
First, Copilot receives a prompt from a user within Dynamics 365 or Power Platform. This prompt could be in the form of a question that the user types into a chat pane, or an action, such as selecting a button labeled “Create an email.”
Copilot processes the prompt using an approach called grounding, which might include retrieving data from Microsoft Dataverse, Microsoft Graph, or external sources. Grounding improves the relevance of the prompt, so the user gets responses that are more appropriate to their task. Interactions with Copilot are specific to each user. This means that Copilot can only access data that the current user has permissions to.
Copilot uses Azure OpenAI Service to access powerful generative AI models that understand natural language inputs, and it returns a response to the user in the appropriate form. For example, a response might be in the form of a chat message, an email, or a chart. Users should always review the response before taking any action.
How Copilot uses your proprietary business data
Responses are grounded in your business content and business data. Copilot has real-time access to both your content and context to generate answers that are precise, relevant, and anchored in your business data for accuracy and specificity. This real-time access goes through our Dataverse platform (which includes all Power Platform connectors), honoring the data loss prevention and other security policies put in place by your organization. We follow the Retrieval-Augmented Generation (RAG) pattern, which augments the capabilities of language models by adding dynamic grounding data to the prompt that we send to the model. Our system dynamically looks up the relevant data schema using our own embedding indexes and then uses the language models to help translate the user's question into a query that we can run against the system of record.
We do not use your data to train language models. We believe that our customers’ data is their data in accordance with Microsoft’s data privacy policy. AI-powered language models are trained on a large but limited corpus of data—but prompts, responses, and data accessed through Microsoft Graph and Microsoft services are not used to train Copilot for Dynamics 365 or Power Platform capabilities for use by other customers. Furthermore, the models are not improved through your usage. This means that your data is accessible only by authorized users within your organization unless you explicitly consent to other access or use.
How Copilot protects business information and data
Enterprise-grade AI, powered by Azure OpenAI Service. Copilot is powered by the trusted and compliant Azure OpenAI Service, which provides robust, enterprise-grade security features. These features include content filtering to identify and block output of harmful content and protect against prompt injections (jailbreak attacks), which are user prompts that provoke the generative AI model into behaving in ways it was trained not to. Azure AI services are designed to enhance data governance and privacy and adhere to Microsoft’s strict data protection and privacy standards. Azure OpenAI also supports enterprise features like Azure Policy and AI-based security recommendations by Microsoft Defender for Cloud, meeting compliance requirements with customer-managed data encryption keys and robust governance features.
Built on Microsoft's comprehensive approach to security, privacy, and compliance. Copilot is integrated into Microsoft Dynamics 365 and Power Platform. It automatically inherits all your company's valuable security, compliance, and privacy policies and processes. Copilot is hosted within the Microsoft Cloud Trust Boundary and adheres to comprehensive, industry-leading compliance, security, and privacy practices. Our handling of Copilot data mirrors our treatment of other customer data, giving you complete autonomy in deciding whether to retain data and determining the specific data elements you wish to keep.
Safeguarded by multiple forms of protection. Customer data is protected by several technologies and processes, including various forms of encryption. Service-side technologies encrypt organizational content at rest and in transit for robust security. Connections are safeguarded with Transport Layer Security (TLS), and data transfers between Dynamics 365, Power Platform, and Azure OpenAI occur over the Microsoft backbone network, ensuring both reliability and safety. Copilot uses industry-standard secure transport protocols when data moves over a network—between user devices and Microsoft datacenters or within the datacenters themselves.
Watch this presentation by James Oleinik for a closer look at how Copilot allows users to securely interact with business data within their context, helping to ensure data remains protected inside the Microsoft Cloud Trust Boundary. You’ll also learn about measures we take to ensure that Copilot is safe for your employees and your data, such as how Copilot isolates business data from the language model so as not to retrain the AI model.
Architected to protect tenant, group, and individual data. We know that data leakage is a concern for customers. Microsoft AI models are not trained on and don’t learn from your tenant data or your prompts unless your tenant admin has opted in to sharing data with us. Within your environment, you can control access through permissions that you set up. Authentication and authorization mechanisms segregate requests to the shared model among tenants. Copilot utilizes data that only you can access. Your data is not available to others.
Committed to building AI responsibly
As your organization explores Copilot for Dynamics 365 and Power Platform, we are committed to delivering the highest levels of security, privacy, compliance, and regulatory commitments, helping you transform into an AI-powered business with confidence.
This article is contributed. See the original author and article here.
The timeline is a crucial tool for users to monitor customer engagements, track activities, and stay updated on record progress. With Generative AI, we’re introducing timeline highlights, enabling users to grasp activity details in milliseconds.
Streamlined timeline highlights revolutionize the way users interact with essential activities such as emails, notes, appointments, tasks, phone calls, and conversations. With a single click, agents gain access to summaries of key events, including records like cases, accounts, contacts, leads, opportunities, and customized entities.
Agents save time with timeline highlights
This new feature optimizes agent productivity, eliminating the need for excessive clicks and extra reading. Agents can efficiently absorb crucial information, enabling faster and more transparent interactions with customers. Users can expand the highlights section in the timeline by clicking on the chevron.
The highlights show relevant items in a clear and concise bulleted format, facilitating quick analysis and easy reference. The copy functionality empowers users to reuse content by pasting it into notes, with the flexibility to make modifications as needed.
In summary, our innovative approach to timelines, driven by generative AI technology, offers users a transformative experience. Consequently, agents can effortlessly track customer engagements and monitor progress with unparalleled speed and accuracy.
The timeline highlights feature is available in apps such as Dynamics 365 Customer Service, Dynamics 365 Sales, Dynamics 365 Marketing, Dynamics 365 Field Service, and custom model-driven Power Apps, providing a unified experience across Dynamics 365.
Timeline highlights are enabled by default. You can enable or disable timeline highlights at the app level and at the form level via the maker portal, make.powerapps.com.
This article is contributed. See the original author and article here.
Healthcare and Life Sciences (HLS) is a demanding and complex field that requires constant innovation, collaboration, and communication. HLS professionals often have to deal with large amounts of data, information, and documentation, which can be overwhelming and time-consuming. Moreover, the COVID-19 pandemic has added more pressure and stress to the already challenging work environment, leading to increased risks of burnout and mental fatigue.
How can HLS professionals cope with these challenges and improve their productivity and well-being? One possible solution is to leverage the power of AI by using Copilot. Copilot is a smart assistant that can help with email overload, summarize information from various sources, generate documentation, and more. Copilot also integrates with applications like Teams, Word, and Outlook, creating a seamless workflow that can enhance your efficiency and creativity.
Check out the ever-growing repository of use case workflows leveraging the power of Copilot.
* Note: all examples are demonstrations for educational purposes only and are not intended for production use. No warranty or support is stated or implied.
This article is contributed. See the original author and article here.
Last updated 4/3/2024 to include v2 Tiers features.
Authors: Faisal Mustafa, Ben Gimblett, Jose Moreno, Srini Padala, and Fernando Mejia.
There are several options for integrating Azure API Management with your Azure Virtual Network (VNet), and it is important to understand them. The options available to you will depend on your network perimeter access requirements and on the tiers and features of Azure API Management.
This blog post aims to guide you through the different options available on both the classic tiers and v2 tiers of Azure API Management, to help you decide which choice works best for your requirements.
TL;DR
Decision tree describing how to choose the right Azure API Management tier based on networking scenarios.
Here is the relevant documentation to implement these tiers and features:
Before we jump into the options and differences, it's worth taking a step back to understand more about how Azure Platform as a Service (PaaS) products work with regard to networking. If you need a refresher, and so we don't repeat ourselves here, we'd ask the reader to spend a few minutes over at Jose's excellent Cloudtrooper blog and his deep-dive post on all things PaaS networking: Taxonomy of Azure PaaS service access – Cloudtrooper. We'll use some of the same labels and terms in this post for consistency.
What is API Management, what tiers are available and why does it matter in relation to networking?
The first thing to remember is that the API Management API Gateway is a Layer 7 (in OSI model terms) HTTP Proxy. Keeping this in mind helps a lot when you think about the networking options available through the different tiers. In simple terms:
An HTTP proxy terminates HTTP connections from any client going to a set of [backend] servers and establishes new HTTP connections to those servers. For most API Management Gateway use cases, the resource would reside close to the [backend] servers it fronts (usually in the same Azure region).
Diagram describing all the components included in Azure API Management, and the difference between inbound and outbound sections.
Why does this matter? When we talk about the available networking options, we talk about features which relate to the initial client connection to API Management (inbound) OR features relating to the connection from API Management to the API backends (outbound). From now on, we will call them inbound and outbound connections, and there are different options/features for each type.
Regarding Azure API Management tiers, we will rely on the following categories:
Consumption tier: the tier that exposes serverless properties.
Classic tiers: the Developer, Basic, Standard, and Premium tiers.
V2 tiers: the Basic v2 and Standard v2 tiers.
Networking scenarios
Let's jump right in. To make it easier to navigate and for you to get the information you need to make the right decisions for your use case, we'll summarize by applicable use case, list the tiers where the functionality is available, and add any applicable notes.
I have no specific networking requirements and just want to keep things simple.
Supported tiers: Consumption, Classic tiers, and V2 tiers.
Of course, there's more to implementing a workload with API Management than just networking features, and still a lot of choice when it comes to an API Management tier that fits your workload and scale requirements. But if you are OK with having inbound and outbound connections going through the Azure backbone or the public internet, any tier of Azure API Management can help you with this scenario. Of course, we recommend securing your endpoints using authentication/authorization mechanisms such as subscription keys, certificates, and OAuth2/OIDC.
Diagram describing what tiers of Azure API Management allow public access for inbound and outbound.
I have a requirement to connect privately to API Management for one or more of my Client Applications
Option 1: Consider deploying a Private Endpoint into API Management.
Supported tiers: Classic tiers.
“Private endpoints allow you to access services privately within your virtual network, avoiding exposure over the public internet.” (Thanks, Microsoft Copilot.)
Deploying a Private Endpoint for inbound connectivity is a good option to support secure client connections into API Management. Remember, in this context the Private Endpoint you deploy for API Management creates an alternative network path into your API Management service instance; it's about facilitating inbound communication (the client connecting to API Management), and it is “one way only,” meaning it doesn't help for scenarios where you also want to connect privately to your backends.
Diagram describing what tiers of Azure API Management allow public access and private endpoint for inbound.
Note: Whilst it's supported to use Private Endpoints in the Premium or Developer tiers, the service must not have been added to a Virtual Network (VNet). This makes private endpoints and the “VNet injection” capability supported by Premium and Developer mutually exclusive. The Basic and Standard tiers can't be added to a Virtual Network.
Option 2: Consider adding your API Management to your VNet.
Supported tiers: Developer tier and Premium tier.
Developer and Premium are the only tiers where you can deploy the service into your virtual network – what we sometimes refer to as “VNet injection” – which allows you to set the inbound access as public or private.
Diagram describing what tiers of Azure API Management allow private access for both inbound and outbound.
As far as the Developer tier is concerned, it is NOT meant for production use, and no production workloads should be deployed on it.
The API backends I want to reach from API Management are private.
Supported tiers: Developer tier, Premium tier, and Standard v2 tier.
For the classic tiers Developer and Premium, as we mentioned before you can deploy the service into your virtual network. This could be in “internal” (private) or external mode.
Diagram describing what tiers of Azure API Management allow private access for both inbound and outbound.
For the v2 tiers, Standard v2 allows you to rely on a feature called “VNet integration” (please note the difference between VNet integration and VNet injection), which allows API Management to “see” into your network and access services through a private IP in the connected virtual network or in peered/connected networks.
Diagram describing what tiers of Azure API Management allow private access for outbound using VNet integration.
I need to connect to API Management privately as well as reach private backends with no public access.
Supported tiers: Developer tier and Premium tier.
Add API Management Premium or Developer to your Virtual Network. The best practice would be to set the mode to “internal” – meaning inbound connectivity is via a private IP, via an internal load balancer.
Diagram describing what tiers of Azure API Management allow private access for both inbound and outbound.
NAT (network address translation) behavior is something that relates to networking and is often asked about, so it would be remiss of us not to add a few lines summarizing it for the different deployment modes:
Inbound
By default, inbound is via a public IP address assigned to the service.
This changes if you opt into using a Private Endpoint (for any of the tiers supporting this feature). Always remember to explicitly turn off the public ingress if you deploy a private endpoint instead and no longer require it.
This also changes if you deploy the Premium (or Developer) tier, added to your Virtual Network (“VNet-injected”), and set the mode to “internal”. In this mode, although a public IP is still deployed for control plane traffic, all data plane traffic will go through the private IP (Internal load balancer endpoint) which is provided in this mode.
Outbound
By default, outbound traffic is via the public IP(s) assigned to the service.
This changes for the Premium (or Developer) tiers when the service is added to your Virtual Network (irrespective of the inbound mode being public or private). In this case:
For internal traffic leaving API Management – for example API Management reaching to a backend service hosted on a downstream VM within the network – there is no SNAT (source network address translation). If the next hop is an NVA (network virtual appliance/firewall) any rules for the source should use the subnet prefix, not individually mapped IPs.
External-bound traffic breaking out (non-RFC 1918) is SNATed to the single-tenant (dedicated) public IP assigned. This includes traffic to other PaaS services via a public endpoint (although you should note that the traffic in this instance stays on the Azure backbone).
For Standard v2 using “VNet integration”:
NAT for internal traffic is the same.
External-bound traffic breaking out is SNATed to one of the hosting stamp's shared public IPs. If you want control over the outbound IP, use an Azure NAT Gateway on the subnet or route via an NVA (network virtual appliance or firewall). The same note as above applies for PaaS-to-PaaS traffic via public IPs.
Note for v2 tiers: API Management control plane and dependency traffic is not seen on the customer's network, which is an enhancement over the classic tiers and simplifies scenarios requiring force-tunnelling / firewall integration.
This article is contributed. See the original author and article here.
Microsoft is excited to participate in Embedded World, April 9-11, 2024 in Nuremberg, Germany, the leading international fair for embedded systems. This event showcases the latest innovations and trends in the field. From hardware and software to tools and services, this event brings together experts and industry leaders from around the globe to share their knowledge and insights. In this blog, we will give you a sneak peek at Microsoft’s presence at the event, including highlights on key topics, and what to expect from Embedded World.
Makers of embedded devices will be interested in Azure Private MEC (Multi-access Edge Compute) for several reasons:
Low Latency and Edge Compute: Azure private MEC provides low-latency and high-bandwidth connectivity combined with highly available edge computing services, for IoT and other edge devices, enabling real-time data processing and decision-making.
Scalability: Azure private MEC allows makers of embedded devices to easily scale their solutions, adding new devices and capabilities as needed.
Security: Azure private MEC provides a secure and isolated environment for processing sensitive data, reducing the risk of data breaches and cyber-attacks.
Device Density: Azure private MEC can support a high density of devices on a manufacturing floor, allowing for seamless connectivity and communication between numerous devices.
Connect with us in Hall 5 Stand 353
Visit us at our booth in Hall 5 to explore our innovative demos and experiences, connect with product and partner experts on featured products, and meet one-on-one with Microsoft leaders.
Microsoft invites attendees to join the following sessions where they will discover more about Azure private MEC and how private 5G is unlocking industry transformation. This is a great opportunity to get practical guidance on how to prepare your organization for innovation and growth, including learning on tools and frameworks to determine when, where, and how to focus on specific use cases emerging across industries today.
Understanding the ROI of a 5G-enabled factory April 9 | 12:00 pm
Intelligent factories leverage advanced technologies and automation to optimize manufacturing processes and enhance overall efficiency. These factories integrate various technologies such as artificial intelligence (AI), machine learning, Internet of Things (IoT), robotics, and data analytics to create a connected and intelligent production environment.
Hands-on Lab with Azure Private 5G Core April 10 | 4:00 pm
April 11 | 1:30 pm
Register for a deep dive session in our Microsoft Learning Center, located across from the Microsoft booth (#5-469) where we will explore:
Azure private MEC solutions and their use in different industry verticals.
Why build applications and devices supporting the private MEC platforms.
Opportunity for growth with 5G and private MEC
Learn more about Azure private MEC
Whether you're an enterprise looking to leverage our solutions, a developer eager to create network-aware applications, or an application ISV seeking to collaborate with Microsoft on MEC solutions, we invite you to complete this form to get contacted by a member of the private MEC team.
To learn how Microsoft is helping organizations embrace 5G with modern connected applications, sign up for news and updates delivered to your inbox.
This article is contributed. See the original author and article here.
We are announcing two important updates for users of Copilot for Microsoft 365. First, we are bringing priority access to the GPT-4 Turbo model to work with both web and work data. Second, later this month we are bringing expanded image generation capabilities in Microsoft Designer.
This article is contributed. See the original author and article here.
In the ever-evolving landscape of generative AI, a copilot isn’t just a companion that makes tasks that you’re already doing at work easier, but it’s quickly becoming a transformative force reshaping the very core of how things are done.
After shipping 13 publicly available Microsoft Copilot features in Microsoft Dynamics 365 Customer Insights that enable marketing teams to ask Copilot questions about their data, receive ideas for content from key points they want to make, define audiences and journeys in everyday words, understand their data quality, or get a summarized answer from multiple sources on how to use a feature within Dynamics 365 Customer Insights (to name a few), we realized that Copilot can help our marketing teams not just with tasks, but can completely change their workflow.
This workflow in Dynamics 365 Customer Insights will enable marketers to deliver unparalleled customer experiences (CXs). This could encompass a myriad of scenarios, from a marketing campaign for a new product or promotion, to managing how companies interact with their customers in key moments at scale.
The new Copilot in Dynamics 365 Customer Insights
The new Copilot in Customer Insights is now available in preview.
Take for example a theme park—as customers scan their tickets to ride attractions, they can receive personalized messages and notifications across their preferred channels, be it in-app, web, text message, or email. This not only enhances the customer experience by helping them make the most of their visit but also enables the company to maximize customer spending.
Companies often spend a tremendous amount of time building these campaigns and experiences. With the growing demand for digital and mobile experiences, coupled with the need to use customer data from an expanding array of disconnected systems to personalize and trigger the experiences, the time and cost to deliver continues to rise.
Our conversations with numerous companies illuminated a common pain point—the process to deliver a CX project or campaign can extend from 12 to 15 weeks (about three and a half months). There are no signs this will decrease in length.
It’s not surprising why. The current marketing workflow resembles a complex team sport, requiring multiple contributors focused on different aspects—audience, journey, content, and the delivery channel.
Most teams create campaigns starting with a blank canvas; assembling, testing, and going live; then tracking the analytics to ensure everything is going well and any drop-off points are addressed. The complexity and coordination across people affect more than the time to market; they affect the quality and level of personalization delivered to the end customer.
Copilot in Dynamics 365 Customer Insights will completely change the game.
“At Campari Group, we’re excited by the opportunities these new Copilot capabilities will bring us to streamline our digital marketing processes—driving collaboration and enhancing our ability to deliver a truly engaging consumer experience, whilst staying one step ahead of the competition!”
David Hand– Global IT Manager, CRM, Campari
Companies have data that ground Copilot in brand styles, tone, language, and imagery. Briefs contain a lot of key information about the intent, success, branding, and key points. Over time, data from prior projects or campaigns will drive the continuous improvement of business results.
Armed with this data, what if you could describe the outcomes you want, and Copilot in Dynamics 365 Customer Insights could take the lead?
Our vision has four key areas of innovation:
We're removing complex UIs that are difficult to navigate and use; instead, Copilot will provide dynamic user experiences (UX) that come to you based on what you are trying to achieve. Examples of this are the dynamic project board and analytics.
Instead of marketers having to start from a word or pixel for every element of the project or campaign, Copilot provides data-driven suggestions that allow you to curate your project instead of having to create every piece. Everything starts connected, and Copilot keeps it connected and in sync by making any downstream changes, saving the marketer time and eliminating a class of errors caused by partial changes. Choose from suggested images, emails, audiences, journeys, and more instead of having to create each element from scratch. We don't believe any of the suggestions will be 100% of what the marketing team needs; what we do know is that even if a suggestion is only 60% of the way there, it will save 60% of the time needed to launch the project.
We're removing the constraint of resources limiting the quality of experience and depth of personalization. Instead of personalizing to four cohorts, Copilot will personalize to 64 cohorts or more. But then, you need confidence in what you're delivering to customers. We are innovating in new UX and Copilot capabilities that will allow marketing teams to review and approve at Copilot scale: instead of having to review 100 variations across the journey, Copilot will tell you the key variations to review and tweak, and it will make corrections on your behalf.
Today, after a project or campaign goes live, you need to assign a person to track the analytics and find where customers are dropping off. That person then needs to work with the right people to improve the results. Now, Copilot can track the analytics on behalf of the marketing team and proactively notify of optimizations with suggested options that allow the team to curate rather than create and deliver business success.
We're partnering with Typeface to deliver this vision, and it plays a key role in enabling on-brand, multimodal content personalization at a scale that simply was not possible in the past. Typeface understands your brand in depth and utilizes that understanding to deliver text and images that align with your company's brand guidelines, enabling one-on-one personalization with on-brand images at a scale that wasn't viable in terms of time and cost before.
“Copilot holds the potential to be a real game-changer. Its ability to seamlessly align our business goals with community values has the potential to save us valuable time on internal processes. This efficiency translates to quicker iterations and more frequent connections with our audience. It’s not just shaping up to be a tool but a strategic advantage.”
Martin Nicholson–Digital Engagement Manager, Rare Ltd., Xbox – Sea of Thieves
This innovation is not years away, not months, not even weeks. Today, we’re happy to announce that our first public preview focusing on curation rather than creation is now available in Dynamics 365 Customer Insights in English, in all regions. Sign up to be one of the first to preview and experience the new capabilities.
We invite you to explore this transformative journey as Copilot takes the reins, opening a new era in marketing workflows. Join us in redefining how work gets done.
This article is contributed. See the original author and article here.
Microsoft has established itself as a leading solution for vulnerability risk management (VRM) by leveraging its industry-leading threat intelligence and security expertise. Microsoft Defender Vulnerability Management covers the end-to-end VRM lifecycle to identify, assess, prioritize, and remediate vulnerabilities across platforms and workloads. This makes it an ideal tool for an expanded attack surface, taking advantage of our context-aware, risk-based prioritization, breach likelihood predictions, and business context to prioritize vulnerabilities across a portfolio of managed and unmanaged devices.
Figure: Platform and workload coverage in Defender Vulnerability Management
We are excited to announce that as of April 1, 2024, all Microsoft Defender Vulnerability Management capabilities are available to government cloud customers.
Organizations across commercial, education, and government environments can now get the complete set of capabilities for their environment. Defender Vulnerability Management has both core and premium capabilities: the core capabilities are included as part of Defender for Endpoint Plan 2, and the premium capabilities are available as an add-on. For organizations that are not yet on Defender for Endpoint Plan 2, we also provide a standalone offer that includes both core and premium. For organizations looking for server protection in their hybrid cloud environment, the vulnerability management core capabilities are available in Defender for Servers Plan 1 and the premium capabilities in Defender for Servers Plan 2.
Figure: Availability of core and premium capabilities across offerings that include Defender Vulnerability Management for endpoints and servers.
More information about the Defender Vulnerability Management premium capabilities now available in GCC, GCC High, and DoD can be found in these blogs:
This article is contributed. See the original author and article here.
Getting customer signoff on a completed Work Order is a key job to be done for any service technician. Earlier this year, Dynamics 365 Field Service released a new and improved signature control to capture customer signatures on mobile devices. It supports drawing a signature on the screen as well as typing a name in lieu of a drawn signature. This enhancement was a key customer request, and also helps the scenario to be more accessible for our users.
How it works
Technicians in the field can open the form that has been configured to capture the customer signature on the Field Service mobile app and hand over the device to the customer.
Drawing the signature on the screen
Technicians will then be able to choose how they want to enter the signature, either drawing it on the screen with their finger or stylus pen, or by typing it on the keyboard.
Typing the signature using keyboard
How to configure a form to add the control
If you use default forms provided by Microsoft, the existing signature controls have already been updated, and there’s no action needed to configure it.
For example, the Booking and Work Order form on the Bookable Resource Booking entity contains the Signature field on the Notes tab. By opening this form in Power Apps, you will find the associated Signature Control component.
Add the signature control to a custom form
Follow these steps to add the new control to a custom form.
Edit a custom form and add the Signature field (or your custom Multiline text field where you store the customer signature) on your form. Optionally, hide the label, so the control can use all the available space on screen.
Expand the Components section at the bottom of the field detail pane and add the Signature Control component.
If the Signature Control is not in the list, select Get more components.
Note: The new Signature Control replaces the Pen input control.
Select the Signature Control in the list of controls to add it.
Save and Publish your form to make it available to your technicians.
We hope that these new enhancements in Field Service mobile will make your work even more productive. We would love to hear your feedback and suggestions on how to improve the product. Please feel free to leave comments in the Dynamics 365 Community Forum or suggest features in the Ideas portal.
This article is contributed. See the original author and article here.
Effective cost management in Azure Monitor and Azure Log Analytics is essential for controlling cloud expenditures. It involves strategic measures to reduce costs while maximizing the value derived from ingested, processed, and retained data. In Azure, achieving this balance entails adopting efficient data ingestion methods, smart retention policies, and judicious use of table transformations with Kusto Query Language (KQL).
Understanding the impact of data management practices on costs is crucial since each byte of data ingested and stored in Azure Log Analytics incurs expenses. Table transformations—such as filtering, projecting, aggregating, sorting, joining, and dropping data—are a great way to reduce storage and ingestion costs. They allow you to filter or modify data before it's sent to a Log Analytics workspace, reducing both ingestion costs and long-term storage costs.
This document will explore four key areas to uncover strategies for optimizing the Azure Monitor and Azure Log Analytics environment, ensuring cost-effectiveness while maintaining high performance and data integrity. Our guide will provide comprehensive insights for managing cloud expenses within Azure services.
Key Areas of Focus:
Ingestion Cost Considerations: The volume of data ingested primarily influences costs. Implementing filters at the source is crucial to capture only the most relevant data.
Data Retention Strategies: Effective retention policies are vital for cost control. Azure Log Analytics allows automatic purging of data past certain thresholds, preventing unnecessary storage expenses.
Optimization through Transformations: Refining the dataset through table transformations can focus efforts on valuable data and reduce long-term storage needs. Note that these transformations won’t reduce costs within the minimum data retention period.
Cost Management Practices: Leveraging Azure Cost Management and Billing tools is crucial for gaining insight into usage patterns. These insights inform strategic adjustments, aligning costs with budgetary limits.
1) Ingestion Cost Considerations:
Efficient data ingestion within Azure Monitor and Log Analytics is a balancing act between capturing comprehensive insights and managing costs. This section delves into effective data ingestion strategies for Azure’s IaaS environments, highlighting the prudent use of Data Collection Rules (DCRs) to maintain data insight quality while addressing cost implications.
Data ingestion costs in Azure Log Analytics are incurred at the point of collection, with volume directly affecting expenses. It’s imperative to establish a first line of defense against high costs at this stage. Sampling at the source is critical, ensuring that applications and resources only transmit necessary data. This preliminary filtering sets the stage for cost-effective data management. Within Azure’s environment, DCRs become a pivotal mechanism where this essential data sampling commences. They streamline the collection process by specifying what data is collected and how. However, it’s important to recognize that while DCRs are comprehensive, they may not encompass all types of data or sources. For more nuanced or complex requirements, additional configuration or tools may be necessary beyond the standard scope of DCRs.
Navigating Azure Monitor Ingestion in IaaS:
Azure Virtual Machines (VMs) provide a spectrum of logging options, which bear on both the depth of operational insights and the consequent costs. The strategic use of DCRs, in concert with tools like Log Diagnostic settings and Insights, is essential for proficient monitoring and management of VMs.
A) Log Diagnostic Settings:
When enabling Log Diagnostic Settings in Azure, you are presented with the option to select a Data Collection Rule. Although you are not given an option to modify the collection rule here, you can access the DCR settings by navigating to the Azure Monitor service section. DCRs help tailor what logs and metrics are collected. They support routing diagnostics to Azure Monitor Logs, Storage, or Event Hubs and are valuable for detailed data needs like VM boot logs or performance counters.
To minimize costs with DCRs:
Filter at Source: DCRs can enforce filters to send only pertinent data to the workspace. To modify the filters, navigate to the Azure portal, select Azure Monitor, under Settings select Data Collection Rules, select the collection rule you want to modify, and click on Data Sources; here you can modify what is collected. Some items, such as Microsoft-Perf, allow you to add a transformation at this level (see the transformation sketch after this list).
Efficient Collection: DCRs can reduce collection frequency or focus on key metrics, which may require additional insights for complex data patterns. In the Azure portal, under the collection rule, select the data source, such as Performance Counters; here you can adjust the sample rate (frequency) of data collection, such as a CPU sample rate of 60 seconds, and adjust the counters based on your needs.
Regular Reviews: While DCRs automate some collection practices, manual oversight is still needed to identify and address high-volume sources.
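As a rough illustration of the kind of filtering a DCR transformation can apply to a performance-counter stream, here is a minimal KQL sketch. The column names (ObjectName, CounterName) are assumed to match the Perf table schema exposed to the transformation, and the counters chosen are only examples; verify both against your own data collection rule before relying on anything like this.
// Keep only the processor and memory counters actually used downstream;
// everything filtered out here is never ingested, so it is never billed.
source
| where (ObjectName == "Processor" and CounterName == "% Processor Time")
    or (ObjectName == "Memory" and CounterName == "Available MBytes")
Anything dropped by this filter never reaches the workspace, so it incurs neither ingestion nor retention charges.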
B) Insights (Azure Monitor for VMs):
Purpose: Azure VM Insights is an extension of Azure Monitor designed to deliver a thorough monitoring solution, furnishing detailed performance metrics, visual dependency maps, and vital health statistics for your virtual machines.
Details: Leveraging the Log Analytics agent, Azure VM Insights captures and synthesizes data from your VMs, offering a cohesive dashboard that showcases CPU, memory, disk, and network performance, alongside process details and inter-service dependencies.
Use Cases: Azure VM Insights is pivotal for advanced performance monitoring and diagnostics. It enables the early detection of performance issues, aids in discerning system alterations, and proactively alerts you to potential disruptions before they manifest significantly.
To enable VM Insights, select the data collection rule that defines the Log Analytics workspace to be used.
Cost-saving measures include:
Selective Collection: DCRs ensure only essential metrics are collected, yet understanding which metrics are essential can require nuanced analysis.
Metric Collection Frequency: Adjusting the frequency via DCRs can mitigate overload, but determining optimal intervals may require manual analysis.
Use Automation and Azure policy for Configuration: The cornerstone of scalable and cost-effective monitoring is the implementation of standardized configurations across all your virtual machine (VM) assets. Automation plays a pivotal role in this process, ensuring that monitoring configurations are consistent, error-free, and aligned with organizational policies and compliance requirements.
Azure Policy for Monitoring Consistency: Azure Policy is a service in Azure that you can use to create, assign, and manage policies. These policies enforce different rules over your resources, so those resources stay compliant with your corporate standards and service level agreements. Azure Policy can ensure that all VMs in your subscription have the required monitoring agents installed and configured correctly.
You can define policies that audit or even deploy particular settings like log retention periods and specific diagnostic settings, ensuring compliance and aiding in cost control. For example, a policy could be set to automatically deploy Log Analytics agents to any new VM that is created within a subscription. Another policy might require that certain performance metrics are collected and could audit VMs to ensure that collection is happening as expected. If a VM is found not to be in compliance, Azure Policy can trigger a remediation task that brings the VM into compliance by automatically configuring the correct settings.
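Azure Policy handles deployment and remediation; as a complementary check (a suggestion on our part, not a required step), a short KQL query against the Heartbeat table can confirm which machines are actually reporting to the workspace, assuming the deployed agents send heartbeats there.
// Machines that reported a heartbeat in the last 24 hours, with their most recent report time.
Heartbeat
| where TimeGenerated > ago(24h)
| summarize LastHeartbeat = max(TimeGenerated) by Computer, OSType
| sort by LastHeartbeat desc
Machines you expect to see but that are missing from this list are candidates for the policy-driven remediation described above.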
C) Logs (Azure Monitor Logs):
Purpose: Azure Monitor Logs are pivotal for storing and analyzing log data in the Log Analytics workspace, leveraging Kusto Query Language (KQL) for complex queries.
Cost Control in Detail: While Azure Monitor Logs are adept at aggregating data from diverse sources, including VMs and application logs, effective cost management is essential. DCRs control the collection of logs for storage and analysis in Log Analytics, and the same collection guidance applies.
Azure Monitor Basic Logs: Azure Monitor Logs offers two log plans that let you reduce log ingestion and retention costs and take advantage of Azure Monitor's advanced features and analytic capabilities based on your needs. The default plan for tables in an Azure Log Analytics workspace is “Analytics”; this plan provides full analysis capabilities, makes log data available for queries, and provides features such as alerts and use by other services. The “Basic” plan lets you save on the cost of ingesting and storing high-volume verbose logs in your Log Analytics workspace for debugging, troubleshooting, and auditing, but not for analytics and alerts. The retention period is fixed at eight days.
– From the Log Analytics workspace menu select Tables
– Select the context menu for the table you want to configure and select “manage table”
– From the table plan dropdown on the table configuration screen, select “Basic” or “Analytics”.
– Not all tables support the Basic plan; for a list of supported tables, please visit the documentation listed at the end of this document.
– Select Save.
2) Data Retention Strategies:
Effective retention policies play a vital role in cost control. Azure Log Analytics enables the automatic purging of data past certain retention thresholds, avoiding unnecessary storage expenses for data that is no longer needed. Azure Monitor Logs retain data in two states: interactive retention, which lets you retain Analytics logs for interactive queries for up to 2 years, and archive, which lets you keep older, less used data in your workspace at a reduced cost. You can access data in the archived state by using search jobs and restore, and you can keep data in the archived state for up to 12 years.
Purpose: Implementing well-defined data retention policies is essential to balance the accessibility of historical data with cost management in Azure Log Analytics. The purpose is to retain only the data that adds value to your organization while minimizing storage and associated costs.
Automated Purging: Azure Log Analytics facilitates cost control through automated data purging. Set retention policies to automatically delete data that exceeds your specified retention threshold, ensuring you’re not paying for storage you don’t need.
Retention Policy Design:
Assessment of Data Value: Regularly evaluate the importance of different data types and their relevance over time to determine the appropriate retention periods.
Compliance Considerations: Ensure that retention periods comply with regulatory requirements and organizational data governance policies.
Cost Reduction Techniques:
Reduction in Retention Period: By retaining only necessary data, you reduce the volume of data stored, leading to direct cost savings on storage resources. Some techniques include data purging, data deduplication, data archiving and life-cycle management policies.
Setting the Global Retention Period: Navigate to the Azure portal and select the Log Analytics Workspace. In the Settings, locate Usage and Estimated Costs, select Data Retention and specify the retention period. This will set the retention period globally for all tables in a Log Analytics workspace.
Setting the Per-Table Retention Period:
You can also specify retention periods for each individual table in the Log Analytics workspace. In the Azure portal, navigate to and select the Log Analytics workspace. In the Settings, select Tables; at the end of each table row, select the three dots and select Manage Table, where you can change the retention settings for the table. If needed, you can reduce the interactive retention period to as little as four days using the API or CLI.
Interactive and Archive Retention Period:
Interactive retention lets you retain Analytics logs for interactive queries for up to 2 years. From the Log Analytics workspaces menu in the Azure portal, select your workspace, then select Tables. Select the context menu for the table you want to configure and select Manage Table. Configure the interactive retention period (for example, 30 days), then configure the total retention period; the difference between the interactive period and the total period is the archive period. This difference will show up in the configuration menu: blue for the interactive period and orange for the archive period.
Automatic Data Purging: If you set the data retention period to 30 days, you can purge older data immediately by using the immediatePurgeDataOn30Days parameter in Azure Resource Manager. Workspaces with a 30-day retention might keep data for 31 days if this parameter is not set.
Data Deduplication: Azure Log Analytics workspaces do not offer built-in data deduplication features; however, you can implement deduplication as part of the ingestion process, before sending the data to Azure Log Analytics, using an Azure Function or a logic app.
Move Older Data to Azure Blob Storage Using Data Export: Data export in a Log Analytics workspace lets you continuously export data for selected tables in your workspace. The data can be exported to a storage account or Azure Event Hubs. Once the data is in a storage account, it can use lifecycle management policies. Another benefit of exporting data is that smaller data sets result in quicker query execution times and potentially lower compute costs.
3) Optimization Through Transformations:
The primary purpose of data transformations within Azure Log Analytics is to enhance the efficiency of data handling, by honing in on the essential information, thus refining the datasets for better utility. During this process, which occurs within Azure Monitor’s ingestion pipeline, data undergoes transformations after the source delivers it but before it reaches its final destination (LAW). This key step not only serves to reduce data ingestion costs by eliminating extraneous rows and columns but also ensures adherence to privacy standards through the anonymization of sensitive information. By adding layers of context and optimizing for relevance, the transformations offer enriched data quality while simultaneously allowing for granular access control and streamlined cost management.
There are two ways to do transformations. One is at the data collection rule level, which means you select only the items you need, such as the Windows performance counters from a VM running the Windows OS in Azure. The second option is to do a transformation at the table level in the Azure Log Analytics workspace (LAW).
Transformation Process:
Data Selection: Transformations are defined in a data collection rule (DCR) and use a Kusto Query Language (KQL) statement that is applied individually to each entry in the incoming data and creates output in the structure expected by the destination.
Table Transformations: Utilize Kusto Query Language (KQL) to perform transformations on specific tables within the Azure Log Analytics workspace. Not all tables support transformations; please check the documentation for a complete list.
As an example, to add a table transformation for the ‘events’ table in Azure Log Analytics for cost optimization, you could perform the following steps:
Navigate to the Azure portal
Go to your Log Analytics Workspaces
Select the workspace
Under Settings select Tables.
Under the tables panel select the three dots to the right of the table row and click on “create transformation”
– Select a Data Collection Rule
– Under the Schema and transformation select “Transformation editor”
Source will show all data in the table, and a KQL query will allow you to select and project only the data needed.
source
| where severity == "Critical"
| extend Properties = parse_json(properties)
| project
    TimeGenerated = todatetime(["time"]),
    Category = category,
    StatusDescription = StatusDescription,
    EventName = name,
    EventId = tostring(Properties.EventId)
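Transformations can also drop columns before ingestion, which is how the column-trimming and anonymization benefits mentioned earlier are typically realized. The sketch below is only illustrative: the column names after project-away are hypothetical placeholders, so substitute the columns that actually exist in your table's schema.
source
| where severity in ("Critical", "Error")
// Hypothetical columns: drop bulky or sensitive fields that are never queried.
| project-away RawPayload, CallerIpAddress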
Cost Reduction Techniques:
Reduced Storage: Set up data collection rules to capture only the desired data, and set up table transformations to allow only the required data into the Log Analytics workspace.
Regular Revision: Continuously evaluate and update transformation logic to ensure it reflects the current data landscape and business objectives.
4) Cost Management Practices:
The primary objective of cost management is finding out where the charges are coming from and figuring out ways to optimize, either by reducing ingestion at the source or by adopting some or all of the strategies outlined in this document. The primary tool for this in Azure is Azure Cost Management and Billing, which gives you a clear and actionable view of your Azure expenditure. These tools provide critical insights into how resources are consumed, enabling informed decision-making for cost optimization. In addition to the strategies outlined already, the following are other cost management techniques:
Cost Control Mechanisms:
Budgets and Alerts: Set up budgets for different projects or services and configure alerts to notify you when spending approaches or exceeds these budgets.
Commitment Tiers: Commitment tiers provide a discount on your workspace ingestion costs when you commit to a specific amount of daily data. Commitments start at 100 GB per day at a 15% discount from pay-as-you-go pricing, and as the committed amount increases, the percentage discount grows as well. To take advantage of these, navigate to the Azure portal, select Log Analytics workspaces, select your workspace, under Settings select Usage and estimated costs, and scroll down to see the available commitment tiers. A usage query like the sketch after this list can help estimate your average daily ingestion before choosing a tier.
Log Analytics Workspace Placement: Thoughtful placement of Log Analytics workspaces is important and can significantly impact expenses. Start with a single workspace to simplify management and querying. As your requirements evolve, consider creating multiple workspaces based on specific needs such as compliance. Regional placement should also be considered to avoid egress charges. Creating separate workspaces in each region might reduce egress costs, but consolidating into a single workspace could allow you to benefit from commitment tiers and further cost savings.
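To ground the commitment tier decision in actual numbers, the following is a minimal sketch of a usage query. It assumes the standard Usage table, where the Quantity column reports ingested volume in MB; confirm the columns and units against your own workspace before acting on the result.
// Average billable ingestion per day over roughly the last month, converted to GB.
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1024.0 by bin(TimeGenerated, 1d)
| summarize AvgDailyGB = avg(DailyGB)
If the average sits comfortably above 100 GB per day, evaluating a commitment tier is usually worthwhile.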
Implementation Strategies:
Tagging and Grouping: Implement resource tagging to improve visibility and control over cloud costs by logically grouping expenditures.
Cost Allocation: Allocate costs back to departments or projects, encouraging accountability and cost-conscious behavior. To find data volume by Azure resource, resource group, or subscription, you can use KQL queries such as the following from the Logs section of the Log Analytics workspace:
find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _ResourceId, _IsBillable, _BilledSize
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
| sort by BillableDataBytes desc
In conclusion, this document has provided a structured approach to cost optimization in Azure, specifically for services related to Azure Monitor and Log Analytics. Through careful planning of ingestion strategies, data retention policies, transformative data practices, and prudent cost management practices, organizations can significantly reduce their cloud expenditures without sacrificing the depth and integrity of their analytics. Each section outlined actionable insights, from filtering and sampling data at ingestion to employing intelligent retention and transformation strategies, all aimed at achieving a cost-effective yet robust Azure logging environment. By consistently applying these strategies and regularly reviewing usage and cost patterns with Azure Cost Management tools, businesses can ensure their cloud operations remain within budgetary constraints while maintaining high performance and compliance standards.