How to use Azure OpenAI Playgrounds to experiment with Chatbots






1. Navigate to https://portal.azure.com/#home


2. Click “Azure OpenAI”



3. Click to open an existing Azure OpenAI resource.




4. Click “Go to Azure OpenAI Studio”.



5. Open the Chat playground.




6. Click “Select a template”



7. Click “IRS tax chatbot”



8. Click “Continue”



9. Click the “User message” field.



10. Talk to the bot and ask it some questions.
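Once you’re happy with the conversation, the playground’s “View code” option can generate equivalent code for your app. As a rough illustration, here is a minimal Python sketch using the openai package (v1+); the endpoint, API key, deployment name, and system message are placeholders you must replace with your own values:

# pip install openai
from openai import AzureOpenAI

# Placeholders: substitute your resource endpoint, key, and deployment name.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2023-05-15",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment behind the playground
    messages=[
        {"role": "system", "content": "You are an IRS tax chatbot."},  # template's system message, abbreviated
        {"role": "user", "content": "Do I need to file a tax return?"},
    ],
)
print(response.choices[0].message.content)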




Announcing the Public Preview of Code Optimizations


Code Optimizations: A New AI-Based Service for .NET Performance Optimization


We are thrilled to announce that Code Optimizations (previously known as Optimization Insights) is now available in public preview! This new AI-based service can identify performance issues and offer recommendations specifically tailored for .NET applications and cloud services.


 


What is Code Optimizations?


Code Optimizations is a service within Application Insights that continuously analyzes profiler traces from your application or cloud service and provides insights and recommendations on how to improve its performance.


 


Code Optimizations can help you identify and solve a wide range of performance issues, from incorrect API usages and unnecessary allocations all the way to problems with exceptions and concurrency. It can also detect anomalies whenever your application or cloud service exhibits abnormal CPU or memory behavior.


 


Code Optimizations page


 


Why should I use Code Optimizations?


Code Optimizations can help you optimize the performance of your .NET applications and cloud services by:



  • Saving you time and effort: Instead of manually sifting through gigabytes of profiler data or relying on trial-and-error methods, you can use Code Optimizations to automatically uncover complex performance bugs and get guidance on how to solve them.

  • Improving your user experience: By improving the speed and reliability of your application or cloud service, you can enhance your user satisfaction and retention rates. This can also help you gain a competitive edge over other apps or services in your market.

  • Saving you money: By fixing performance issues early and efficiently, you can reduce the need for scaling out cloud resources or paying for unnecessary compute power. This can help you avoid problems such as cloud sprawl or overspending on your Azure bill.


How does Code Optimizations work?


Code Optimizations relies on an AI model trained on thousands of traces collected from Microsoft-owned services around the globe. By learning from these traces, the model can glean patterns corresponding to various performance issues seen in .NET applications and learn from the expertise of performance engineers at Microsoft. This enables our AI model to pinpoint with accuracy a wide range of performance issues in your app and provide you with actionable recommendations on how to fix them.


 


Code Optimizations runs at no additional cost to you and operates completely offline from the app, so it has no impact on your app’s performance.


 


How can I use Code Optimizations?


If you are interested in trying out this new service for free during its public preview period, you can access it using the following steps:



  1. Sign up for Application Insights if you haven’t already. Application Insights is a powerful application performance monitoring (APM) tool that helps you monitor, diagnose, and troubleshoot your apps.

  2. Enable profiling for your .NET app or cloud service. Profiling collects detailed information about how your app executes at runtime.

  3. Navigate to the Application Insights Performance blade from the left navigation pane under Investigate and select Code Optimizations from the top menu.


 

Link to Code Optimizations from Application Insights: Performance


 


See the documentation for more details.


See the troubleshooting guide for more information.


Fill out this quick survey if you have any additional issues or questions.


 

Automatically disrupt adversary-in-the-middle (AiTM) attacks with XDR


Microsoft has been on a journey to harness the power of artificial intelligence to help security teams scale more effectively. Microsoft 365 Defender correlates millions of signals across endpoints, identities, emails, collaboration tools, and SaaS apps to identify active attacks and compromised assets in an organization’s environment. Last year, we introduced automatic attack disruption, which uses these correlated insights and powerful AI models to stop some of the most sophisticated attack techniques while in progress to limit lateral movement and damage.  


 


Today, we are excited to announce the expansion of automatic attack disruption to include adversary-in-the-middle (AiTM) attacks, in addition to the previously announced public preview for business email compromise (BEC) and human-operated ransomware attacks.


 


AiTM attacks are widespread and can pose a major risk to organizations. We are observing a rising trend in the availability of AiTM phishing kits for purchase or rent, and our data shows that organizations have already been attacked in 2023.


 


During AiTM attacks (Figure 1), a phished user interacts with an impersonated site created by the attacker. This allows the attacker to intercept credentials and session cookies and bypass multifactor authentication (MFA), which can then be used to initiate other attacks such as BEC and credential harvesting. 


 


Automatic attack disruption does not require any pre-configuration by the SOC team. Instead, it’s built into Microsoft’s XDR as a native capability.


Figure 1. Example of an AiTM phishing campaign that led to a BEC attack


 


How Microsoft’s XDR automatically contains AiTM attacks


As with attack disruption of BEC and human-operated ransomware attacks, the goal is to contain the attack as early as possible while it is active in an organization’s environment and reduce its potential damage to the organization. AiTM attack disruption works as follows:


 



  1. High-confidence identification of an AiTM attack based on multiple, correlated Microsoft 365 Defender signals.

  2. An automatic response is triggered that disables the compromised user account in Active Directory and Azure Active Directory.

  3. The stolen session cookie is automatically revoked, preventing the attacker from using it for additional malicious activity. (A sketch of the equivalent manual containment calls follows below.)
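Attack disruption performs these containment steps automatically. For context only, here is a hedged Python sketch of how a SOC could perform equivalent containment manually through Microsoft Graph (disable the account, then revoke its sessions). The tenant, app registration, and user ID are hypothetical placeholders, and the app would need the User.ReadWrite.All application permission:

# pip install azure-identity requests
import requests
from azure.identity import ClientSecretCredential

# Hypothetical app registration with User.ReadWrite.All (application) permission.
cred = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
token = cred.get_token("https://graph.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

user_id = "<compromised-user-object-id>"

# 1. Disable the compromised account.
requests.patch(
    f"https://graph.microsoft.com/v1.0/users/{user_id}",
    headers=headers,
    json={"accountEnabled": False},
).raise_for_status()

# 2. Revoke the user's refresh tokens and session cookies.
requests.post(
    f"https://graph.microsoft.com/v1.0/users/{user_id}/revokeSignInSessions",
    headers=headers,
).raise_for_status()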


Figure 2. An example of a contained AiTM incident, with attack disruption tag


 


To ensure SOC teams have full control, they can configure automatic attack disruption and easily revert any action from the Microsoft 365 Defender portal. See our documentation for more details.


 


Get started



  1. Make sure your organization fulfills the Microsoft 365 Defender prerequisites.

  2. Connect Microsoft Defender for Cloud Apps to Microsoft 365.

  3. Deploy Defender for Endpoint. A free trial is available here.

  4. Deploy Microsoft Defender for Identity. You can start a free trial here.


Learn more


Announcing the General Availability of Azure Monitor HCI Insights


Introduction 


In May 2022, we launched Azure Monitor HCI Insights in public preview. Based on customer feedback during the preview, we improved the performance of the workbooks and added support for the new Azure Monitor Agent, and we are now excited to announce the General Availability (GA) of Azure Monitor HCI Insights.


 


What is HCI Insights? 


Azure Stack HCI Insights is an interactive, fully integrated service which provides health, performance, and usage insights about Azure Stack HCI clusters that are connected to Azure and enrolled in Azure Monitor. In the Azure portal, you can see all your resources and monitor them with Azure Stack HCI Insights.


 


There are some key benefits of using Azure Stack HCI Insights: 



  • It’s managed by Azure and accessed from Azure portal, so it’s always up to date, and there’s no database or special software setup required.  

  • Azure Monitor Agent uses managed identity to interact with Log analytics workspace which ensures secure communication.  

  • It’s highly scalable, capable of loading information for more than 250 clusters across multiple subscriptions at a time, with no boundary limitations on cluster, domain, or physical location.

  • It’s highly customizable. The user experience is built on top of Azure Monitor workbook templates, where you can easily add/remove/edit visualizations and queries. 

  • HCI Insights follows a pay-as-you-go model, which means you pay only for the logs that are collected, and log collection can be edited or removed as needed.


What’s new in GA? 


The new, enhanced Azure Monitor HCI Insights uses the improved Azure Monitor Agent and Data Collection Rules. These rules specify the event logs and performance counters to be collected and store them in a Log Analytics workspace. Once the logs are collected, HCI Insights uses Azure Monitor Workbooks to provide deeper insights into the health, performance, and usage of the cluster.


 


There are a few prerequisites for using Azure Stack HCI Insights:  



  • Azure Stack HCI cluster should be registered with Azure and Arc-enabled. If you registered your cluster on or after June 15, 2021, this happens by default. Otherwise, you’ll need to enable Azure Arc integration.  

  • The cluster must have Azure Stack HCI version 22H2 and the May 2023 cumulative update or later installed.  

  • Enable the managed identity for the Azure resource. For more information, see Enable enhanced management.


Below is a screenshot of the Azure workbook displayed for multiple clusters.  


[Screenshot: Azure workbook for multiple clusters]


You can click on the cluster name, and it will redirect you to the single cluster workbook template with a drill down view and more details as shown below: 


[Screenshot: single-cluster workbook with drill-down view]


Pre-defined workbook templates exist with default views to give you a head start. You can switch between tabs like Health, Servers, Virtual machines, and Storage. Each tab provides data and metrics about the cluster, carefully designed with your needs in mind: health data such as faults and resource status, performance data like IOPS and throughput, and usage data like CPU and memory usage. Moreover, the rich visualizations make the data easier to decipher and give a quick glance at useful insights.



Additional data can easily be collected in the form of event logs or performance counters by adding it to the Data Collection Rule that was created while enabling monitoring for the cluster. Once the data starts flowing, you can use Azure workbooks to visualize it. A workbook provides a set of visualizations like charts, graphs, grids, honeycombs, composite bars, and maps, and it is very convenient to modify. It also allows you to pin the graphs to Azure dashboards, giving a holistic view of resource health, performance, and usage. Sharing the data is easy, too: download it to Excel and derive useful insights from there.


 


Customers also use logs and Insights workbook templates to create alerts. Common customer-created alerts include a cluster node being down, or CPU or memory usage exceeding a set threshold (see the sketch below). You can set up alerts for multiple clusters and integrate third-party solutions like PagerDuty to get notified. This helps ensure you take timely action and keep resources healthy and performant.
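As an illustration, the following is a minimal Python sketch of a “cluster node down” check using the azure-monitor-query package. It assumes heartbeats land in the standard Heartbeat table of your Log Analytics workspace; the workspace ID is a placeholder, and the tables your Data Collection Rule populates may differ:

# pip install azure-monitor-query azure-identity
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Nodes that have not reported a heartbeat in the last 10 minutes.
query = """
Heartbeat
| summarize LastSeen = max(TimeGenerated) by Computer
| where LastSeen < ago(10m)
"""

result = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(hours=1),
)
for table in result.tables:
    for row in table.rows:
        print(f"Node down: {row['Computer']} (last seen {row['LastSeen']})")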


[Screenshot: alert set up from the Insights workbook]


Here is a video with more details. 


 


Future plans 


This is just the beginning of Monitoring Insights for Azure Stack HCI. We plan to build additional workbook templates for new HCI features and essential monitoring capabilities. If you have feedback, please send it to  hcimonitoring@microsoft.com!  


 


For more detailed information, please visit our documentation for Single Cluster Insights and Multiple Cluster Insights.


 

Split recurring contract billing across multiple customers 


A key component of the Subscription billing feature is the Recurring contract billing module. Recurring contract billing allows customers to manage their recurring billing contracts through billing schedules, which contain the financial details of a contract. Recurring billing contracts can be managed across one or many customers based on how a contract is drafted. The new Customer split feature allows Dynamics 365 Finance users to split billing schedules across multiple customers based on a percentage of the invoice. This feature reduces the risk of incorrect billing, as a single billing schedule can manage the billing for all customers that are to be billed.

What is Customer Split?

Customer split allows a single billing schedule to be billed across multiple customers. For example, let’s consider a scenario where a contract should be billed to two customers: one is responsible for 60% of the bill and the other is responsible for 40%. Customer split allows users to configure a scenario such as this and reduce additional manual entry as well as reduce risk of inaccurate billing. 

The feature is enabled by setting the Customer split parameter in Recurring contract billing parameters page to Yes. 

 
Once the Customer split feature has been enabled in the Recurring contract billing parameters, the customer split can be set up on a billing schedule. The billing schedule header contains the primary customer responsible for the invoice, including the Bill to address on the Address tab. 

The Customer split option under Billing schedule in the action pane can be used to add additional customers and their responsibility for the bill at the header level. Customer split can also be added on a line-by-line basis.  

When creating the record for the customer split, the billing schedule’s parent customer is billed the remainder of what is not defined; in our example, that is 60% (see the sketch below). When defining the customer split, a start date, end date, customer reference, customer requisition, end user account, end user name, delivery address, and bill-to address can be entered.
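To make the allocation concrete, here is a small illustrative Python calculation; the names and percentages are hypothetical, and Dynamics 365 Finance performs this allocation itself when generating invoices:

def split_invoice(total, splits):
    """splits maps each additional customer to its percentage; the billing
    schedule's parent customer is billed the remainder."""
    amounts = {cust: round(total * pct / 100, 2) for cust, pct in splits.items()}
    amounts["Parent customer"] = round(total - sum(amounts.values()), 2)
    return amounts

# Contract billed 40% to Customer B; the parent customer keeps the remaining 60%.
print(split_invoice(1000.00, {"Customer B": 40}))
# {'Customer B': 400.0, 'Parent customer': 600.0}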

When generating invoices for a billing schedule that has customer split defined, a sales order will be created for each customer defined in the customer split as well as the billing schedule header customer. 

Customer split is available on a billing schedule or billing schedule lines when: 

  • Billing schedules have an Invoice transaction type of Sales order
  • Billing schedule line is a service item 
  • Billing schedule is not linked to a project 
  • Billing schedule line is not configured for unbilled revenue

How to get started 

This functionality is available in version 10.0.29 and later of Dynamics 365 Finance.
Read the documentation for a more detailed look at the feature: Customer split on billing schedules.


New transactable offers from Signly, Tessell, and Varonis in Azure Marketplace


Microsoft partners like Signly, Tessell, and Varonis deliver transact-capable offers, which allow you to purchase directly from Azure Marketplace. Learn about these offers below:

Signly SLaaS: Signly sign language as a service (SLaaS), a fully managed solution powered by Microsoft Azure, makes it easy to provide access to sign language by capturing the text of a web page and sending it to highly qualified deaf sign language translators. Translated content is then available for all users, enabling website owners to provide improved service for deaf customers.



Tessell – Migrate and Manage Oracle on Azure: Tessell is a fully managed database as a service (DBaaS) designed to enable Oracle databases to thrive on Microsoft Azure by delivering enterprise-grade functionality coupled with consumer-grade experience. Tessell makes deploying Oracle databases on Azure simple and elegant, taking care of your data infrastructure and data management needs for both Oracle Enterprise Edition and Standard Edition 2.



Varonis – Find, Monitor, and Protect Sensitive Data: Is your midsize or large organization trying to understand where your sensitive data is, who has access to it, and what users are doing with it? The Varonis platform protects your data with low-touch, accurate security outcomes by classifying more data, revoking permissions, enforcing policies, and triggering alerts for the Varonis incident response team to review on your behalf.


3 ways collaborative apps like Workday in Microsoft Teams boost engagement and productivity


Enterprises are increasingly turning to collaborative apps to enhance workplace engagement and productivity. That presents an opportunity for independent software vendors (ISVs) to earn customer loyalty by building easily accessible enterprise apps with rich features that deliver business value.


Propelling the Aerodynamics of Enterprise Innovation: Announcing the Microsoft AI SDK for SAP ABAP



 


 


We are excited to announce the launch of Microsoft AI SDK for SAP ABAP. This software development kit (SDK) is designed to provide SAP ABAP developers with the tools they need to create intelligent enterprise applications using Artificial Intelligence (AI) technologies.

  • Git repository location: AI SDK for SAP ABAP (github.com)
  • Documentation: AI SDK for SAP Documentation
  • Discussions: Discussions · GitHub
  • Issues: AI SDK for SAP ABAP: Issue Reporting

 


Engineered with a deep understanding of developers’ needs, the Microsoft AI SDK for SAP ABAP presents an intuitive interface that effortlessly brings AI capabilities to your ABAP applications. This toolkit offers an exciting avenue to tap into the power of Azure OpenAI. And this is just the beginning — our commitment to progress promises the inclusion of even more AI engines in future versions.


 


Azure OpenAI, the crown jewel of Microsoft Azure’s offerings, is a powerhouse of AI services and tools. It is your passport to harnessing machine learning algorithms, leveraging advanced natural language processing tools, and exploring versatile cognitive services. Its vast suite of tools paves the way for the creation of intelligent applications that excel in pattern detection, natural language processing, and data-driven predictions. Azure OpenAI grants you access to an array of pre-built AI models and algorithms, along with custom model training and deployment tools, all under the umbrella of stringent security, compliance, and data privacy standards.


 


With the AI SDK for SAP ABAP and Azure OpenAI integration with SAP, developers are on the brink of a new frontier. Now you have the power to craft innovative applications that can revolutionize the enterprise landscape by automating mundane tasks, bolstering smarter business decisions, and providing a more personalized customer experience. It’s more than a development kit — it’s your passport to an exciting future of technological evolution for enterprises running on the SAP platform.


 


Features:


The Microsoft AI SDK for SAP ABAP v1.0 is not just a toolset, it’s an innovation accelerator, an efficiency propellant. Designed for ABAP developers, it supercharges their workflows, slashing the time taken to integrate cutting-edge AI capabilities. With its streamlined integration process and ABAP-ready data types, developers can fast-track their tasks and concentrate on their real mission – crafting intelligent, transformative applications. This is no ordinary toolkit; it’s your express lane to the future of enterprise software development.


 



  • Extensive Capabilities: It provides a comprehensive suite of functionalities, including Models, Deployment, Files, Fine-Tuning, and Completion (GPT3), along with Chat Completion (GPT4) capabilities.


  • ABAP-Ready Data Types: We’ve simplified the integration process for ABAP developers by offering ABAP-ready data types. This feature substantially lowers the entry barriers, enabling developers to leverage the SDK with ease.


  • Azure OpenAI Support: The SDK is fully compatible with Azure OpenAI, ensuring seamless integration and performance.


  • Enterprise Control: To safeguard sensitive data, we’ve incorporated a robust enterprise control mechanism, offering three levels of control granularity. Enterprises can effectively manage SDK usage by implementing policies to permit or block specific functionalities. For instance, an organization could use authorizations to designate a user group capable of performing setup operations (Deployment, Files, and Fine-Tuning), while enabling all users to utilize the Completions functionality.


  • Flexible Authentication: The SDK supports authentication using either Azure OpenAI Keys or Azure Active Directory (AAD), providing users with a secure and flexible approach to authentication.


 


In this age of relentless technological progress, AI is undeniably the cornerstone of enterprise software development’s future. The Microsoft AI SDK for SAP ABAP is a dynamic and transformative tool, purpose-built for SAP professionals. It’s not just a toolkit; it’s a supercharger for your innovative instincts, enabling you to build intelligent, data-centric applications. Our aim is to help businesses stay nimble and competitive in a marketplace where the pace of innovation is breakneck.


The launch of the Microsoft AI SDK for SAP ABAP is a leap into the future. It encapsulates our commitment to fostering the symbiotic relationship between technology and business, nurturing an environment where the opportunities for innovation are limitless. As we unfurl this state-of-the-art tool, we can’t wait to see the inventive applications that you, the talented developers working within the SAP ecosystem, will craft. The potential is staggering, poised to redefine how businesses operate and flourish.


 


And our commitment doesn’t stop at providing you with the tools. We pledge unwavering support on your journey of discovery and innovation with the Microsoft AI SDK for SAP ABAP. We’re with you every step of the way — to guide, support, and celebrate as you traverse this transformative technological landscape. Let’s stride boldly together into this new era of intelligent, data-driven enterprise solutions. The future is here, and it’s brighter than ever.


 


Best Regards,


Gopal Nair – Principal Software Engineer, Microsoft – Author

Amit Lal – Principal Technical Specialist, Microsoft – Contributor


 


Join us and share your feedback: Azure Feedback




#MicrosoftAISDK #AISDKforSAPABAP #EnterpriseGPT #GPT4 #AzureOpenAI #SAPonAzure #SAPABAP


 


Disclaimer: The announcement of the Microsoft AI SDK for SAP ABAP is intended for informational purposes only. Microsoft reserves the right to make adjustments or changes to the product, its features, availability, and pricing at any time without prior notice. This blog does not constitute a legally binding offer or guarantee of specific functionalities or performance characteristics. Please refer to the official product documentation and agreements for detailed information about the product and its use. Microsoft is deeply committed to the responsible use of AI technologies. It is recommended to review and comply with all applicable laws, regulations, and organizational policies to ensure the responsible and ethical use of AI.

Azure Policy Violation Alert using Logic apps


Numerous articles have been written about Azure log alert notification actions using a logic app, but I notice that most of them are either very brief or don’t go into great detail about all the nuances, tips, and tricks. I therefore wanted to write one with as much detail as I could, plus some fresh additional strategies. I hope this aids in developing the logic and putting it into practice.


 


So let’s get going. We already know that we need to create the alert rule and choose the logic app as its action, and that the logic app starts with the “When a HTTP request is received” trigger for the alert notification. So let’s construct one.


 


[Screenshot: “When a HTTP request is received” trigger]


 


We can use the following sample schema for the trigger task:


 


 


{
    "type": "object",
    "properties": {
        "schemaId": { "type": "string" },
        "data": {
            "type": "object",
            "properties": {
                "essentials": {
                    "type": "object",
                    "properties": {
                        "alertId": { "type": "string" },
                        "alertRule": { "type": "string" },
                        "severity": { "type": "string" },
                        "signalType": { "type": "string" },
                        "monitorCondition": { "type": "string" },
                        "monitoringService": { "type": "string" },
                        "alertTargetIDs": { "type": "array", "items": { "type": "string" } },
                        "configurationItems": { "type": "array", "items": { "type": "string" } },
                        "originAlertId": { "type": "string" },
                        "firedDateTime": { "type": "string" },
                        "description": { "type": "string" },
                        "essentialsVersion": { "type": "string" },
                        "alertContextVersion": { "type": "string" }
                    }
                },
                "alertContext": {
                    "type": "object",
                    "properties": {
                        "properties": {},
                        "conditionType": { "type": "string" },
                        "condition": {
                            "type": "object",
                            "properties": {
                                "windowSize": { "type": "string" },
                                "allOf": {
                                    "type": "array",
                                    "items": {
                                        "type": "object",
                                        "properties": {
                                            "searchQuery": { "type": "string" },
                                            "metricMeasureColumn": {},
                                            "targetResourceTypes": { "type": "string" },
                                            "operator": { "type": "string" },
                                            "threshold": { "type": "string" },
                                            "timeAggregation": { "type": "string" },
                                            "dimensions": { "type": "array" },
                                            "metricValue": { "type": "integer" },
                                            "failingPeriods": {
                                                "type": "object",
                                                "properties": {
                                                    "numberOfEvaluationPeriods": { "type": "integer" },
                                                    "minFailingPeriodsToAlert": { "type": "integer" }
                                                }
                                            },
                                            "linkToSearchResultsUI": { "type": "string" },
                                            "linkToFilteredSearchResultsUI": { "type": "string" },
                                            "linkToSearchResultsAPI": { "type": "string" },
                                            "linkToFilteredSearchResultsAPI": { "type": "string" }
                                        },
                                        "required": [
                                            "searchQuery",
                                            "metricMeasureColumn",
                                            "targetResourceTypes",
                                            "operator",
                                            "threshold",
                                            "timeAggregation",
                                            "dimensions",
                                            "metricValue",
                                            "failingPeriods",
                                            "linkToSearchResultsUI",
                                            "linkToFilteredSearchResultsUI",
                                            "linkToSearchResultsAPI",
                                            "linkToFilteredSearchResultsAPI"
                                        ]
                                    }
                                },
                                "windowStartTime": { "type": "string" },
                                "windowEndTime": { "type": "string" }
                            }
                        }
                    }
                },
                "customProperties": {}
            }
        }
    }
}


 


 


However, as can be seen, the trigger output above is insufficient to provide a thorough error message for the notification. We must perform additional tasks to obtain the message.


 


The same query that was used in the alert rule can be run again with additional filtering options to produce the error code and message.


 


[Screenshot: Log Analytics query extracting the error code and message]


 


 


 


The query above illustrates how to extract the error message from the Properties field using multiple parsing iterations.
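Since the original screenshot is not reproduced here, the following is a hedged Python sketch of that kind of query, run through the azure-monitor-query package. The KQL pulls policy “deny” failures from the AzureActivity table and parses the error code and message out of the Properties field; the column and path names follow the common AzureActivity schema and may need adjusting for your workspace:

# pip install azure-monitor-query azure-identity
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
AzureActivity
| where OperationNameValue =~ "Microsoft.Authorization/policies/deny/action"
| extend props = parse_json(Properties)
| extend status = parse_json(tostring(props.statusMessage))
| project TimeGenerated, SubscriptionId, Caller,
          ErrorCode = tostring(status.error.code),
          ErrorMessage = tostring(status.error.message)
"""

result = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(minutes=5),  # matches the alert evaluation window
)
for table in result.tables:
    for row in table.rows:
        print(row["ErrorCode"], row["ErrorMessage"])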


 


Now, initialise the variables as shown below.


 


[Screenshot: Initialize variable tasks]


 


Create four “Initialize variable” tasks of type String: “Runquery,” “Owner,” “HTMLtable,” and “Authorise.”


 


Also keep in mind that the query result may contain multiple logs. Therefore, we will use a For each loop to go through the error logs one at a time and send a notification for each. Let’s create the following For each task.


 


[Screenshot: For each task]


 


The result of the “Run query and list result” task is the value.


 


The next step is to retrieve the current Log from a variable we previously initialised.  Let’s now set the value for that variable using the current item from the Foreach task.


 


[Screenshot: Set variable task for the current log]


 




 


Parse this variable into JSON so that we can use its field values in subsequent tasks. To obtain the schema, simply run the logic app to get the output of the variable, then copy that output and paste it into the sample payload link of the task below.


 


[Screenshot: Parse JSON task for the current log]


 


 


Our actual strategy is to email a notification for each error log. In this instance, the owner of the subscription will receive the email containing the reported error or violation.


 


Because the query is run once more after the alert fires, we must make sure we only process the logs captured within the alert rule window itself. So let’s add a condition to gather only those logs.


 


To ensure that the TimeGenerated field (chosen from the “Parse JSON” task above) falls between the alert window’s start and end times from the trigger, we will create a Condition task.


 


[Screenshot: Condition task comparing TimeGenerated with the alert window]


 


Now, if the condition is true, we can move on to obtaining the owner users’ information. If you have numerous subscriptions and want to display the subscription name in the notification as well, use an HTTP action for an API GET call, with the API URL shown below and the SubscriptionID from the current query’s Parse JSON task.


 


[Screenshot: HTTP GET action for subscription details]


 


You can choose Managed Identity (of the logic app) as the authentication type. Before setting up this task, open Identity from the logic app’s main menu, enable the managed identity, and grant it Reader permission on each subscription.


 


Run the logic app now to obtain the results of the API request. To get the subscription’s attributes, copy the output and paste it into the sample payload of the subsequent Parse JSON task.


 


[Screenshot: Parse JSON task for subscription attributes]


 


 


The owners must now be filtered from the subscription’s users. To accomplish that, let’s create another HTTP action for an API GET task (a sketch of the underlying REST call follows below).
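For reference, this is a minimal Python sketch of the underlying ARM REST call the HTTP action makes, assuming the 2022-04-01 API version; the logic app authenticates the same way using its managed identity:

# pip install azure-identity requests
import requests
from azure.identity import DefaultAzureCredential

sub_id = "<subscription-id>"  # from the current log's Parse JSON task
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{sub_id}"
    "/providers/Microsoft.Authorization/roleAssignments"
    "?api-version=2022-04-01"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

for assignment in resp.json()["value"]:
    props = assignment["properties"]
    # roleDefinitionId identifies the role (e.g., Owner); principalId is the assignee.
    print(props["principalId"], props["roleDefinitionId"])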


 


[Screenshot: HTTP GET action for role assignments]


 


 


Let’s run the logic app once more to obtain the results of this API task, then copy and paste them into the sample payload of the subsequent Parse JSON task to obtain the schema. Make sure the content you choose is the body of the API task above.


 


[Screenshot: Parse JSON task for users]


 


 


We now have every role assignment on the subscription for the current log. To send the notification, however, we only need the Owner user. Therefore, we must use the For each task once more to go through each assignment and find the Owner user. The output of the previous Parse JSON task serves as the value for this.


 


[Screenshot: For each task over role assignments]


 


Let’s now put the details of the current user into a variable. Keep in mind that we previously initialised the variable “Owner.” Create a Set variable task now to set its value, making sure the value is the current item of the For each task.


 


[Screenshot: Set variable task for the owner]


 


To use the current user’s attribute values in later tasks, we must now parse the variable into JSON.


 


[Screenshot: Parse JSON task for the current user]


 


 


To obtain the schema, run the logic app once more, copy the output of the variable above, and paste it into the sample payload link.


 


To identify the Owner user, we must now obtain the Owner role definition ID (which is the same across Azure subscriptions). To obtain it, go to your subscription’s IAM (Access control), click Role assignments, select any Owner, and open the JSON tab. Alternatively, you can use PowerShell/CLI, or validate the owner’s role definition ID from the output of the “Parse JSON for Users” task after running the logic app. Copy it for later use.


 


[Screenshot: Role assignments under IAM (Access control)]


 


 


[Screenshot: Role assignment JSON view]


 


 


The GUID can also be copied from the end of the ID value.


 


To select only the Owner user for subsequent tasks, we must now create a Condition task to filter the users. The ID field from the “Parse JSON for current user” task should be used as the condition field.


 


[Screenshot: Condition task filtering for the Owner role]


 


 


The most crucial point now is that we must run a Microsoft Graph API query to obtain user attributes such as email and UPN; the results of the current API queries are insufficient for obtaining those attributes. To access them, we need permissions in Azure AD: create an SPN (app registration), grant the following API permissions, and grant admin consent.

Permission                  Type
Directory.AccessAsUser.All  Delegated
Directory.ReadWrite.All     Delegated
Directory.ReadWrite.All     Application
Group.ReadWrite.All         Delegated
Group.ReadWrite.All         Application
User.Read                   Delegated
User.Read.All               Delegated
User.Read.All               Application
User.ReadWrite.All          Delegated
User.ReadWrite.All          Application



 


Additionally, copy the App (client) ID and Tenant ID, create a client secret, and copy the secret value for the subsequent task.


 


Now create the following HTTP action to run the Graph API query, using the principalId from the “Parse JSON for current user” task to look up the current user (a Python sketch of the same call follows below).
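For context, here is a hedged Python sketch of the same Graph call using client-credentials authentication with the app registration created above; the IDs and secret are placeholders:

# pip install msal requests
import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id="<app-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

principal_id = "<principalId-from-parse-json>"  # the current owner user
resp = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{principal_id}"
    "?$select=displayName,mail,userPrincipalName",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()
user = resp.json()
print(user["displayName"], user.get("mail"), user["userPrincipalName"])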


 


[Screenshot: HTTP action for the Graph API query]


 


 


Choose the Authentication parameter and enter the Tenant ID, App (client) ID, and secret copied from the SPN.


 


Create a new “Parse JSON” task for the output of the above API. To obtain the schema, run the logic app once more, copy the task’s output, and paste it into the parse JSON task’s sample payload.


 


[Screenshot: Parse JSON task for the Graph API output]


 


 


We should now give the notification a good format for the email. We’ll use an HTML table for that, filled with information from the query above (such as the error code, error message, severity, and subscription name). You are free to use your own format, but the sample provided at the GitHub link below can serve as a guide. Select the HTMLtable variable (initialised earlier) and use a “Set variable” task to paste in the value from the example HTML code attached below.


 


<>
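Since the sample link above is elided, here is a hypothetical Python stand-in showing the general shape of the HTML the “Set variable” task assigns to HTMLtable. The column names are illustrative only; in the logic app, the cell values come from the earlier Parse JSON tasks:

# Hypothetical illustration of the HTML table assembled for the email body.
row = {
    "Subscription": "<subscription name>",
    "Severity": "<severity>",
    "Error code": "<error code>",
    "Error message": "<error message>",
}
html = (
    "<table border='1'><tr>"
    + "".join(f"<th>{h}</th>" for h in row)
    + "</tr><tr>"
    + "".join(f"<td>{v}</td>" for v in row.values())
    + "</tr></table>"
)
print(html)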


 


[Screenshot: Set variable task for the HTML table]


 


 


 


Update the fields/values as indicated below in the code at the appropriate lines/locations.


 


[Screenshot: fields/values to update in the sample HTML code]


 


 


 


 


After that, create an Office 365 Outlook “Send an email (V2)” task to send the notification.


 


[Screenshot: Send an email (V2) task]


 


 


 


You will receive an email like the one below.


 


[Screenshot: sample notification email]


 


 


Before we go any further, make sure your alert rule has been created in Azure Monitor with the aforementioned logic app selected as the action, and that the error/administrative diagnostic logs of all subscriptions are being sent to the Log Analytics workspace. If you want separate alert rules for “Error” and “Critical,” create them separately and choose the same logic app as the action. Here is just a sample.


 


[Screenshot: alert rule configuration]


 


 


And the condition query should be as below (you can modify it per your requirements).


 


[Screenshot: alert rule condition query]


 


 


 


The Log Analytics workspace (activity logs) is evaluated every 5 minutes, and if any policy violation errors are discovered, an alert is fired. The logic app is activated as soon as the alert fires, and the owner of the resource’s subscription receives a notification email in the format shown above with all the necessary information.


 


Hope you had a great read, and happy learning.

Status-Driven Success: Managing Your Work Order Lifecycle through Statuses – Part 1  


One aspect of Dynamics 365 Field Service’s adaptability lies in the flexibility of its ‘status’ functionality, driving a meaningful work order lifecycle tailored to each organization. Field Service helps manage work orders and bookings across diverse use cases. This blog series explores critical status concepts: 

  • System Status and Substatus for Work Orders 
  • Booking Status and Field Service Status for Bookings 
  • Booking Status Impact on Work Orders Across Single and Multi-Booking Scenarios 

Grasping these concepts allows organizations to leverage the solution’s functionality, optimize field service processes, and ultimately provide better customer service. 

This blog will expand upon many of the concepts discussed in the existing work order and booking status documentation: Work order life cycle and statuses – Dynamics 365 Field Service | Microsoft Learn 

Work Order Status Concepts: System Status and Substatus 

System Status 

Work orders in Dynamics 365 Field Service have a column called System Status which helps organizations manage their field service processes efficiently. There are six values available for System Status: 

  • Unscheduled: Work order has been created, but resources have not been assigned. 
  • Scheduled: Work order has resources assigned and is ready for execution. 
  • In Progress: Work order is currently being executed by the assigned resources. 
  • Completed: Work order has been executed and finished. 
  • Posted: Work order has been invoiced and is now closed. 
  • Cancelled: Work order has been cancelled and will not be executed. 

As the documentation highlights, an organization must use this field as-is, because these values allow the Field Service solution to interpret the current state of the work order record and apply appropriate behaviors and validations. Changing this list could cause many issues, both immediate and unanticipated, down the line, since new values would not be interpretable by the solution. However, the Field Service solution has a powerful related concept that provides great flexibility and maps directly to these System Status values.

Substatus 

In Dynamics 365 Field Service, the Substatus table plays a crucial role in providing organizations with the ability to create meaningful states that are mapped to System Statuses. One noteworthy feature of the Substatus table is the option to define a “Default” Substatus for each mapped System Status. This default Substatus will be automatically applied when a work order transitions into the corresponding System Status through the out-of-the-box (OOTB) logic.  

The Default Substatus feature within the Substatus table allows organizations to streamline their work order management process by automatically applying a predefined Substatus when a work order moves into a particular System Status using the out-of-the-box logic. This helps ensure consistency across work orders while still allowing for customization and adaptability when needed.

For example, if your organization has a default Substatus of “Pending Customer Confirmation” for the System Status “Scheduled,” any work order that moves into the “Scheduled” System Status due to the standard logic will automatically be assigned the “Pending Customer Confirmation” Substatus. This helps maintain consistency and simplify the management of work orders, especially when dealing with a high volume of work orders. 

It’s important to note that if a work order already has a different Substatus applied within the same System Status, the default Substatus will not be applied. This means that if an organization adds custom logic to set a Substatus, the default logic will not override it. The existing Substatus will remain in place, allowing organizations to maintain flexibility and customization for specific work order situations. It is also worth noting that any custom logic is still subject to the allowed System Status validations (for example, the work order cannot be forced into a Scheduled System Status if there are no bookings).

Further, direct updates to the Substatus field drive updates to the work order’s System Status, within the allowable System Status changes. For example, using some of the Substatus records proposed below, if a work order is in the “Technician Assignment Pending” Substatus, which maps to the Unscheduled System Status, a user could change the Substatus directly to “Customer Cancelled,” which immediately moves the work order’s System Status to Cancelled. It is worth noting that the default form UI should filter the available Substatus values to allowed changes, based on the current state of the work order and the mapped System Status of each Substatus (see the sketch after the example table below). In this example, none of the Substatuses mapped to the Scheduled or In Progress System Statuses would have shown up in the UI; they would have been dynamically filtered out so that a user couldn’t make a choice that isn’t allowed.

Example: Substatus Records with Mapped System Status 

The following are an example set of possible meaningful Work Order Substatuses. These Substatuses will be mapped to the appropriate System Status to help drive actions and behaviors in the system while communicating meaningful information to anyone who looks at the work order in Dynamics 365 Field Service. 

Substatus  Mapped System Status  Default Substatus  What it communicates to the user who glances at the work order 
Technician Assignment Pending  Unscheduled  Yes  The work order has been created, but no technician has been assigned yet. 
Awaiting Parts  Unscheduled  No  The work order requires special parts which are on order, and the required parts are pending delivery to the job site or warehouse. 
Pending Customer Confirmation  Scheduled  Yes  The work order has been tentatively scheduled, and the booking is awaiting confirmation from the customer regarding their preferred appointment time or other necessary details. 
Appointment Confirmed  Scheduled  No  The work order has been scheduled and the customer has confirmed the appointment time. Every reasonable effort should be made to meet the commitment made with the customer. 
Remote Support Scheduled  Scheduled  No  The work order has been scheduled for remote support, such as a software installation or configuration. 
Service In Progress  In Progress  Yes  The technician is performing the service. 
Work Order Completed Successful  Completed  Yes  The work order has been successfully completed, and the scope of the work order has been resolved. 
Work Order Unresolved  Completed  No  The bookings have been completed, but the scope of the work order has not been resolved. Additional action may be required, such as escalating the issue to a higher-level technician or recommending alternative solutions to the customer. 
Work Order Invoiced  Posted  Yes  The work order has been invoiced, and the billing process is complete. 
Customer Cancelled  Cancelled  Yes  The work order has been cancelled by the customer. 
Resolved Remotely  Cancelled  No  The work order has been cancelled because the issue was resolved remotely by customer service. 
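To make the filtering behavior concrete, here is an illustrative Python model (not actual Field Service code) of how the form could limit the visible Substatus choices to those whose mapped System Status is reachable from the current one. The transition rules below are simplified assumptions for demonstration only:

# Substatus -> mapped System Status, from the example table above.
SUBSTATUS_TO_SYSTEM_STATUS = {
    "Technician Assignment Pending": "Unscheduled",
    "Awaiting Parts": "Unscheduled",
    "Pending Customer Confirmation": "Scheduled",
    "Appointment Confirmed": "Scheduled",
    "Remote Support Scheduled": "Scheduled",
    "Service In Progress": "In Progress",
    "Work Order Completed Successful": "Completed",
    "Work Order Unresolved": "Completed",
    "Work Order Invoiced": "Posted",
    "Customer Cancelled": "Cancelled",
    "Resolved Remotely": "Cancelled",
}

# Simplified, assumed transition rules (scheduling itself happens via bookings).
ALLOWED_TRANSITIONS = {
    "Unscheduled": {"Unscheduled", "Cancelled"},
    "Scheduled": {"Scheduled", "Unscheduled", "Cancelled"},
    "In Progress": {"In Progress", "Completed"},
    "Completed": {"Completed", "Posted"},
    "Posted": {"Posted"},
    "Cancelled": {"Cancelled"},
}

def available_substatuses(current_system_status):
    """Substatus values the dropdown would offer from the current state."""
    reachable = ALLOWED_TRANSITIONS[current_system_status]
    return [s for s, sys in SUBSTATUS_TO_SYSTEM_STATUS.items() if sys in reachable]

# From Unscheduled, the Scheduled and In Progress substatuses are filtered out:
print(available_substatuses("Unscheduled"))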

Conclusion

While these are examples of Substatuses which may be valuable, each organization can create their own, set the defaults that make sense, and map them to relevant System Status values. 

Next up in the blog series –

Part 2 – Booking Status Concepts: Booking Status and Field Service Status
Part 3 – Booking status impact on work orders across single and multi-booking scenarios 
