This post is co-authored by John Ryan, Manager Functional Architect Dynamics 365 Field Service, Avanade.
One of the most exciting things about the introduction of AI into the tools people use every day is how it can revolutionize the way people work. Especially at the frontlines of business, AI provides organizations with innovative and personalized ways to serve customers. According to IDC, 28% of organizations are investing significantly in generative AI.[1] This is what's exciting about the introduction of Copilot in Microsoft Dynamics 365 Field Service.
No doubt about it: modern solutions like Microsoft Dynamics 365 Field Service have already come a long way in helping frontline workers be more productive and efficient in helping customers. But Copilot takes things to the next level by bringing the power of next-generation AI to the frontlines, enabling faster resolution and better service.
Streamline Field Service operations with Copilot
Copilot provides a leap forward in the field service space.
Enabling next-level support with Copilot for Field Service in Outlook and Microsoft Teams
Email has long been a critical communications tool for frontline managers and technicians. New data from Microsoft's 2023 Work Trend Index Annual Report reveals that over 60% of frontline workers struggle with having to do repetitive or menial tasks that take time away from more meaningful work.[2] Now, the Copilot in Dynamics 365 Field Service Outlook add-in can streamline work order creation with relevant details pre-populated from emails.
So, what does that mean, exactly? Copilot can optimize technician scheduling with data-driven recommendations based on factors such as travel time, availability, and skillset. Frontline managers can see relevant work orders and review them before creating new work orders, and they can easily reschedule or update those work orders as customers' needs change. In addition, organizations can customize work orders for their frontline needs by adding, renaming, or rearranging fields. Even better, Copilot can assist frontline managers with work order scheduling in Microsoft Teams, saving time and effort in finding the right worker for the job.
Frontline managers can also easily open the Field Service desktop app directly from the Copilot add-in via Outlook or Teams to view work orders. There, they can see booking suggestions in the work order and book a field technician without opening the schedule board. The booking is created in Microsoft Dataverse and also gets recorded on the Field Service schedule board automatically. All this saves frontline managers valuable time because they can stay in the flow of work, reduce clicks and context-switching between apps, and create work orders quickly without copy/paste errors. In the Field Service app, they can also review work order list views and edit a work order right in the list without having to reopen it.
Getting answers faster with natural language search using Copilot in Teams
Searching work orders to find specific details about customer jobs or looking for information about parts inventory used to mean switching between apps and searching across different sources for information. Now, to search for work orders or other customer data, agents can ask Copilot through a Teams search. They simply ask what they’re looking for using natural language, and Copilot will return specific information related to their work orders in Dynamics 365 Field Service including status updates, parts needed, or instructions to help them complete the job. The more agents use Copilot, the more the AI assistant learns and can assist agents at their jobs. The future is now.
Empowering field technicians with a modern user experience
Frontline managers aren’t the only team members getting a productivity boost from more modern tools. The new Dynamics 365 Field Service mobile experience, currently in preview for Windows 10 and higher, iOS, and Android devices, empowers field technicians by giving them all the relevant, most up-to-date information they need to manage work orders, tasks, services, and products and get their jobs done thoroughly and efficiently. This modern user experience supports familiar mobile navigation, gestures, and controls to streamline managing work order Tasks, Services, and Products. Technicians can save valuable time by quickly updating the status of a booking, getting driving directions to a customer site, and changing or completing work order details. They can even get detailed information about tasks with embedded Microsoft Dynamics 365 Guides, which provide step-by-step instructions, pictures, and videos.
Changing the game for frontline technicians with Copilot in mobile
For field service technicians, having Copilot generate work order summaries that include concise, detailed descriptions of services as well as pricing and costs is a game changer. Work order summaries are generated by Copilot on the fly, synthesizing information from various tabs and fields to break down tasks, parts, services, and problem descriptions into a simple narrative, making it easy for technicians to understand job requirements. And because field technicians often need to work with their hands, they can use the voice-to-text feature to update work orders by describing details including exactly what they did on a job, when they started and finished, and what parts they used. When the work is completed, they can use the app to collect a digital signature from the customer or use voice-to-text to capture customer feedback.
Learn more about the AI-powered experiences in Dynamics 365 Field Service, Teams, and Microsoft's mixed reality applications for your frontline workforce announced at Microsoft Ignite 2023.
[1] IDC Analyst Brief sponsored by Microsoft, Generative AI and Mixed Reality Power the Future of Field Service Resolution (Doc #US51300223), October 2023
[2] The Work Trend Index survey was conducted by an independent research firm, Edelman Data x Intelligence, among 31,000 full-time employed or self-employed workers across 31 markets, 6,019 of which are frontline workers, between February 1, 2023, and March 14, 2023. This survey was 20 minutes in length and conducted online, in either the English language or translated into a local language across markets. One thousand full-time workers were surveyed in each market, and global results have been aggregated across all responses to provide an average. Each market is evenly weighted within the global average. Each market was sampled to be representative of the full-time workforce across age, gender, and region; each sample included a mix of work environments (in-person, remote vs. non-remote, office settings vs. non-office settings, etc.), industries, company sizes, tenures, and job levels. Markets surveyed include: Argentina, Australia, Brazil, Canada, China, Colombia, Czech Republic, Finland, France, Germany, Hong Kong, India, Indonesia, Italy, Japan, Malaysia, Mexico, Netherlands, New Zealand, Philippines, Poland, Singapore, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, United Kingdom, United States, and Vietnam.
We are excited to announce that Personal Desktop Autoscale on Azure Virtual Desktop is generally available as of November 15, 2023! With this feature, organizations with personal host pools can optimize costs by shutting down or hibernating idle session hosts, while ensuring that session hosts can be started when needed.
Personal Desktop Autoscale
Personal Desktop Autoscale is Azure Virtual Desktop's native scaling solution that automatically starts session host virtual machines according to a schedule or using Start VM on Connect, and then deallocates or hibernates (in preview) session host virtual machines based on the user session state (log off/disconnect).
The following capabilities are now generally available with Personal Desktop Autoscale:
Scaling plan configuration data can be stored in all regions where Azure Virtual Desktop host pool objects are, including Australia East, Canada Central, Canada East, Central US, East US, East US 2, Japan East, North Central US, North Europe, South Central US, UK South, UK West, West Central US, West Europe, West US, West US 2, and West US 3. The configuration needs to be stored in the same region as the host pool objects it will be assigned to; however, session host virtual machines can be deployed in all Azure regions.
You can use the Azure portal, the REST API, or PowerShell to enable and manage Personal Desktop Autoscale.
The following capabilities are new in public preview with Personal Desktop Autoscale:
Hibernation is available as a scaling action. With the Hibernate-Resume feature in public preview, you will have a better experience as session state persists when the virtual machine hibernates. As a result, when the session host virtual machine starts, the user will be able to quickly resume where they left off. More details of the Hibernate-Resume feature can be found here.
Getting started
To enable Personal Desktop Autoscale, you need to:
Create a personal scaling plan.
Define whether to enable or disable Start VM on Connect.
Choose what action to perform after a user session has been disconnected or logged off for a configurable period of time.
Assign a personal scaling plan to one or more personal host pools.
A screenshot of a scaling plan in Azure Virtual Desktop called “fullweek_schedule”. The ramp-down is shown as repeating every day of the week at 6:00 PM Beijing time, starting VM on Connect. Disconnect settings are set to hibernate at 30 minutes. Log off settings are set to shut down after 30 minutes.
If you want to use Personal Desktop Autoscale with the Hibernate-Resume option, you will need to self-register your subscription and enable Hibernate-Resume when creating VMs for your personal host pool. We recommend you create a new host pool of session hosts and virtual machines that are all enabled with Hibernate-Resume for simplicity. Hibernation can also work with Start VM on Connect for cost optimization.
You can set up diagnostics to monitor potential issues and fix them before they interfere with your Personal Desktop Autoscale scaling plan.
Azure AI Health Insights: New built-in models for patient-friendly and radiology insights
Azure AI Health Insights is an Azure AI service with built-in models that enable healthcare organizations to find relevant trials, surface cancer attributes, generate summaries, analyze patient data, and extract information from medical images.
Earlier this year, we introduced two new built-in models available for preview. These built-in models handle patient data in different modalities, perform analysis on the data, and provide insights in the form of inferences supported by evidence from the data or other sources.
The following models are available for preview:
Patient-friendly reports model: This model simplifies medical reports and creates a patient-friendly simplified version of clinical notes while retaining the meaning of the original clinical information. This way, patients can easily consume their clinical notes in everyday language. The Patient-friendly reports model is available in preview.
Radiology insights model: This model uses radiology reports to surface relevant radiology insights that can help radiologists improve their workflow and provide better care. The Radiology insights model is available in preview.
Simplify clinical reports
Patient-friendly reports is an AI model that provides an easy-to-read version of a patient’s clinical report. The simplified report explains or rephrases diagnoses, symptoms, anatomies, procedures, and other medical terms while retaining accuracy. The text is reformatted and presented in plain language to increase readability. The model simplifies any medical report, for example a radiology report, operative report, discharge summary, or consultation report.
The Patient-friendly reports model uses a hybrid approach that combines GPT models, healthcare-specialized Natural Language Processing (NLP) models, and rule-based methods. Patient-friendly reports also uses text alignment methods to allow mapping of sentences from the original report to the simplified report to make it easy to understand.
The system uses scenario-specific guardrails to detect hallucinations, omissions, and any other ungrounded content, and takes several steps to ensure that the full information from the original clinical report is kept and that no new information is added.
The Patient-friendly reports model helps healthcare professionals and patients consume medical information in a variety of scenarios. For example, Patient-friendly reports model saves clinicians the time and effort of explaining a report. A simplified version of a clinical report is generated by Patient-Friendly reports and shared with the patient, side by side with the original report. The patient can review the simplified version to better understand the original report, and to avoid unnecessary communication with the clinician to help with interpretation. The simplified version is marked clearly as text that was generated automatically by AI, and as text that must be used together with the original clinical note (which is always the source of truth).
Figure 1 Example of a simplified report created by the patient-friendly reports model
Improve the quality of radiology findings and flag follow-up recommendations
Radiology insights is a model that provides quality checks with feedback on errors and mismatches and ensures critical findings within the report are surfaced and presented using the full context of a radiology report. In addition, follow-up recommendations and clinical findings with measurements (sizes) documented by the radiologist are flagged.
The Radiology insights model returns inferences with references to the provided input, which can be used as evidence for a deeper understanding of the model's conclusions. The model helps radiologists improve their reports and patient outcomes in a variety of scenarios. For example:
Surfaces possible mismatches. A radiologist can be provided with possible mismatches between what the radiologist documents in a radiology report and the information present in the metadata of the report. Mismatches can be identified for sex, age and body site laterality.
Highlights critical and actionable findings. Often, a radiologist is provided with possible clinical findings that need to be acted on in a timely fashion by other healthcare professionals. The model extracts these critical or actionable findings where communication is essential for quality care.
Flags follow-up recommendations. When a radiologist uncovers findings for which they recommend a follow up, the recommendation is extracted and normalized by the model for communication to a healthcare professional.
Extracts measurements from clinical findings. When a radiologist documents clinical findings with measurements, the model extracts clinically relevant information pertaining to the findings. The radiologist can then use this information to create a report on the outcomes as well as observations from the report.
Assists in generating performance analytics for a radiology team. Based on extracted information, dashboards, and retrospective analyses, Radiology insights provides updates on productivity and key quality metrics to guide improvement efforts, minimize errors, and improve report quality and consistency.
Figure 2 Example of a finding with communication to a healthcare professional
Figure 3 Example of a radiology mismatch (sex) between metadata and content of a report with a follow-up recommendation
Get started today
Apply for the Early Access Program (EAP) for Azure AI Health Insights here.
After receiving confirmation of your entrance into the program, create and deploy Azure AI Health Insights on the Azure portal or from the command line.
Figure 4 Example of how to create an Azure Health Insights resource on Azure portal
After a successful deployment, you can send POST requests with patient data and the configuration required by the model you would like to try, and receive responses with inferences and evidence.
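For illustration, here is a minimal C# sketch of such a request. The endpoint path, model route, API version, and request body below are placeholders rather than the documented contract; check the Azure AI Health Insights documentation for the exact request format of the model you want to call.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class HealthInsightsRequestSample
{
    static async Task Main()
    {
        // Placeholder values: replace with your resource endpoint and key.
        string endpoint = "https://<your-resource>.cognitiveservices.azure.com";
        string apiKey = "<your-key>";

        // Placeholder request body: patient data plus the model configuration.
        string body = @"{ ""patients"": [ { ""id"": ""patient1"" } ], ""configuration"": { } }";

        using var client = new HttpClient();
        // Hypothetical route: substitute the real model path and API version from the docs.
        var request = new HttpRequestMessage(HttpMethod.Post,
            $"{endpoint}/health-insights/<model-route>/jobs?api-version=<api-version>");
        request.Headers.Add("Ocp-Apim-Subscription-Key", apiKey);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}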
Do more with your data with Microsoft Cloud for Healthcare
With Azure AI Health Insights, health organizations can transform their patient experience, discover new insights with the power of machine learning and AI, and manage protected health information (PHI) data with confidence. Enable your data for the future of healthcare innovation with Microsoft Cloud for Healthcare.
We look forward to working with you as you build the future of health.
Patient-friendly reports models and radiology insights model are capabilities provided “AS IS” and “WITH ALL FAULTS.” Patient-friendly reports and Radiology insights aren’t intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. These capabilities aren’t designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Patient-friendly reports model or Radiology insights model.
A few days ago, a customer asked us how to find out details about the active connections of a connection pool, how many connection pools their application has, and so on. In this article, I would like to share the lessons learned about how to see these details.
Once we executed our application, we started seeing the following information:
2023-11-26 09:38:18.998: Actual active connections currently made to servers 0
2023-11-26 09:38:19.143: Active connections retrieved from the connection pool 0
2023-11-26 09:38:19.167: Number of connections not using connection pooling 0
2023-11-26 09:38:19.176: Number of connections managed by the connection pool 0
2023-11-26 09:38:19.181: Number of active unique connection strings 1
2023-11-26 09:38:19.234: Number of unique connection strings waiting for pruning 0
2023-11-26 09:38:19.236: Number of active connection pools 1
2023-11-26 09:38:19.239: Number of inactive connection pools 0
2023-11-26 09:38:19.242: Number of active connections 0
2023-11-26 09:38:19.245: Number of ready connections in the connection pool 0
2023-11-26 09:38:19.272: Number of connections currently waiting to be ready 0
As our application uses a single connection string and a single connection pool, the details shown above are stable and easy to understand. But let's make a couple of changes to the code to see how the numbers change.
Our first change will be to open 100 connections and, once we reach those 100, close and reopen them to see how the counters fluctuate. The details we observe while our application is running indicate that connections are being opened but not closed, which is expected.
2023-11-26 09:49:01.606: Actual active connections currently made to servers 13
2023-11-26 09:49:01.606: Active connections retrieved from the connection pool 13
2023-11-26 09:49:01.607: Number of connections not using connection pooling 0
2023-11-26 09:49:01.607: Number of connections managed by the connection pool 13
2023-11-26 09:49:01.608: Number of active unique connection strings 1
2023-11-26 09:49:01.608: Number of unique connection strings waiting for pruning 0
2023-11-26 09:49:01.609: Number of active connection pools 1
2023-11-26 09:49:01.609: Number of inactive connection pools 0
2023-11-26 09:49:01.610: Number of active connections 13
2023-11-26 09:49:01.610: Number of ready connections in the connection pool 0
2023-11-26 09:49:01.611: Number of connections currently waiting to be ready 0
But as we keep closing connections and opening new ones, we start to see how our connection pooling is functioning:
2023-11-26 09:50:08.600: Actual active connections currently made to servers 58
2023-11-26 09:50:08.601: Active connections retrieved from the connection pool 50
2023-11-26 09:50:08.601: Number of connections not using connection pooling 0
2023-11-26 09:50:08.602: Number of connections managed by the connection pool 58
2023-11-26 09:50:08.602: Number of active unique connection strings 1
2023-11-26 09:50:08.603: Number of unique connection strings waiting for pruning 0
2023-11-26 09:50:08.603: Number of active connection pools 1
2023-11-26 09:50:08.604: Number of inactive connection pools 0
2023-11-26 09:50:08.604: Number of active connections 50
2023-11-26 09:50:08.605: Number of ready connections in the connection pool 8
2023-11-26 09:50:08.605: Number of connections currently waiting to be ready 0
In the following example, we can see how, once we have reached our 100 connections, the connection pool is serving our application the necessary connections.
2023-11-26 09:53:27.602: Actual active connections currently made to servers 100
2023-11-26 09:53:27.602: Active connections retrieved from the connection pool 92
2023-11-26 09:53:27.603: Number of connections not using connection pooling 0
2023-11-26 09:53:27.603: Number of connections managed by the connection pool 100
2023-11-26 09:53:27.604: Number of active unique connection strings 1
2023-11-26 09:53:27.604: Number of unique connection strings waiting for pruning 0
2023-11-26 09:53:27.605: Number of active connection pools 1
2023-11-26 09:53:27.606: Number of inactive connection pools 0
2023-11-26 09:53:27.606: Number of active connections 92
2023-11-26 09:53:27.606: Number of ready connections in the connection pool 8
2023-11-26 09:53:27.607: Number of connections currently waiting to be ready 0
Let’s review the counters:
Actual active connections currently made to servers (100): This indicates the total number of active connections that have been established with the servers at the given timestamp. In this case, there are 100 active connections.
Active connections retrieved from the connection pool (92): This shows the number of connections that have been taken from the connection pool and are currently in use. Here, 92 out of the 100 active connections are being used from the pool.
Number of connections not using connection pooling (0): This counter shows how many connections are made directly, bypassing the connection pool. A value of 0 means all connections are utilizing the connection pooling mechanism.
Number of connections managed by the connection pool (100): This is the total number of connections, both active and idle, that are managed by the connection pool. In this example, there are 100 connections in the pool.
Number of active unique connection strings (1): This indicates the number of unique connection strings that are currently active. A value of 1 suggests that all connections are using the same connection string.
Number of unique connection strings waiting for pruning (0): This shows how many unique connection strings are inactive and are candidates for removal or pruning from the pool. A value of 0 indicates no pruning is needed.
Number of active connection pools (1): Represents the total number of active connection pools. In this case, there is just one connection pool being used.
Number of inactive connection pools (0): This counter displays the number of connection pools that are not currently in use. A value of 0 indicates that all connection pools are active.
Number of active connections (92): Similar to the second counter, this shows the number of connections currently in use from the pool, which is 92.
Number of ready connections in the connection pool (8): This indicates the number of connections that are in the pool, available, and ready to be used. Here, there are 8 connections ready for use.
Number of connections currently waiting to be ready (0): This shows the number of connections that are in the process of being prepared for use. A value of 0 suggests that there are no connections waiting to be made ready.
These counters provide a comprehensive view of how the connection pooling is performing, indicating the efficiency, usage patterns, and current state of the connections managed by the Microsoft.Data.SqlClient.
One thing that caught my attention is the counter Number of unique connection strings waiting for pruning. It means that if there have been no connections for a certain period, the connection pool may be pruned (eliminated), and the first connection made afterwards will take some time (seconds) while the pool is recreated, for example, at night when there might be no active workload:
Idle Connection Removal: Connections are removed from the pool after being idle for approximately 4-8 minutes, or if a severed connection with the server is detected.
Minimum Pool Size: If the Min Pool Size is not specified or set to zero in the connection string, the connections in the pool will be closed after a period of inactivity. However, if Min Pool Size is greater than zero, the connection pool is not destroyed until the AppDomain is unloaded and the process ends. This implies that as long as the minimum pool size is maintained, the pool itself remains active.
We can find useful information about this in the Microsoft.Data.SqlClient source, in the file SqlClient-main\SqlClient-main\src\Microsoft.Data.SqlClient\src\Microsoft\Data\ProviderBase\DbConnectionPoolGroup.cs:
Line 50: private const int PoolGroupStateDisabled = 4; // factory pool entry pruning method
Line 268: // Empty pool during pruning indicates zero or low activity, but
Line 293: // must be pruning thread to change state and no connections
Line 294: // otherwise pruning thread risks making entry disabled soon after user calls ClearPool
These parameters work together to manage the lifecycle of connection pools and their resources efficiently, balancing the need for ready connections with system resource optimization. The actual removal of an entire connection pool (and its associated resources) depends on these settings and the application’s runtime behavior. The documentation does not specify a fixed interval for the complete removal of an entire connection pool, as it is contingent on these dynamic factors.
To conclude this article, I would like to conduct a test to see whether, each time I request a connection and change something in the connection string, a new connection pool is created.
For this, I have modified the code so that the connection string varies between requests and half of the connections receive a SqlConnection.ClearPool call. As we can see, inactive connection pools now show up in the counters.
2023-11-26 10:34:18.564: Actual active connections currently made to servers 16
2023-11-26 10:34:18.565: Active connections retrieved from the connection pool 11
2023-11-26 10:34:18.566: Number of connections not using connection pooling 0
2023-11-26 10:34:18.566: Number of connections managed by the connection pool 16
2023-11-26 10:34:18.567: Number of active unique connection strings 99
2023-11-26 10:34:18.567: Number of unique connection strings waiting for pruning 0
2023-11-26 10:34:18.568: Number of active connection pools 55
2023-11-26 10:34:18.568: Number of inactive connection pools 150
2023-11-26 10:34:18.569: Number of active connections 11
2023-11-26 10:34:18.569: Number of ready connections in the connection pool 5
2023-11-26 10:34:18.570: Number of connections currently waiting to be ready 0
Source code
using System;
using Microsoft.Data.SqlClient;
using System.Threading;
using System.IO;
using System.Diagnostics;
namespace HealthCheck
{
class ClsCheck
{
const string LogFolder = @"C:\temp\Mydata";
const string LogFilePath = LogFolder + @"\logCheck.log";
public void Main(Boolean bSingle=true, Boolean bDifferentConnectionString=false)
{
int lMaxConn = 100;
int lMinConn = 0;
if(bSingle)
{
lMaxConn = 1;
lMinConn = 1;
}
string connectionString = "Server=tcp:servername.database.windows.net,1433;User Id=username@microsoft.com;Password=Pwd!;Initial Catalog=test;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=5;Pooling=true;Max Pool size=" + lMaxConn.ToString() + ";Min Pool Size=" + lMinConn.ToString() + ";ConnectRetryCount=3;ConnectRetryInterval=10;Authentication=Active Directory Password;PoolBlockingPeriod=NeverBlock;Connection Lifetime=5;Application Name=ConnTest";
Stopwatch stopWatch = new Stopwatch();
SqlConnection[] oConnection = new SqlConnection[lMaxConn];
int lActivePool = -1;
string sConnectionStringDummy = connectionString;
DeleteDirectoryIfExists(LogFolder);
ClsEvents.EventCounterListener oClsEvents = new ClsEvents.EventCounterListener();
//ClsEvents.SqlClientListener olistener = new ClsEvents.SqlClientListener();
while (true)
{
if (bSingle)
{
lActivePool = 0;
sConnectionStringDummy = connectionString;
}
else
{
lActivePool++;
if (lActivePool == (lMaxConn-1))
{
lActivePool = 0;
// Clear the pool for half of the connections so that inactive connection pools
// start showing up in the counters.
for (int i = 0; i < (lMaxConn / 2); i++)
{
if (oConnection[i] != null)
{
SqlConnection.ClearPool(oConnection[i]);
}
}
}
if (bDifferentConnectionString)
{
// Any textual difference in the connection string produces a separate connection pool;
// the keyword used here is an arbitrary illustrative choice.
sConnectionStringDummy = connectionString + ";Workstation ID=Pool" + lActivePool.ToString();
}
}
// Close the previous connection for this slot (if any) before reopening it, so that
// connections are returned to the pool once the full set has been opened.
if (oConnection[lActivePool] != null)
{
oConnection[lActivePool].Close();
}
stopWatch.Start();
oConnection[lActivePool] = GetConnection(sConnectionStringDummy);
if (oConnection[lActivePool] != null)
{
ExecuteQuery(oConnection[lActivePool]);
}
LogExecutionTime(stopWatch, "Open and query (pool " + lActivePool.ToString() + ")");
}
}
static SqlConnection GetConnection(string connectionString)
{
SqlConnection connection = null;
int retries = 0;
while (true)
{
try
{
connection = new SqlConnection(connectionString);
connection.Open();
break;
}
catch (Exception ex)
{
retries++;
if (retries >= 5)
{
Log($"Maximum number of retries reached. Error: " + ex.Message);
break;
}
Log($"Error connecting to the database. Retrying in " + retries + " seconds...");
Thread.Sleep(retries * 1000);
}
}
return connection;
}
static void Log(string message)
{
var ahora = DateTime.Now;
string logMessage = $"{ahora.ToString("yyyy-MM-dd HH:mm:ss.fff")}: {message}";
//Console.WriteLine(logMessage);
try
{
using (FileStream stream = new FileStream(LogFilePath, FileMode.Append, FileAccess.Write, FileShare.ReadWrite))
{
using (StreamWriter writer = new StreamWriter(stream))
{
writer.WriteLine(logMessage);
}
}
}
catch (IOException ex)
{
Console.WriteLine($"Error writing in the log file: {ex.Message}");
}
}
static void ExecuteQuery(SqlConnection connection)
{
int retries = 0;
while (true)
{
try
{
using (SqlCommand command = new SqlCommand("SELECT 1", connection))
{
command.CommandTimeout = 5;
object result = command.ExecuteScalar();
}
break;
}
catch (Exception ex)
{
retries++;
if (retries >= 5)
{
Log($"Maximum number of retries reached. Error: " + ex.Message);
break;
}
Log($"Error executing the query. Retrying in " + retries + " seconds...");
Thread.Sleep(retries * 1000);
}
}
}
static void LogExecutionTime(Stopwatch stopWatch, string action)
{
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}",
ts.Hours, ts.Minutes, ts.Seconds,
ts.Milliseconds / 10);
Log($"{action} - {elapsedTime}");
stopWatch.Reset();
}
public static void DeleteDirectoryIfExists(string path)
{
try
{
if (Directory.Exists(path))
{
Directory.Delete(path, true);
}
Directory.CreateDirectory(path);
}
catch (Exception ex)
{
Console.WriteLine($"Error deleting the folder: {ex.Message}");
}
}
}
}
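The ClsEvents.EventCounterListener helper referenced in the code above is not shown in the listing. Below is a minimal sketch of what such a listener could look like, assuming the standard Microsoft.Data.SqlClient event counters exposed through the Microsoft.Data.SqlClient.EventSource provider; the counter payload fields (DisplayName, Mean, Increment) follow the usual .NET EventCounters conventions, and the article's Log method could be used in place of Console.WriteLine.

using System;
using System.Collections.Generic;
using System.Diagnostics.Tracing;

namespace HealthCheck
{
    public class ClsEvents
    {
        public class EventCounterListener : EventListener
        {
            protected override void OnEventSourceCreated(EventSource eventSource)
            {
                // Subscribe only to the SqlClient event source and request counter updates every second.
                if (eventSource.Name == "Microsoft.Data.SqlClient.EventSource")
                {
                    var options = new Dictionary<string, string> { { "EventCounterIntervalSec", "1" } };
                    EnableEvents(eventSource, EventLevel.Informational, EventKeywords.All, options);
                }
            }

            protected override void OnEventWritten(EventWrittenEventArgs eventData)
            {
                // Counter updates arrive as "EventCounters" events whose first payload item
                // is a dictionary describing a single counter.
                if (eventData.EventName != "EventCounters" || eventData.Payload == null || eventData.Payload.Count == 0)
                {
                    return;
                }
                if (eventData.Payload[0] is IDictionary<string, object> counter &&
                    counter.TryGetValue("DisplayName", out object name))
                {
                    // Polling counters report a Mean; incrementing counters report an Increment.
                    object value = counter.ContainsKey("Mean") ? counter["Mean"]
                                 : counter.ContainsKey("Increment") ? counter["Increment"] : null;
                    Console.WriteLine($"{DateTime.Now:yyyy-MM-dd HH:mm:ss.fff}: {name} {value}");
                }
            }
        }
    }
}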
Introduction
A Practical Guide for Beginners: Azure OpenAI with JavaScript and TypeScript is an essential starting point for exploring Artificial Intelligence in the Azure cloud. This guide will be divided into three parts, covering how to create the Azure OpenAI Service resource, how to implement the model created in Azure OpenAI Studio, and finally, how to consume this resource in a Node.js/TypeScript application. This series will help you learn the fundamentals so that you can start developing your applications with Azure OpenAI Service. Whether you are a beginner or an experienced developer, discover how to create intelligent applications and unlock the potential of AI with ease.
Responsible AI
Before we start discussing Azure OpenAI Service, it's important to highlight Microsoft's strong commitment to the responsible use of Artificial Intelligence. Microsoft is committed to ensuring that AI is used in a responsible and ethical manner, and it works with the AI community to develop and share best practices and tools, incorporating six core principles:
Fairness
Inclusivity
Reliability and Safety
Transparency
Security and Privacy
Accountability
If you want to learn more about Microsoft's commitment to Responsible AI, you can access the link Microsoft AI Principles.
Now, we can proceed with the article!
Understand Azure OpenAI Service
Azure OpenAI Service provides access to advanced OpenAI language models such as GPT-4, GPT-3.5-Turbo, and Embeddings via a REST API. The GPT-4 and GPT-3.5-Turbo models are now available for general use, allowing adaptation for tasks such as content generation, summarization, semantic search, and natural language translation to code. Users can access the service through REST APIs, the Python SDK, or Azure OpenAI Studio.
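Although this series focuses on JavaScript and TypeScript, the REST surface is language-agnostic. As a hedged illustration, the following C# sketch calls a chat completions deployment over REST; the resource name, deployment name, API version, and key are placeholders you would replace with your own values once a model is deployed (covered in the next part of this series).

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class AzureOpenAIChatSample
{
    static async Task Main()
    {
        // Placeholder values: use your own resource, deployment, API version, and key.
        string endpoint = "https://<your-resource>.openai.azure.com";
        string deployment = "<your-deployment-name>";
        string apiVersion = "<api-version>";
        string apiKey = "<your-key>";

        // A minimal chat completions payload with a single user message.
        string body = @"{ ""messages"": [ { ""role"": ""user"", ""content"": ""Hello, Azure OpenAI!"" } ] }";

        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Post,
            $"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={apiVersion}");
        request.Headers.Add("api-key", apiKey);
        request.Content = new StringContent(body, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}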
To learn more about the models available in Azure OpenAI Service, you can access them through the link Azure OpenAI Service models.
Create the Azure OpenAI Service Resource
The use of Azure OpenAI Service is limited. Therefore, it is necessary to request access to the service at Azure OpenAI Service. Once you have approval, you can start using and testing the service!
Once your access is approved, go to the Azure Portal and let's create the Azure OpenAI resource. To do this, follow the steps below:
Step 01: Click on the Create a resource button.
Step 02: In the search box, type Azure OpenAI and then click Create.
Step 03: On the resource creation screen, fill in the fields as follows:
Note that in the Pricing tier field, you can test Azure OpenAI Service for free but with some limitations. To access all features, you should choose a paid plan. For more pricing information, access the link Azure OpenAI Service pricing.
Step 04: Under the Network tab, choose the option All networks, including the internet, can access this resource, and then click Next.
Step 05: After completing all the steps, click the Create button to create the resource.
Step 06: Wait a few minutes for the resource to be created.
Next steps
In the next article, we will learn how to deploy a model on the Azure OpenAI Service. This model will allow us to consume the Azure OpenAI Service directly in our code.
Oh, I almost forgot to mention! Don't forget to subscribe to my YouTube Channel! In 2023/2024, there will be many exciting new things on the channel!
Some of the upcoming content includes:
Microsoft Learn Live Sessions
Weekly Tutorials on Node.js, TypeScript, & JavaScript
And much more!
If you enjoy this kind of content, be sure to subscribe and hit the notification bell to be notified when new videos are released. We already have an amazing new series coming up on the YouTube channel this week.
“This article introduces various DB technologies ranging from the latest version of SQL Server, Azure SQL, Business Intelligence, to Machine Learning.”
(In Korean: SQL Server 최신 버전, Azure SQL, Business Intelligence, Machine Learning에 이르기까지 다양한 DB기술을 소개합니다.)
*Relevant Activity: I have been writing a series of articles about new features of SQL Server 2022 on my website: 김정선의 Data 이야기 (visualdb.net)
“Databases in a company are the place where information is stored to drive the company’s production processes. Tera of data, dozens of databases, millions of rows, the entire activity depends on this, and information security can no longer be an option, we have to think about security by design and security by default.”
“If you work with SQL Server, it’s really helpful to understand advanced information about how indexing works, memory management, etc. You’ll find tips about creating better indexes & queries and how to debug issues better.”
“This is a practical course that enables you to understand the power platform comprehensively and organically. Through the practical course, you can learn each element systematically while covering various practical aspects, making it a very suitable course for beginners and intermediate learners. I recommend this course.”
(In Korean: 파워플랫폼에 대한 전체적인 이해를 종합적이고 유기적으로 가능하게 하는 실습과정입니다. 실습과정을 통해서 실무적인 요소를 두루 다루면서도 체계적으로 하나하나 익힐 수 있어서 초보자와 중급자 모두에게 매우 적절한 실습과정이어서 추천했습니다.)
The Microsoft Ignite 2023 conference has been a showcase of groundbreaking AI advancements, offering a wealth of opportunities for the enhancement of ERP (Enterprise Resource Planning) solutions. These technological strides are more than just innovations; they are pathways to augment the functionality, efficiency, and user experience of ERP systems.
Image: Anupam Sharma & Rupa Mantravadi showcasing Copilot Scenarios in Dynamics 365 AI ERP Applications
Top 10 Insights and Opportunities for ERP Applications:
This blog explores how the top 10 key insights from Ignite 2023 can be leveraged to enhance ERP products, benefit their users, assist ISVs (Independent Software Vendors) in developing custom solutions, and support the core business users of these ERP systems.
1. Revolutionizing Productivity with Copilot in Microsoft 365:
Microsoft 365 Copilot:
This innovation is a testament to Microsoft’s commitment to enhancing workplace efficiency and creativity. Its integration across applications like Outlook, Excel, and Teams exemplifies Microsoft’s vision for a seamlessly interconnected work environment.
Opportunity for ERP:
Incorporating ERP-focused Copilot plugins into M365 applications can revolutionize ERP workflows through the automation of routine tasks and the provision of intelligent insights from the ERP system for decision-making within the productivity suite. Such integration is poised to significantly boost time efficiency and improve accuracy for ERP system users.
2. Extending Copilot’s Reach:
Diverse Integration:
Expanding Copilot to aid in the completion of tasks across diverse roles and functions, such as business processes and IT administration, represents a strategic initiative to integrate AI thoroughly into all facets of work.
Opportunity:
Expanding Copilot into ERP systems can provide tailored assistance in various ERP areas like financial planning, supply chain management, operations, human resources, commerce, etc., offering a more intuitive and guided user experience.
3. Strategic Enhancements:
Bing Chat Enterprise Transition:
The transformation of Bing Chat into Copilot (copilot.microsoft.com) offers an opportunity for ERP systems to utilize enhanced external signals for improved demand planning, supply chain risk management, and support functions.
Copilot Studio:
This tool can be utilized by ISVs and ERP developers, enabling them to craft tailored AI solutions that integrate flawlessly with core ERP systems, thereby improving functionality and user experience. It also offers administrative capabilities to refine core ERP Copilot skills with additional grounding, topics, and more.
4. Data and AI Synergy with Microsoft Fabric:
Unified Data Handling:
Integrating Microsoft Fabric with AI tools and making this GA (Generally Available) is a significant step towards enhancing data-driven decision-making.
Opportunity:
Microsoft Fabric’s integration can enhance data-driven decision-making in ERP systems. It can unify data from various sources, providing a more comprehensive view for analytics and reporting within ERP systems.
5. Advancements in Azure AI platform:
Model-as-a-Service:
Simplifies the integration and customization of AI models, marking a significant advancement in AI application development.
New AI Models:
Introduction of GPT-3.5 Turbo, GPT-4 Turbo, and DALL·E 3 revolutionizes AI application development.
Opportunity:
The integration of advanced AI models like GPT-3.5 Turbo and GPT-4 Turbo into ERP systems can enable more sophisticated data analysis using tools like advanced data analytics, code interpreters, and predictive modeling, aiding in strategic business planning and forecasting.
6. Enhanced Cloud Infrastructure and NVIDIA Collaboration:
Azure Maia and Cobalt Chips & Azure Boost:
These advancements supercharge AI workloads and improve storage and networking, enhancing ERP system efficiency and scalability.
NVIDIA AI Foundry Service:
This partnership boosts AI model development, leveraging NVIDIA’s tools with Azure’s infrastructure.
Opportunity:
These innovations enable faster processing and robust AI capabilities in ERP solutions, facilitating advanced analytics and decision-making.
7. Ethical AI Deployment:
Responsible AI Initiatives:
Emphasizing ethical use with initiatives like the Copilot Copyright Commitment and Azure AI Content Safety.
Opportunity:
The focus on responsible AI use ensures that ERP solutions remain compliant with legal standards and ethical guidelines, building trust among users and stakeholders.
8. AI Integration in Windows Experiences:
Windows 11 AI Tools:
Aiming to make AI more accessible and position Windows as the prime platform for AI development.
Opportunity:
Enhanced AI tools in Windows can improve the accessibility and usability of ERP solutions, offering a more seamless and integrated experience across devices and platforms.
9. Enhanced AI-Driven Security Solutions:
Microsoft Sentinel and Microsoft Defender XDR Integration:
Creates a unified security operations platform, enhancing threat protection.
Opportunity:
The integration of advanced security solutions can bolster the security of ERP systems, protecting sensitive business data and ensuring compliance with industry standards.
10. AI Skill Development and Credentials:
Microsoft Applied Skills Credentials:
Covering various aspects of AI, these credentials are crucial for validating expertise in this rapidly evolving field.
Opportunity:
Providing training and credentials in AI can empower ERP professionals to leverage AI capabilities effectively within Dynamics 365, enhancing the overall value and utility of ERP solutions.
Image: This year's theme highlights Microsoft's full embrace of its identity as the "Copilots company."
Microsoft Ignite 2023 has opened a new chapter in the evolution of ERP applications, with AI at its core. By embracing these AI advancements, ERP solutions can be significantly enhanced, redefining how businesses leverage these systems for strategic and operational excellence.
Learn More
Interested in learning more about Copilot’s in-app help guidance? Here are your next steps:
Read the Copilot Product Documentation:
For comprehensive and detailed information about Copilot's capabilities and functionalities, be sure to check out our product documentation. You'll find in-depth insights into how Copilot can enhance your experience with Dynamics 365 Supply Chain Management.
Read the Responsible AI FAQ for Copilot and its capability of generative help and guidance.
If you’re already using Dynamics 365 Supply Chain Management, you can enable and experience Copilot’s capabilities to streamline your operations. Here’s how:
Step 1: Enable Copilot Feature: Follow our documentation for existing customers to learn how to enable this feature. Once enabled, you’ll have access to Copilot’s powerful in-app help guidance within Dynamics 365 Supply Chain Management.
Step 2: Access Copilot – Locate the Copilot icon at the top of your screen within Dynamics 365 Supply Chain Management, then click on it to open the conversational sidecar experience. Copilot will introduce itself and encourage you to ask questions.
Step 3: Pose Your Question – In any uncommon or challenging task within the application, simply ask Copilot for guidance. For better results, especially when seeking documentation-related in-app help, consider starting your questions with ‘How.’
Step 4: Instant Guidance – Copilot will provide you with step-by-step guidance; all responses are grounded by our public documentation.
Please note that Copilot’s capabilities are exclusively available to existing Dynamics 365 Supply Chain Management customers. If you’re one of them, don’t miss out on the opportunity to enhance your user experience and streamline your operations with Copilot.
Join Copilot for finance and operations apps – Yammer
Stay informed about the most recent Copilot updates by becoming a member of our Copilot for finance and operations Yammer Group. Share your feedback and be the first to know about the latest enhancements.
Introduction:
In the ever-changing landscape of contemporary business, effective supply chain management is paramount. Central to this complex system is the critical process of demand planning and forecasting, which profoundly impacts a company’s capacity to fulfil customer requirements, optimize inventory, and maintain a competitive edge. This article explores the significance of demand forecasting and how Microsoft seamlessly incorporates it into its new demand planning in Dynamics 365 Supply Chain Management.
The Crucial Role of Demand Planning and Forecasting:
Precise demand planning and forecasting play a pivotal role in diverse facets of business operations, yielding advantages throughout the entire supply chain:
Accurate Inventory Management: Precise demand forecasting optimizes inventory levels, finding the balance between excess stock and stockouts.
Customer Satisfaction: Accurate forecasting ensures products are available in the right place at the right time, enhancing customer satisfaction and brand loyalty.
Cost Reduction: Effective demand forecasting minimizes holding costs for excess inventory and reduces costs associated with stockouts.
Resource Allocation: Anticipating demand aids in efficiently allocating resources, including labor, production capacity, and raw materials.
Improved Collaboration: Accurate forecasting fosters collaboration across the supply chain, enhancing coordination between suppliers, manufacturers, and retailers.
Long-term Planning: Strategic planning benefits from accurate forecasting, aiding businesses in adapting to changing market conditions and trends.
Competitive Advantage: Organizations responding effectively to market changes gain a competitive edge.
Image: Snapshot of demand planning in Dynamics Supply Chain Management
Exploring a Range of Demand Forecasting Models:
Below is a concise examination of diverse forecasting models employed in general Supply Chain demand planning, accompanied by insights into their potential applications within a business context.
Qualitative Forecasting:
Delphi Method: In the tech industry, when forecasting the demand for a new product, experts from different fields like engineering, marketing, and finance may anonymously provide input through multiple rounds of questionnaires.
Market Research: A smartphone company might conduct extensive market research, including customer surveys and competitor analysis, to forecast demand for its next flagship device.
Time Series Analysis:
Prophet: In the retail sector, Prophet time series analysis can be employed for precise demand forecasting, enabling efficient inventory management, minimizing stockouts, and maximizing profitability.
Exponential Smoothing: A fashion retailer might use exponential smoothing to predict future sales based on the recent trend in sales for specific clothing items.
ARIMA: A beverage company could apply ARIMA models to forecast demand for seasonal drinks during holidays.
Best Fit: In healthcare, 'Best Fit' involves selecting the statistical model that best captures the patterns in the dataset. Applying best-fit time series analysis anticipates patient needs, optimizes resource management, and enhances operational effectiveness for improved outcomes and satisfaction.
ETS (Error-Trend-Seasonality): In hospitality and tourism, ETS models are used to forecast hotel bookings and tourism trends, enabling businesses to optimize pricing strategies and allocate resources efficiently based on seasonal fluctuations.
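To make these time series methods more concrete, simple exponential smoothing (the simplest member of the ETS family) computes each new forecast as a weighted average of the latest observation and the previous forecast:

\hat{y}_{t+1} = \alpha y_t + (1 - \alpha)\hat{y}_t, \qquad 0 < \alpha \le 1

Here y_t is the latest observed demand, \hat{y}_t is the previous forecast, and the smoothing factor \alpha controls responsiveness: a larger \alpha reacts faster to recent demand changes, while a smaller \alpha produces a smoother, more stable forecast.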
Causal Models:
Regression Analysis: An automobile manufacturer might use regression analysis to predict the impact of advertising spending on the demand for a particular car model.
Econometric Models: In the energy sector, econometric models could be employed to forecast electricity demand based on economic indicators such as GDP and population growth.
Machine Learning Models:
Neural Networks: An e-commerce platform might use neural networks to predict customer demand for various products by considering factors like browsing history, purchase patterns, and customer demographics.
Random Forest: A retail chain could employ a random forest model to forecast the demand for different product categories based on historical sales data and external factors like economic trends.
Judgmental Forecasting:
Expert Input: In the pharmaceutical industry, stakeholders may provide expert opinions to forecast the demand for a new medication, considering factors like regulatory approval, market acceptance, and healthcare trends.
These forecasting models offer diverse approaches, catering to various industries and scenarios.
Demand forecasting in Dynamics 365 Supply Chain Management
In the context of demand planning in Dynamics 365 Supply Chain Management, the efficiency of demand forecasting is bolstered by a robust suite of features. The system taps into the potential of built-in out-of-the-box AI-powered algorithms, integrates existing models, utilizes the capabilities of Custom Azure Machine Learning (AML), advanced forecasting models, forecasting profiles, and streamlined data hierarchy management to enhance the precision of demand planning. Through the seamless integration of these elements, the platform guarantees the accuracy of demand forecasts, facilitates the smooth identification and handling of outliers, and adeptly oversees various facets of the supply chain.
Built-in AI Algorithms:
Access to pre-configured, out-of-the-box AI-based models like ARIMA, Prophet, ETS, and outlier removal without the need for additional configuration provides users with a powerful and user-friendly forecasting solution. This feature enables businesses to quickly leverage advanced forecasting techniques, enhancing the accuracy of predictions.
Image: Prophet Forecast – Output
Advanced Forecasting Models:
The platform supports advanced forecasting methods such as ARIMA, ETS (Error-Trend-Seasonality), Prophet, and Best fit, ensuring a high degree of accuracy in demand forecasts. Leveraging the capabilities of custom Azure Machine Learning (AML), this flexibility empowers businesses to customize their forecasting strategies based on the distinct characteristics of their products and the dynamics of their market.
Image: Advance Forecasting Model selection
Use your own Forecast Models
If you’ve created your own forecasting models or utilize Azure Machine Learning within Dynamics 365 Supply Chain Management, you can leverage the collaborative editing features within the app to directly invoke your custom models. This allows you to seamlessly integrate your pre-built models alongside the out-of-the-box models provided, tailoring your forecast for optimal accuracy.
Forecasting Profiles:
Users can craft and manage forecasting profiles, streamlining calculations and facilitating the application of outlier removal techniques. This feature adds a layer of customization to the forecasting process, allowing organizations to adapt to specific business requirements and scenarios.
Outlier Detection and Removal/Handling:
Detecting outliers in demand planning is a vital component of forecasting and supply chain management. The identification of outliers plays a key role in enhancing the precision of demand forecasts and ensuring the appropriate handling of anomalies. This process is streamlined through the intuitive outlier detection capabilities embedded in demand planning within Dynamics 365 Supply Chain Management. Detecting and eliminating outliers is a pivotal stage in data preprocessing, essential for enhancing the accuracy and dependability of statistical analyses and machine learning models.
Outlier Removal/Handling Techniques:
Leverage sophisticated approaches, notably the Interquartile Range (IQR) and Seasonal-Trend Decomposition using LOESS (STL), to adeptly identify and proactively address outliers for enhanced precision and effectiveness in data analysis.
Image: Outlier Configuration – Interquartile Range (IQR)
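To illustrate the idea behind the IQR technique (independent of how the product implements it), an observation is typically flagged as an outlier when it falls outside the interval [Q1 - 1.5 × IQR, Q3 + 1.5 × IQR], where IQR = Q3 - Q1. A small, self-contained C# sketch of that rule:

using System;
using System.Linq;

static class IqrOutlierSketch
{
    // Flags demand values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] as outliers.
    public static bool[] FlagOutliers(double[] demand)
    {
        double[] sorted = demand.OrderBy(v => v).ToArray();
        double q1 = Percentile(sorted, 0.25);
        double q3 = Percentile(sorted, 0.75);
        double iqr = q3 - q1;
        double lower = q1 - 1.5 * iqr;
        double upper = q3 + 1.5 * iqr;
        return demand.Select(v => v < lower || v > upper).ToArray();
    }

    // Linear-interpolation percentile on an already sorted array.
    private static double Percentile(double[] sorted, double p)
    {
        double rank = p * (sorted.Length - 1);
        int lo = (int)Math.Floor(rank);
        int hi = (int)Math.Ceiling(rank);
        return sorted[lo] + (rank - lo) * (sorted[hi] - sorted[lo]);
    }
}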
Data Hierarchy Management:
Efficient data hierarchy management is crucial for granular forecasting and optimizing supply chain operations. Demand planning in Microsoft Dynamics 365 Supply Chain Management provides comprehensive hierarchy management across products, locations, and time:
Product Hierarchy: A retail electronics company organizes its product catalog with a hierarchical structure, starting broadly with categories like smartphones and laptops and becoming more granular with specific brands and models. This detailed hierarchy enables the company to forecast demand patterns at a specific product level, enhancing inventory management.
Image: Product Hierarchy set up
Location Hierarchy: The company optimizes distribution based on a location hierarchy that spans global, regional, and local levels. Understanding variations in demand by region helps in strategic decision-making for supply chain management, ensuring efficient allocation of inventory to meet the unique demands of each location.
Image: Location Hierarchy set up
Time Hierarchy: Capturing seasonality and trends through time hierarchy management, the company analyzes daily, weekly, monthly, and yearly variations in demand. This temporal sensitivity enhances the accuracy of predictions, allowing the business to adjust inventory levels and marketing strategies to meet fluctuating consumer preferences throughout the year.
Image: Time Hierarchy set up
Enhanced Integration with data:
Demand planning in Dynamics 365 Supply Chain Management seamlessly integrates your data, further enhancing the capabilities of the demand forecasting solution:
Image: Data Integration options
Virtual Entities:
The platform supports the extension or creation of custom entities within Microsoft Finance and Operations, enabling integration with additional data sources. This flexibility ensures that businesses can incorporate diverse datasets into their demand forecasting models.
Finance and Operations Integration:
The smooth export of planned data back into Microsoft Finance and Operations marks the culmination of the supply chain planning cycle. This closed-loop integration guarantees that the generated demand forecasts are directly incorporated into the broader finance and operations context, promoting consistency and accuracy throughout the organization. Users have the flexibility to plan across various instances of Dynamics 365 Finance and Operations at a higher level and selectively export specific parts of the plan to designated locations.
Image: Finance and Operations integration
Azure Data Explorer (ADX):
Efficient data storage, aggregation, and disaggregation in Azure Data Explorer optimize performance. This integration allows businesses to harness the power of Azure’s data capabilities for more robust forecasting and analytics.
Conclusion:
Microsoft Dynamics 365 Supply Chain Management’s demand planning feature provides a comprehensive solution for demand forecasting. This solution incorporates outlier detection, advanced forecasting models, seasonality analysis, and robust scenario planning capabilities. The integrated approach enables businesses to make well-informed decisions, adapt to changing conditions, and maintain a competitive edge in a dynamic market environment. Demand planning and forecasting are crucial aspects of effective supply chain management, enhancing agility, responsiveness, and competitiveness in today’s dynamic marketplace.
The article explores various forecasting models and demonstrates their real-world applications. Dynamics 365’s features, such as built-in AI algorithms, advanced forecasting models, forecasting profiles, outlier detection, and data hierarchy management, are highlighted as key elements that improve accuracy.
Additionally, the platform’s integration capabilities with Azure Data Explorer and Dynamics 365 Supply Chain Management are emphasized, creating a closed-loop system that optimizes forecasting accuracy and extends its impact across the organization.
To navigate the complexities of supply chain management, businesses can strategically leverage Dynamics 365 Supply Chain Management. This proactive approach allows them to adapt to future challenges, gain a competitive edge, and foster sustained success in the ever-evolving landscape of modern business.
Learn More
Access the demand planning solution, documentation, and workshop:
To access the demand planning application and learn more about its features, follow the links provided. Additionally, don't miss the upcoming demand planning workshop in Denmark to dive deeper into this transformative tool.
Access demand planning in Dynamics 365 Supply Chain Management:
To install the latest version of the demand planning application, ensure that you are using Dynamics 365 Supply Chain Management, and then visit the Power Platform Admin Center. Search for the Dynamics 365 demand planning application and follow the installation process.
If you're interested in learning more about the demand planning application, we've prepared an extensive collection of informative documents for your review and easy access.
We've also created a series of demo videos that not only guide you through the application but also showcase the many features available in the demand planning application.
The forthcoming demand planning workshop, to be held at Microsoft's facility in Lyngby, Denmark, is geared towards introducing the new demand planning application to both customers and partners. It will also provide an in-depth exploration of the product's features and capabilities.
The workshop will also cover various important topics, including:
Exploring the contents of the October 21st Public Preview and the December update.
Engaging hands-on lab activities.
Participating in a user experience (UX) study or exercise.
Understanding how these developments align with the broader context of Supply Chain Planning.
An overview of Copilot for demand planning.
Customer insights and feedback.
Please note that a more detailed agenda will be provided to attendees as the workshop date approaches.
Join our Yammer group to stay updated with monthly evaluations, scripts, and videos related to the demand planning application. Your journey to supply chain transformation begins here.
Filter out CreateFile events from the Event Grid subscription.
This filtering reduces the traffic coming from Event Grid and optimizes the ingestion of events into Azure Data Explorer.
You can read more about how to use the SDK correctly and avoid empty file errors here.
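As a minimal sketch of that filtering with the Azure CLI (the event subscription name and storage account resource ID are placeholders, and the data.api filter value is an assumption based on the ADLS Gen2 event schema), you can add an advanced filter to the Event Grid subscription:

# Exclude CreateFile events so that only the final blob-created events
# (for example FlushWithClose) are forwarded to the Azure Data Explorer data connection.
az eventgrid event-subscription update \
  --name <event-subscription-name> \
  --source-resource-id <storage-account-resource-id> \
  --advanced-filter data.api StringNotIn CreateFile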
Schedule & plan
Step 1: Existing clusters that do not use the functionality today will get the change immediately.
Step 2: Clusters created after the end of December 2023 will get the change.
Step 3: Current users of the flow, as well as new clusters created until the end of December 2023, will receive the change after the end of February 2024.
Deprecating the metric “Events Processed (for Event/IoT Hubs)”
This metric represents the total number of events read from Event Hubs or IoT Hub and processed by the cluster. These events can be split by status: Received, Rejected, or Processed.
Required Change
Users can instead use the metrics "Events received", "Events processed", and "Events dropped" to get the number of events that were received, processed, or dropped by each data connection, respectively.
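As a hedged example of reading the replacement metrics with the Azure CLI (the metric names EventsReceived, EventsProcessed, and EventsDropped are assumed API names derived from the display names above, and the cluster resource ID is a placeholder):

# List hourly totals over the last day for the per-data-connection ingestion metrics.
az monitor metrics list \
  --resource <adx-cluster-resource-id> \
  --metric EventsReceived EventsProcessed EventsDropped \
  --aggregation Total \
  --interval PT1H \
  --offset 1d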
This article is contributed. See the original author and article here.
Author(s): Arun Sethia is a program manager on the Azure HDInsight Customer Success Engineering (CSE) team.
Co-author: Sairam is a product manager for Azure HDInsight on AKS.
Introduction
Azure Logic Apps allows you to create and run automated workflows with little to no code. Each workflow starts with a single trigger, after which you add one or more actions. A trigger specifies the condition for running any further steps in the workflow, for example when a blob is added or updated, when an HTTP request is received, or when new data appears in a SQL database table. An action specifies a task to perform. Workflows can be stateful or stateless, depending on your Azure Logic Apps plan (Standard or Consumption).
Using workflows, you can orchestrate complex processes with multiple processing steps, triggers, and interdependencies. These steps can involve Apache Spark and Apache Flink jobs as well as integration with other Azure services.
This blog focuses on how you can add an action that triggers an Apache Spark or Apache Flink job on HDInsight on AKS from a workflow.
Azure Logic App – Orchestrate Apache Spark Job on HDInsight on AKS
In our previous blog, we discussed different options for submitting Apache Spark jobs to an HDInsight on AKS cluster. The Azure Logic Apps workflow uses the Livy Batch Job API to submit the Apache Spark job.
The following diagram shows the interaction between Azure Logic Apps, the Apache Spark cluster on HDInsight on AKS, Azure Active Directory, and Azure Key Vault. You can use other cluster shapes, such as Apache Flink or Trino, in the same way with the Azure management endpoints.
HDInsight on AKS allows you to access the Apache Spark Livy REST APIs using an OAuth token. This requires a Microsoft Entra service principal that is granted access to the HDInsight on AKS cluster (RBAC support is coming soon). The client ID (appId) and secret (password) of this principal can be stored in Azure Key Vault (you can use various design patterns to rotate secrets).
Based on your business scenario, you can start (trigger) your workflow; in this example the workflow starts when an HTTP request is received. The workflow connects to Key Vault using a system-assigned managed identity (or you can use a user-assigned managed identity) to retrieve the secret and client ID of the service principal created to access the HDInsight on AKS cluster. The workflow then retrieves an OAuth token using the client credentials flow (secret, client ID, and the scope https://hilo.azurehdinsight.net/.default), as sketched below.
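The two underlying HTTP calls can be sketched with curl so the token request and Livy request are easy to verify outside of Logic Apps (tenant ID, service principal credentials, Livy endpoint, storage path, and class name are all placeholders, and jq is used only to extract the token):

# Acquire an OAuth token for the service principal with the HDInsight on AKS scope.
TOKEN=$(curl -s -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<service-principal-app-id>" \
  -d "client_secret=<service-principal-secret>" \
  -d "scope=https://hilo.azurehdinsight.net/.default" | jq -r '.access_token')

# Submit the Spark job through the Livy Batch Job API using the standard batches payload.
curl -X POST "<spark-cluster-livy-endpoint>/batches" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "file": "abfs://<container>@<storage-account>.dfs.core.windows.net/jars/spark-app.jar",
        "className": "com.example.SparkApp",
        "name": "logic-app-spark-batch"
      }'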
The final workflow is as follows; the source code and sample payload are available on GitHub.
Azure Logic App – Orchestrate Apache Flink Job on HDInsight on AKS
HDInsight on AKS provides user-friendly ARM REST APIs to submit and manage Apache Flink jobs, so users can submit Flink jobs from any Azure service. For example, you can orchestrate a data pipeline with Azure Data Factory Managed Airflow, or, as in this blog, use an Azure Logic Apps workflow to manage a complex business workflow.
The following diagram shows the interaction between Azure Logic Apps, the Apache Flink cluster on HDInsight on AKS, Azure Active Directory, and Azure Key Vault.
To invoke the ARM REST APIs, we need a Microsoft Entra service principal that is assigned the Contributor role on the specific Apache Flink cluster on HDInsight on AKS. (The resource ID can be retrieved from the portal: go to the cluster page, open the JSON view, and copy the value of "id".)
az ad sp create-for-rbac -n <service-principal-name> --role Contributor --scopes <flink-cluster-resource-id>
The client ID (appId) and secret (password) of this principal can be stored in Azure Key Vault (you can use various design patterns to rotate secrets).
The workflow connects to Key Vault using a system-assigned managed identity (or you can use a user-assigned managed identity) to retrieve the secret and client ID of the service principal created to access the HDInsight on AKS cluster. The workflow then retrieves an OAuth token using the client credentials flow (secret, client ID, and the scope https://management.azure.com/.default), as sketched below.
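The equivalent calls can be sketched with curl under stated assumptions: the tenant ID, service principal credentials, cluster resource ID, and api-version are placeholders, and the runJob action name and payload fields are assumptions to verify against the current HDInsight on AKS Flink job management documentation:

# Acquire an OAuth token for the service principal with the ARM scope.
TOKEN=$(curl -s -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<service-principal-app-id>" \
  -d "client_secret=<service-principal-secret>" \
  -d "scope=https://management.azure.com/.default" | jq -r '.access_token')

# Invoke the assumed job management action on the Flink cluster resource through ARM.
curl -X POST "https://management.azure.com<flink-cluster-resource-id>/runJob?api-version=<api-version>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "properties": {
          "jobType": "FlinkJob",
          "action": "NEW",
          "jobName": "logic-app-flink-job",
          "jobJarDirectory": "abfs://<container>@<storage-account>.dfs.core.windows.net/jars",
          "jarName": "flink-app.jar",
          "entryClass": "com.example.FlinkApp"
        }
      }'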
The final workflow is as follows; the source code and sample payload are available on GitHub.
Summary
HDInsight on AKS REST APIs let you automate, orchestrate, schedule, and monitor workflows with your choice of framework. Such automation reduces complexity, shortens development cycles, and completes tasks with fewer errors.
Choose what works best for your organization, and let us know your feedback or any other Azure service integration you would like for automating and orchestrating your workloads on HDInsight on AKS.