Defender EASM – Performing a Successful Proof of Concept (PoC)


This article is contributed. See the original author and article here.

Welcome to an introduction to the concepts and the simple approach required to execute a successful Proof of Concept (PoC) for Microsoft Defender External Attack Surface Management (Defender EASM). This article serves as a high-level guide to a simple framework for evaluating Defender EASM, along with other items to consider as you work to understand the Internet-exposed digital assets that make up your external attack surface, so you can view risk through the same lens as a malicious threat actor.


 


Planning for the PoC 


To ensure success, the first step is planning. This entails understanding the value of Defender EASM, identifying the stakeholders who need to be involved, and scheduling planning sessions to determine use cases, requirements, and scope before beginning.


For example, one of the core benefits of the Defender EASM solution is that it provides high-value visibility to Security and IT (Information Technology) teams that enables them to: 



  • Identify previously unknown assets 

  • Prioritize risk 

  • Eliminate threats 

  • Extend vulnerability and exposure control beyond the firewall 


Next, you should identify all relevant stakeholders, or personas, and schedule 1-2 short planning sessions to document the tasks and expected outcomes, or requirements. These sessions will establish the definition of success for the PoC.  


Who are the common stakeholders that should participate in the initial planning sessions? The answer to that question will be unique to each organization, but some common personas include the following: 



  • Vulnerability Management Teams 

  • IT personnel responsible for Configuration Management, Patching, Asset Inventory Databases 

  • Governance, Risk, & Compliance (GRC) Teams 



  • (Optional) GRC-aligned Legal, Brand Protection, & Privacy Teams 

  • Internal Offensive Penetration Testing and Red Teams 

  • Security Operations Teams 

  • Incident Response Teams 

  • Cyber Threat Intelligence, Hunting, and Research Teams 


 


Use Cases & Requirements 


Based on the scope, you can begin collaborating with the correct people to establish use cases & requirements to meet the business goals for the PoC. The requirements should clearly define the subcomponents of the overarching business goals within the charter of your External Attack Surface Management Program. Examples of business goals and high-level supporting requirements might include: 



  • Discover Unknown Assets 



  • Find Shadow IT 



  • Discover Abandoned Assets 



  • Resulting from Mergers, Acquisitions, or Divestitures 

  • Insufficient Asset Lifecycle Management in Dev/Test/QA Environments  



  • Identification of Vulnerabilities 



  • Lack of Patching or Configuration Management 



  • Assignment of Ownership to Assets 



  • Line of Business or Subsidiary 

  • Based on Geographic Location 

  • On-Prem vs Cloud 



  • Reporting, Automation, and Defender EASM Data Integrations 




  • Use of a reporting or visualization tool, such as Power BI 

  • Logic Apps to automate management of elements of your attack surface 


 


Prerequisites to Exit the Planning Phase 



  • Completion of the Planning Phase! 

  • Configure an Azure Active Directory or personal Microsoft account. Log in or create an account here. 

  • Set up a Free 30-day Defender EASM Trial 


– Visit the following link for information related to setting up your Defender EASM attack surface today for free. 



  • Deploy & Access the Defender EASM Platform 


– Log in to Defender EASM 


– Follow the deployment Quick Start Guide 


 


Measuring Success


Determining how success will be measured establishes the criteria for a successful or failed PoC. Success and Acceptance Criteria should be established for each requirement identified. Weights may be applied to requirements, but measuring success can be as simple as writing out criteria as below:


 


Requirement: Custom Reporting


Success Criteria: As a vulnerability manager, I want to view a daily report that shows the assets with CVSSv2 and CVSSv3 scores of 10.


Acceptance Criteria:



  • Data must be exported to Kusto

  • Data must contain assets & CVSS (Common Vulnerability Scoring System) scores

  • Dashboards must be created with Power BI and accessible to users

  • Dashboard data must be updated daily


Validation: Run a test to validate that the acceptance criteria have been met.


Pass / Fail: Pass
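
For a requirement like the one above, the validation step itself can be scripted. The following is a minimal sketch, assuming the Defender EASM asset data has already been exported to an Azure Data Explorer (Kusto) database; the cluster, database, table, and column names are placeholders for whatever your export pipeline produces, and the result set is what a daily Power BI dashboard would then visualize.

// Minimal sketch (not an official schema): query exported Defender EASM data in Azure Data
// Explorer for assets carrying a CVSS score of 10. Cluster, database, table, and column
// names are placeholders to be replaced with the ones created by your own export pipeline.
using System;
using Kusto.Data;
using Kusto.Data.Common;
using Kusto.Data.Net.Client;

class CvssReportCheck
{
    static void Main()
    {
        var kcsb = new KustoConnectionStringBuilder(
                "https://<your-cluster>.<region>.kusto.windows.net", "<your-database>")
            .WithAadUserPromptAuthentication();

        using (var provider = KustoClientFactory.CreateCslQueryProvider(kcsb))
        {
            // Placeholder table and column names
            string query = @"EasmAssets
                | where CvssV2Score == 10 or CvssV3Score == 10
                | project AssetName, AssetType, CvssV2Score, CvssV3Score, LastSeen";

            using (var reader = provider.ExecuteQuery("<your-database>", query, new ClientRequestProperties()))
            {
                while (reader.Read())
                {
                    Console.WriteLine($"{reader["AssetName"]} ({reader["AssetType"]}): " +
                                      $"CVSSv2={reader["CvssV2Score"]}, CVSSv3={reader["CvssV3Score"]}");
                }
            }
        }
    }
}

If every returned row appears on the Power BI dashboard within a day, the acceptance criteria above can be marked as met.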


 


Executing the PoC 


 


Implementation and Technical Validation 


We will now look at four different use cases & requirements, define the success and acceptance criteria for each, and validate that the requirements are met by observing the outcome of each in Defender EASM. 


 


Use Case 1: Discover Unknown Assets, Finding Shadow IT 


 


Success Criteria: As a member of the Contoso GRC team, I want to identify Domain assets in our attack surface that have not been registered with the official company email address we use for domain registrations.


 


Acceptance Criteria: 



  • Defender EASM allows for searches of Domain WHOIS data that returns the “Registrant Email” field in the result set.  


Validation: 



  1. Click the “Inventory” link on the left of the main Defender EASM page. 




Figure: Launch the inventory query screen



  2. Execute a search in Defender EASM that excludes Domains registered with our official company email address of ‘domainadmin@contoso.com’ and returns all other Domains that have been registered with an email address that contains the email domain ‘contoso.com’.




Figure: Query for incorrectly registered Domain assets


 



  3. Click on one of the domains in the result set to view asset details. For example, the “woodgrovebank.com” domain.

  4. When the asset details open, confirm that the domain ‘woodgrovebank.com’ is shown in the upper left corner.

  5. Click on the “Whois” tab.

  6. Note that this Domain asset has been registered with an email address that does not match the corporate standard (i.e., “employeeName@contoso.com”) and should be investigated for the existence of Shadow IT.




Figure: WHOIS asset details


Resources:



 


Use Case 2: Abandoned Assets, Acquisitions


 


Success Criteria: As a member of the Contoso Vulnerability Management team, who just acquired Woodgrove Bank, I want to ensure acquired web sites using the domain “woodgrovebank.com” are redirected to web sites using the domain “contoso.com”.  I need to obtain results of web sites that are not redirecting as expected, as those may be abandoned web sites.


 


Acceptance Criteria:



  • Defender EASM allows for search of specific initial and final HTTP (Hypertext Transfer Protocol) response codes for Page assets

  • Defender EASM allows for search of initial and final Uniform Resource Locator (URL) for Page assets


Validation:



  1. Run a search in Defender EASM that looks for Page assets that have:

    1. Initial response codes that cause HTTP redirects (e.g., “301”, “302”)

    2. Initial URLs that contain “woodgrovebank.com”

    3. Final HTTP response codes of “200”

    4. Final URLs, post HTTP redirect, that do not contain “contoso.com”






Figure: Query for incorrect page redirection


 



  2. Click one of the Page assets in the result set to see the asset details.




Figure: Page asset overview



  3. Validate:

    1. Initial URL contains “woodgrovebank.com”

    2. Initial response code is either “301” or “302”

    3. Final URL does not contain “contoso.com”

    4. Final response code is “200”
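
To double-check any single Page asset from this result set outside of Defender EASM, a small script can walk the redirect chain by hand. The sketch below is illustrative only: “www.woodgrovebank.com” is the fictitious host from this use case, and the script simply reports whether the final hop lands on contoso.com.

// Minimal sketch: follow a redirect chain manually so each hop is visible. The host name is
// a placeholder from the fictitious scenario above; substitute one of your own Page assets.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class RedirectCheck
{
    static async Task Main()
    {
        var handler = new HttpClientHandler { AllowAutoRedirect = false };
        using var client = new HttpClient(handler);

        Uri current = new Uri("https://www.woodgrovebank.com/");
        for (int hop = 0; hop < 10; hop++)   // cap the number of hops we will follow
        {
            using HttpResponseMessage response = await client.GetAsync(current);
            int status = (int)response.StatusCode;
            Console.WriteLine($"{current} -> {status}");

            if (status == 301 || status == 302 || status == 307 || status == 308)
            {
                if (response.Headers.Location == null) break;
                current = new Uri(current, response.Headers.Location);   // follow the redirect
                continue;
            }

            bool redirectedCorrectly = status == 200 &&
                current.Host.EndsWith("contoso.com", StringComparison.OrdinalIgnoreCase);
            Console.WriteLine(redirectedCorrectly
                ? "OK: final URL is on contoso.com"
                : "Review: this page responds but is not redirected to contoso.com");
            break;
        }
    }
}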




 


Resources:



 


Use Case 3: Identification of Vulnerabilities, Lack of Patching or Configuration Management


 


Success Criteria: As a member of the Contoso Vulnerability Management team, I need the ability to retrieve a list of assets in my attack surface with high-priority vulnerabilities, along with remediation guidance.


 


Acceptance Criteria:



  • Defender EASM provides a dashboard of prioritized risks in my external attack surface

  • Defender EASM provides remediation guidance for each prioritized vulnerability

  • Defender EASM provides an exportable list of assets impacted by each vulnerability


 


Validation:



  1. From the main Defender EASM page, click “Attack Surface Summary” to view the “Attack Surface Summary” dashboard

  2. Click the link that indicates the number of assets impacted by a specific vulnerability to view a list of impacted assets


 




Figure: Attack Surface Insights Dashboard


 



  3. Validate that Defender EASM provides additional information about vulnerabilities and remediation guidance.

  4. Click the link in the upper right corner titled “Download CSV report” and validate the contents within.


 




Figure: Vulnerability remediation details
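
If the impacted-asset list needs to be handed to a patching team, the downloaded CSV can be filtered with a few lines of code. In the sketch below, the file path and the “Asset” and “Severity” column headers are placeholders; match them to the columns actually present in the report you download, and note that the naive comma split does not handle quoted fields.

// Minimal sketch: filter the CSV downloaded from the "Download CSV report" link for
// high-severity rows. The file path and column headers are placeholders.
using System;
using System.IO;
using System.Linq;

class CsvTriage
{
    static void Main()
    {
        string[] lines = File.ReadAllLines(@"C:\temp\easm-vulnerability-report.csv");
        string[] headers = lines[0].Split(',');

        int assetCol = Array.IndexOf(headers, "Asset");        // placeholder header names
        int severityCol = Array.IndexOf(headers, "Severity");
        if (assetCol < 0 || severityCol < 0)
        {
            Console.WriteLine("Adjust the column names to match the downloaded report.");
            return;
        }

        var highPriority = lines.Skip(1)
            .Select(line => line.Split(','))                   // naive split; no quoted fields
            .Where(cols => cols[severityCol].Equals("High", StringComparison.OrdinalIgnoreCase));

        foreach (var row in highPriority)
        {
            Console.WriteLine($"{row[assetCol]} - {row[severityCol]}");
        }
    }
}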


 


Resources:



 


Use Case 4: Assignment of Ownership to Assets, Line of Business or Subsidiary


 


Success Criteria: As a member of the Contoso GRC team, I need the ability to assign ownership of assets to specific business units, along with a mechanism to quickly visualize this relationship.


 


Acceptance Criteria:



  • Defender EASM provides an approach to assigning ownership via labels

  • Defender EASM allows users to apply labels to assets that match specific indicators of affiliation with a particular business unit

  • Defender EASM provides the ability to apply labels in bulk


 


Validation:



  1. Click the “Inventory” link on the left of the main Defender EASM page to launch the search screen

  2. Run a search that returns all Page assets that are on the IP Block “10.10.10.0/24”. The Page assets on this network all belong to the Financial Services line of business, so it is the only indicator of ownership needed in this example.


 




Figure: Query to determine Page asset ownership by IP Block


 



  3. Select all assets in the result set by clicking the arrow to the right of the checkbox as shown in the following image and choose the option for all assets.




Figure: Selecting assets for bulk modification


 



  4. Click the link to modify assets, followed by the link to “Create a new label” on the blade that appears.

  5. A new screen will appear that allows the creation of a label. Enter a descriptive “Label name”, an optional “Display name”, select a desired color, and click “Add” to finish creating a label.




Figure: Link to modify assets and create a label


 




Figure: Create label detail


 



  6. After creating the label, you will be directed back to the screen to modify assets. Validate that the label was created successfully.

  7. Click into the label text box to see a list of labels available to choose from and select the one that was just created.

  8. Click “Update”.




Figure: Label selected assets


 



  9. Click the bell icon to view task notifications and validate the status of the label update task.




Figure: View status of label update task


 



  10. When the task is complete, run the search again to validate that labels have been applied to the assets owned by the Financial Services organization.




Figure: Query to validate labels have been applied to assets


 


Resources:



 


Finishing the PoC


 


Summarize Your Findings


Identify how the Defender EASM solution has provided increased visibility to your organization’s attack surface in the PoC.



  1. Have you discovered unknown assets related to Shadow IT?

  2. Were you able to find potentially abandoned assets related to an acquisition?

  3. Has your organization been able to better prioritize vulnerabilities to focus on the most severe risks?

  4. Do you now have a better view of asset ownership in your organization?


 


Feedback?


We would love to hear any ideas you may have to improve the Defender EASM platform, or where and how you might use Defender EASM data elsewhere in the Microsoft Security ecosystem or in third-party security applications. Please contact us via email at mdesam-pm@microsoft.com to share any feedback you have regarding Defender EASM.


 


Interested in Learning About New Defender EASM Features?


 Please join our Microsoft Security Connection Program if you are not a member and follow our Private & Public Preview events. You will not have access to this exclusive Teams channel until you complete the steps to become a Microsoft Security Connection Program member. Users that would like to influence the direction/strategy of our security products are encouraged to participate in our Private Preview events. Members who participate in these events will earn credit for respective Microsoft product badges delivered by Credly.


 


Conclusion


You now understand how to execute a simple Defender EASM PoC, including deploying your first Defender EASM resource, identifying common personas, setting requirements, and measuring success. Do not forget: you can enjoy a free 30-day trial by clicking on the link below.


You can start your attack surface discovery journey today for free.

Copilot in Dynamics 365 Field Service helps take field support to the next level


This article is contributed. See the original author and article here.

This post is co-authored by John Ryan, Manager Functional Architect Dynamics 365 Field Service, Avanade.

One of the most exciting things about the introduction of AI into the tools people use every day to do their jobs is how AI can help revolutionize the way people work. Especially at the frontlines of business, AI provides organizations with innovative and personalized ways to serve customers. According to IDC, 28% of organizations are investing significantly in generative AI.[1] This is what’s exciting about the introduction of Copilot in Microsoft Dynamics 365 Field Service.

No doubt about it: modern solutions like Microsoft Dynamics 365 Field Service have already come a long way in helping frontline workers be more productive and efficient in helping customers. But Copilot takes things to the next level by bringing the power of next-generation AI to the frontlines, enabling faster resolution and better service.

Field engineer viewing data after the inspection of turbines on a wind farm.

Streamline Field Service operations with Copilot

Copilot provides a leap forward in the field service space.

Enabling next-level support with Copilot for Field Service in Outlook and Microsoft Teams

Email has long been a critical communications tool for frontline managers and technicians. New data from Microsoft’s 2023 Work Trend Index Annual Report reveals that over 60% of frontline workers struggle with having to do repetitive or menial tasks that take time away from more meaningful work.[2] Now, the Copilot in Dynamics 365 Field Service Outlook add-in can streamline work order creation with relevant details pre-populated from emails.

So, what does that mean, exactly? Copilot can also optimize technician scheduling with data-driven recommendations based on factors such as travel time, availability, and skillset. Frontline managers can see relevant work orders and review them before creating new work orders, and they can easily reschedule or update those work orders as customers’ needs change. In addition, organizations can customize work orders for their frontline needs by adding, renaming, or rearranging fields. Even better, Copilot can assist frontline managers with work order scheduling in Microsoft Teams, saving time and effort to find the right worker for the job.

Frontline managers can also easily open the Field Service desktop app directly from the Copilot add-in via Outlook or Teams to view work orders. There, they can see booking suggestions in the work order and book a field technician without opening the schedule board. The booking is created in Microsoft Dataverse and also gets recorded on the Field Service schedule board automatically. All this saves frontline managers valuable time because they can stay in the flow of work, reduce clicks and context-switching between apps, and create work orders quickly without copy/paste errors. In the Field Service app, they can also review work order list views and edit a work order right in the list without having to reopen it.


Getting answers faster with natural language search with Copilot in Teams

Searching work orders to find specific details about customer jobs or looking for information about parts inventory used to mean switching between apps and searching across different sources for information. Now, to search for work orders or other customer data, agents can ask Copilot through a Teams search. They simply ask what they’re looking for using natural language, and Copilot will return specific information related to their work orders in Dynamics 365 Field Service including status updates, parts needed, or instructions to help them complete the job. The more agents use Copilot, the more the AI assistant learns and can assist agents at their jobs. The future is now.

Empowering field technicians with modern user experience

Frontline managers aren’t the only team members getting a productivity boost from more modern tools. The new Dynamics 365 Field Service mobile experience, currently in preview for Windows 10 and higher, iOS, and Android devices, empowers field technicians by giving them all the relevant, most up-to-date information they need to manage work orders, tasks, services, and products and get their jobs done thoroughly and efficiently. This modern user experience supports familiar mobile navigation, gestures, and controls to streamline managing work order Tasks, Services, and Products. Technicians can save valuable time by quickly updating the status of a booking, getting driving directions to a customer site, and changing or completing work order details. They can even get detailed information about tasks with embedded Microsoft Dynamics 365 Guides, which provide step-by-step instructions, pictures, and videos.

Changing the game for frontline technicians with Copilot in mobile

For field service technicians, having Copilot generate work order summaries that include concise, detailed descriptions of services as well as pricing and costs is a game changer. Work order summaries are generated by Copilot on the fly, synthesizing information from various tabs and fields to break down tasks, parts, services, and problem descriptions into a simple narrative, making it easy for technicians to understand job requirements. And because field technicians often need to work with their hands, they can use the voice-to-text feature to update work orders by describing details including exactly what they did on a job, when they started and finished, and what parts they used. When the work is completed, they can use the app to collect a digital signature from the customer or use voice-to-text to capture customer feedback.

Copilot in Dynamics 365 Field Service is a leap forward in the field service space. Can’t wait to see what’s next!

Learn more about the AI-powered experiences in Dynamics 365 Field Service, Teams, and Microsoft’s mixed reality applications for your frontline workforce announced at Microsoft Ignite 2023:


[1] IDC Analyst Brief sponsored by Microsoft, Generative AI and Mixed Reality Power the Future of Field Service Resolution (Doc #US51300223), October 2023

[2] The Work Trend Index survey was conducted by an independent research firm, Edelman Data x Intelligence, among 31,000 full-time employed or self-employed workers across 31 markets, 6,019 of which are frontline workers, between February 1, 2023, and March 14, 2023. This survey was 20 minutes in length and conducted online, in either the English language or translated into a local language across markets. One thousand full-time workers were surveyed in each market, and global results have been aggregated across all responses to provide an average. Each market is evenly weighted within the global average. Each market was sampled to be representative of the full-time workforce across age, gender, and region; each sample included a mix of work environments (in-person, remote vs. non-remote, office settings vs. non-office settings, etc.), industries, company sizes, tenures, and job levels. Markets surveyed include: Argentina, Australia, Brazil, Canada, China, Colombia, Czech Republic, Finland, France, Germany, Hong Kong, India, Indonesia, Italy, Japan, Malaysia, Mexico, Netherlands, New Zealand, Philippines, Poland, Singapore, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, United Kingdom, United States, and Vietnam.

The post Copilot in Dynamics 365 Field Service helps take field support to the next level appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Personal Desktop Autoscale on Azure Virtual Desktop generally available


This article is contributed. See the original author and article here.

We are excited to announce that Personal Desktop Autoscale on Azure Virtual Desktop is generally available as of November 15, 2023! With this feature, organizations with personal host pools can optimize costs by shutting down or hibernating idle session hosts, while ensuring that session hosts can be started when needed.


Personal Desktop Autoscale


Personal Desktop Autoscale is Azure Virtual Desktop’s native scaling solution that automatically starts session host virtual machines according to schedule or using Start VM on Connect and then deallocates or hibernates (in preview) session host virtual machines based on the user session state (log off/disconnect).


The following capabilities are now generally available with Personal Desktop Autoscale:



  • Scaling plan configuration data can be stored in all regions where Azure Virtual Desktop host pool objects are, including Australia East, Canada Central, Canada East, Central US, East US, East US 2, Japan East, North Central US, North Europe, South Central US, UK South, UK West, West Central US, West Europe, West US, West US 2, and West US 3. It needs to be stored in the same region as the host pool objects it will be assigned to; however, we support deploying session host virtual machines in all Azure regions.

  • You can use the Azure portal, REST API, or PowerShell to enable and manage Personal Desktop Autoscale.


The following capabilities are new in public preview with Personal Desktop Autoscale:



  • Hibernation is available as a scaling action. With the Hibernate-Resume feature in public preview, you will have a better experience as session state persists when the virtual machine hibernates. As a result, when the session host virtual machine starts, the user will be able to quickly resume where they left off. More details of the Hibernate-Resume feature can be found here.


Getting started


To enable Personal Desktop Autoscale, you need to:



  • Create a personal scaling plan.

  • Define whether to enable or disable Start VM on Connect.

  • Choose what action to perform after a user session has been disconnected or logged off for a configurable period of time.

  • Assign a personal scaling plan to one or more personal host pools.


A screenshot of a scaling plan in Azure Virtual Desktop called “fullweek_schedule”. The ramp-down is shown as repeating every day of the week at 6:00 PM Beijing time, starting VM on Connect. Disconnect settings are set to hibernate at 30 minutes. Log off settings are set to shut down after 30 minutes.


If you want to use Personal Desktop Autoscale with the Hibernate-Resume option, you will need to self-register your subscription and enable Hibernate-Resume when creating VMs for your personal host pool. We recommend you create a new host pool of session hosts and virtual machines that are all enabled with Hibernate-Resume for simplicity. Hibernation can also work with Start VM on Connect for cost optimization.


You can set up diagnostics to monitor potential issues and fix them before they interfere with your Personal Desktop Autoscale scaling plan.


Helpful resources


We encourage you to learn more about setting up autoscale and review frequently asked questions for more details on how to use autoscale for Azure Virtual Desktop. You may also find these resources helpful:


Azure AI Health Insights: New built-in models for patient-friendly reports and radiology insights


This article is contributed. See the original author and article here.

Azure AI Health Insights: New built-in models for patient-friendly reports and radiology insights


Azure AI Health Insights is an Azure AI service with built-in models that enable healthcare organizations to find relevant trials, surface cancer attributes, generate summaries, analyze patient data, and extract information from medical images.


Earlier this year, we introduced two new built-in models available for preview. These built-in models handle patient data in different modalities, perform analysis on the data, and provide insights in the form of inferences supported by evidence from the data or other sources.


The following models are available for preview:



  • Patient-friendly reports model* This model simplifies medical reports and creates a patient-friendly simplified version of clinical notes while retaining the meaning of the original clinical information. This way, patients can easily consume their clinical notes in everyday language. Patient-friendly reports model is available in preview.

  • Radiology insights model* This model uses radiology reports to surface relevant radiology insights that can help radiologists improve their workflow and provide better care. Radiology insights model is available in preview.


Simplify clinical reports


Patient-friendly reports is an AI model that provides an easy-to-read version of a patient’s clinical report. The simplified report explains or rephrases diagnoses, symptoms, anatomies, procedures, and other medical terms while retaining accuracy. The text is reformatted and presented in plain language to increase readability. The model simplifies any medical report, for example a radiology report, operative report, discharge summary, or consultation report.


The Patient-friendly reports model uses a hybrid approach that combines GPT models, healthcare-specialized Natural Language Processing (NLP) models, and rule-based methods. Patient-friendly reports also uses text alignment methods to allow mapping of sentences from the original report to the simplified report to make it easy to understand.


The system uses scenario-specific guardrails to detect hallucinations, omissions, and any other ungrounded content, and takes several steps to ensure that the full information from the original clinical report is kept and no new information is added.


The Patient-friendly reports model helps healthcare professionals and patients consume medical information in a variety of scenarios. For example, Patient-friendly reports model saves clinicians the time and effort of explaining a report. A simplified version of a clinical report is generated by Patient-Friendly reports and shared with the patient, side by side with the original report. The patient can review the simplified version to better understand the original report, and to avoid unnecessary communication with the clinician to help with interpretation. The simplified version is marked clearly as text that was generated automatically by AI, and as text that must be used together with the original clinical note (which is always the source of truth).


 




 


Figure 1 Example of a simplified report created by the patient-friendly reports model


 


Improve the quality of radiology findings and flag follow-up recommendations


Radiology insights is a model that provides quality checks with feedback on errors and mismatches and ensures critical findings within the report are surfaced and presented using the full context of a radiology report. In addition, follow-up recommendations and clinical findings with measurements (sizes) documented by the radiologist are flagged.


Radiology insights returns inferences with references to the provided input, which can be used as evidence for a deeper understanding of the model’s conclusions. The Radiology insights model helps radiologists improve their reports and patient outcomes in a variety of scenarios. For example:


 



  • Surfaces possible mismatches. A radiologist can be provided with possible mismatches between what the radiologist documents in a radiology report and the information present in the metadata of the report. Mismatches can be identified for sex, age and body site laterality. 

  • Highlights critical and actionable findings. Often, a radiologist is provided with possible clinical findings that need to be acted on in a timely fashion by other healthcare professionals. The model extracts these critical or actionable findings where communication is essential for quality care. 

  • Flags follow-up recommendations. When a radiologist uncovers findings for which they recommend a follow up, the recommendation is extracted and normalized by the model for communication to a healthcare professional. 

  • Extracts measurements from clinical findings. When a radiologist documents clinical findings with measurements, the model extracts clinically relevant information pertaining to the findings. The radiologist can then use this information to create a report on the outcomes as well as observations from the report. 

  • Assists in generating performance analytics for a radiology team. Based on extracted information, dashboards, and retrospective analyses, Radiology insights provides updates on productivity and key quality metrics to guide improvement efforts, minimize errors, and improve report quality and consistency.


 


 




Figure 2 Example of a finding with communication to a healthcare professional


 


 




 


Figure 3 Example of a radiology mismatch (sex) between metadata and content of a report with a follow-up recommendation


 


Get started today



Apply for the Early Access Program (EAP) for Azure AI Health Insights here.


After receiving confirmation of your entrance into the program, create and deploy Azure AI Health Insights on Azure portal or from the command line.




 


Figure 4 Example of how to create an Azure Health Insights resource on Azure portal


After a successful deployment, you send POST requests with patient data and configuration as required by the model you would like to try, and you receive responses with inferences and evidence.
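
As a rough illustration of those request and response mechanics only (not the exact API contract), the sketch below posts a JSON payload using the standard Cognitive Services key header. The endpoint path, api-version, and payload shape are placeholders; take the exact values from the Azure AI Health Insights documentation for the model you deploy.

// Minimal sketch of sending a request to an Azure AI Health Insights model. The endpoint,
// api-version, and request body are placeholders; only the HTTP mechanics are shown here.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class HealthInsightsRequest
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-resource-key>");

        // Placeholder endpoint and payload: replace with the documented path, api-version,
        // and the patient data/configuration required by the model you want to try.
        string endpoint = "https://<your-resource>.cognitiveservices.azure.com/" +
                          "health-insights/<model-name>/jobs?api-version=<api-version>";
        string payload = @"{ ""patients"": [ ] }";

        using var content = new StringContent(payload, Encoding.UTF8, "application/json");
        HttpResponseMessage response = await client.PostAsync(endpoint, content);

        Console.WriteLine($"Status: {(int)response.StatusCode}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());   // inferences and evidence
    }
}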


Do more with your data with Microsoft Cloud for Healthcare


With Azure AI Health Insights, health organizations can transform their patient experience, discover new insights with the power of machine learning and AI, and manage protected health information (PHI) data with confidence. Enable your data for the future of healthcare innovation with Microsoft Cloud for Healthcare.


We look forward to working with you as you build the future of health.



 



*Important


Patient-friendly reports models and radiology insights model are capabilities provided “AS IS” and “WITH ALL FAULTS.” Patient-friendly reports and Radiology insights aren’t intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. These capabilities aren’t designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Patient-friendly reports model or Radiology insights model.



Lesson Learned #454:Optimizing Connection Pooling for Application Workloads: Active Connections

This article is contributed. See the original author and article here.

A few days ago, a customer asked us to find out details about the active connections of a connection pooling, how many connection poolings their application has, etc. In this article, I would like to share the lessons learned to see these details.


 


As our customer is using .NET Core, we will rely on the following article to gather all this information: Event counters in SqlClient – ADO.NET Provider for SQL Server | Microsoft Learn.


 


We will continue with the same script that we used in the previous article, Lesson Learned #453: Optimizing Connection Pooling for Application Workloads: A single journey – Microsoft Community Hub, and once the event counters described in Event counters in SqlClient – ADO.NET Provider for SQL Server | Microsoft Learn are implemented, we will see what information we obtain.
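
The listener the script instantiates (ClsEvents.EventCounterListener) follows the pattern from that documentation: an EventListener that enables the counters exposed by the Microsoft.Data.SqlClient event source and prints each counter name and value. A sketch of what such a listener can look like is shown below; the exact class used by the script may differ in details.

// Sketch of an event counter listener for Microsoft.Data.SqlClient, following the pattern
// from the documentation linked above.
using System;
using System.Collections.Generic;
using System.Diagnostics.Tracing;

public class EventCounterListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        // Only enable events coming from the SqlClient event source
        if (eventSource.Name.Equals("Microsoft.Data.SqlClient.EventSource"))
        {
            var options = new Dictionary<string, string>
            {
                { "EventCounterIntervalSec", "1" }   // report counter values every second
            };
            EnableEvents(eventSource, EventLevel.Informational, EventKeywords.All, options);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        if (eventData.EventName != "EventCounters" || eventData.Payload == null)
            return;

        foreach (object item in eventData.Payload)
        {
            if (item is IDictionary<string, object> counter &&
                counter.TryGetValue("DisplayName", out object name))
            {
                // Polling counters carry "Mean"; incrementing counters carry "Increment"
                object value = counter.TryGetValue("Mean", out object mean) ? mean
                             : counter.TryGetValue("Increment", out object inc) ? inc : null;
                Console.WriteLine($"{DateTime.Now:yyyy-MM-dd HH:mm:ss.fff}: {name}\t\t{value}");
            }
        }
    }
}

Keeping a single instance of this listener alive for the lifetime of the process, as the script does with ClsEvents.EventCounterListener, is enough to start receiving the counter values shown below.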


 


Once we executed our application, we started seeing the following information:


 

2023-11-26 09:38:18.998: Actual active connections currently made to servers		0
2023-11-26 09:38:19.143: Active connections retrieved from the connection pool		0
2023-11-26 09:38:19.167: Number of connections not using connection pooling		0
2023-11-26 09:38:19.176: Number of connections managed by the connection pool		0
2023-11-26 09:38:19.181: Number of active unique connection strings		1
2023-11-26 09:38:19.234: Number of unique connection strings waiting for pruning		0
2023-11-26 09:38:19.236: Number of active connection pools		1
2023-11-26 09:38:19.239: Number of inactive connection pools		0
2023-11-26 09:38:19.242: Number of active connections		0
2023-11-26 09:38:19.245: Number of ready connections in the connection pool		0
2023-11-26 09:38:19.272: Number of connections currently waiting to be ready		0

 


 


As our application is using a single connection string and a single connection pool, the details shown above are stable and understandable. But let’s make a couple of changes to the code to see how the numbers change.


 


Our first change will be to open 100 connections and, once we reach those 100, close and reopen them to see how the counters fluctuate. The details we observe while our application is running indicate that connections are being opened but not closed, which is expected.


 

2023-11-26 09:49:01.606: Actual active connections currently made to servers		13
2023-11-26 09:49:01.606: Active connections retrieved from the connection pool		13
2023-11-26 09:49:01.607: Number of connections not using connection pooling		0
2023-11-26 09:49:01.607: Number of connections managed by the connection pool		13
2023-11-26 09:49:01.608: Number of active unique connection strings		1
2023-11-26 09:49:01.608: Number of unique connection strings waiting for pruning		0
2023-11-26 09:49:01.609: Number of active connection pools		1
2023-11-26 09:49:01.609: Number of inactive connection pools		0
2023-11-26 09:49:01.610: Number of active connections		13
2023-11-26 09:49:01.610: Number of ready connections in the connection pool		0
2023-11-26 09:49:01.611: Number of connections currently waiting to be ready		0

 


 


But as we keep closing and opening new connections, we start to see how our connection pooling is functioning:


 

2023-11-26 09:50:08.600: Actual active connections currently made to servers		58
2023-11-26 09:50:08.601: Active connections retrieved from the connection pool		50
2023-11-26 09:50:08.601: Number of connections not using connection pooling		0
2023-11-26 09:50:08.602: Number of connections managed by the connection pool		58
2023-11-26 09:50:08.602: Number of active unique connection strings		1
2023-11-26 09:50:08.603: Number of unique connection strings waiting for pruning		0
2023-11-26 09:50:08.603: Number of active connection pools		1
2023-11-26 09:50:08.604: Number of inactive connection pools		0
2023-11-26 09:50:08.604: Number of active connections		50
2023-11-26 09:50:08.605: Number of ready connections in the connection pool		8
2023-11-26 09:50:08.605: Number of connections currently waiting to be ready		0

 


 


In the following example, we can see how, once we have reached our 100 connections, the connection pool is serving our application the necessary connections.


 

2023-11-26 09:53:27.602: Actual active connections currently made to servers		100
2023-11-26 09:53:27.602: Active connections retrieved from the connection pool		92
2023-11-26 09:53:27.603: Number of connections not using connection pooling		0
2023-11-26 09:53:27.603: Number of connections managed by the connection pool		100
2023-11-26 09:53:27.604: Number of active unique connection strings		1
2023-11-26 09:53:27.604: Number of unique connection strings waiting for pruning		0
2023-11-26 09:53:27.605: Number of active connection pools		1
2023-11-26 09:53:27.606: Number of inactive connection pools		0
2023-11-26 09:53:27.606: Number of active connections		92
2023-11-26 09:53:27.606: Number of ready connections in the connection pool		8
2023-11-26 09:53:27.607: Number of connections currently waiting to be ready		0

 


 


Let’s review the counters:


 




  1. Actual active connections currently made to servers (100): This indicates the total number of active connections that have been established with the servers at the given timestamp. In this case, there are 100 active connections.




  2. Active connections retrieved from the connection pool (92): This shows the number of connections that have been taken from the connection pool and are currently in use. Here, 92 out of the 100 active connections are being used from the pool.




  3. Number of connections not using connection pooling (0): This counter shows how many connections are made directly, bypassing the connection pool. A value of 0 means all connections are utilizing the connection pooling mechanism.




  4. Number of connections managed by the connection pool (100): This is the total number of connections, both active and idle, that are managed by the connection pool. In this example, there are 100 connections in the pool.




  5. Number of active unique connection strings (1): This indicates the number of unique connection strings that are currently active. A value of 1 suggests that all connections are using the same connection string.




  6. Number of unique connection strings waiting for pruning (0): This shows how many unique connection strings are inactive and are candidates for removal or pruning from the pool. A value of 0 indicates no pruning is needed.




  7. Number of active connection pools (1): Represents the total number of active connection pools. In this case, there is just one connection pool being used.




  8. Number of inactive connection pools (0): This counter displays the number of connection pools that are not currently in use. A value of 0 indicates that all connection pools are active.




  9. Number of active connections (92): Similar to the second counter, this shows the number of connections currently in use from the pool, which is 92.




  10. Number of ready connections in the connection pool (8): This indicates the number of connections that are in the pool, available, and ready to be used. Here, there are 8 connections ready for use.




  11. Number of connections currently waiting to be ready (0): This shows the number of connections that are in the process of being prepared for use. A value of 0 suggests that there are no connections waiting to be made ready.




These counters provide a comprehensive view of how the connection pooling is performing, indicating the efficiency, usage patterns, and current state of the connections managed by the Microsoft.Data.SqlClient.


 


One thing that caught my attention is “Number of unique connection strings waiting for pruning”. This means that if there have been no recent accesses to the connection pool, and no connections for a certain period, the pool may be pruned. The first connection made afterwards will then take some time (seconds) while the pool is recreated, for example at night, when we might not have an active workload:


 




  1. Idle Connection Removal: Connections are removed from the pool after being idle for approximately 4-8 minutes, or if a severed connection with the server is detected.




  2. Minimum Pool Size: If the Min Pool Size is not specified or set to zero in the connection string, the connections in the pool will be closed after a period of inactivity. However, if Min Pool Size is greater than zero, the connection pool is not destroyed until the AppDomain is unloaded and the process ends. This implies that as long as the minimum pool size is maintained, the pool itself remains active.




 


We could observe useful information about this in the Microsoft.Data.SqlClient source code, in the file SqlClient-main\src\Microsoft.Data.SqlClient\src\Microsoft\Data\ProviderBase\DbConnectionPoolGroup.cs:


 


Line 50: private const int PoolGroupStateDisabled = 4; // factory pool entry pruning method
Line 268: // Empty pool during pruning indicates zero or low activity, but
Line 293: // must be pruning thread to change state and no connections
Line 294: // otherwise pruning thread risks making entry disabled soon after user calls ClearPool


 


These parameters work together to manage the lifecycle of connection pools and their resources efficiently, balancing the need for ready connections with system resource optimization. The actual removal of an entire connection pool (and its associated resources) depends on these settings and the application’s runtime behavior. The documentation does not specify a fixed interval for the complete removal of an entire connection pool, as it is contingent on these dynamic factors.


 


To conclude this article, I would like to conduct a test to see whether, each time I request a connection and change something in the connection string, a new connection pool is created.


 


For this, I have modified the code so that half of the connections receive a ClearPool call. As we can see, new inactive connection pools show up.
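
The exact modification to the script is not reproduced here, but conceptually it looks like the sketch below: the base connection string is a stand-in for the one used by the script, each iteration appends a different value so that a distinct pool is created, and every second connection clears its pool. The counters recorded after this change follow the sketch.

// Conceptual sketch of the change: any difference in the connection string text produces a
// separate connection pool, and SqlConnection.ClearPool marks a pool's connections to be
// discarded. The base connection string below is a placeholder.
using Microsoft.Data.SqlClient;

string baseConnectionString = "<same connection string as in the script>";

for (int i = 0; i < 100; i++)
{
    // Appending a different Workstation ID per iteration forces a different pool
    string cs = baseConnectionString + ";Workstation ID=PoolTest" + i;

    using (var connection = new SqlConnection(cs))
    {
        connection.Open();

        if (i % 2 == 0)
        {
            // Half of the connections clear their pool, which contributes to the
            // "Number of inactive connection pools" value shown below
            SqlConnection.ClearPool(connection);
        }
    }   // Close/Dispose returns the connection to its pool (or discards it after ClearPool)
}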


 


 

2023-11-26 10:34:18.564: Actual active connections currently made to servers		16
2023-11-26 10:34:18.565: Active connections retrieved from the connection pool		11
2023-11-26 10:34:18.566: Number of connections not using connection pooling		0
2023-11-26 10:34:18.566: Number of connections managed by the connection pool		16
2023-11-26 10:34:18.567: Number of active unique connection strings		99
2023-11-26 10:34:18.567: Number of unique connection strings waiting for pruning		0
2023-11-26 10:34:18.568: Number of active connection pools		55
2023-11-26 10:34:18.568: Number of inactive connection pools		150
2023-11-26 10:34:18.569: Number of active connections		11
2023-11-26 10:34:18.569: Number of ready connections in the connection pool		5
2023-11-26 10:34:18.570: Number of connections currently waiting to be ready		0

 


 


Source code


 


 

using System;
using Microsoft.Data.SqlClient;
using System.Threading;
using System.IO;
using System.Diagnostics;

namespace HealthCheck
{
    class ClsCheck
    {
        const string LogFolder = @"c:\temp\Mydata";
        const string LogFilePath = LogFolder + @"\logCheck.log";

        public void Main(Boolean bSingle=true, Boolean bDifferentConnectionString=false)
        {
            int lMaxConn = 100;
            int lMinConn = 0;
            if(bSingle)
            {
                lMaxConn = 1;
                lMinConn = 1;
            }
            string connectionString = "Server=tcp:servername.database.windows.net,1433;User Id=username@microsoft.com;Password=Pwd!;Initial Catalog=test;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=5;Pooling=true;Max Pool size=" + lMaxConn.ToString() + ";Min Pool Size=" + lMinConn.ToString() + ";ConnectRetryCount=3;ConnectRetryInterval=10;Authentication=Active Directory Password;PoolBlockingPeriod=NeverBlock;Connection Lifetime=5;Application Name=ConnTest";
            Stopwatch stopWatch = new Stopwatch();
            SqlConnection[] oConnection = new SqlConnection[lMaxConn];
            int lActivePool = -1;
            string sConnectionStringDummy = connectionString;

            DeleteDirectoryIfExists(LogFolder);
            ClsEvents.EventCounterListener oClsEvents = new ClsEvents.EventCounterListener();
            //ClsEvents.SqlClientListener olistener = new ClsEvents.SqlClientListener();
            while (true)
            {
                if (bSingle)
                {
                    lActivePool = 0;
                    sConnectionStringDummy = connectionString;
                }
                else
                {
                    lActivePool++;
                    if (lActivePool == (lMaxConn-1))
                    {
                        lActivePool = 0;

                        // Close the connections opened so far so that they go back to the pool
                        for (int i = 0; i < lMaxConn; i++)
                        {
                            if (oConnection[i] != null)
                            {
                                if (bDifferentConnectionString && (i % 2 == 0))
                                {
                                    // Half of the connections also clear their pool (last test of the article)
                                    SqlConnection.ClearPool(oConnection[i]);
                                }
                                oConnection[i].Close();
                                oConnection[i] = null;
                            }
                        }
                    }
                    if (bDifferentConnectionString)
                    {
                        // A different connection string per slot forces a different connection pool
                        sConnectionStringDummy = connectionString + ";Workstation ID=Pool" + lActivePool.ToString();
                    }
                }

                stopWatch.Start();
                oConnection[lActivePool] = GetConnection(sConnectionStringDummy);
                ExecuteQuery(oConnection[lActivePool]);
                LogExecutionTime(stopWatch, "Open connection and execute query");
                if (bSingle)
                {
                    oConnection[lActivePool].Close();
                    oConnection[lActivePool] = null;
                }
            }
        }

        static SqlConnection GetConnection(string connectionString)
        {
            SqlConnection connection = null;
            int retries = 0;
            while (true)
            {
                try
                {
                    connection = new SqlConnection(connectionString);
                    connection.Open();
                    break;
                }
                catch (Exception ex)
                {
                    retries++;
                    if (retries >= 5)
                    {
                        Log($"Maximum number of retries reached. Error: " + ex.Message);
                        break;
                    }
                    Log($"Error connecting to the database. Retrying in " + retries + " seconds...");
                    Thread.Sleep(retries * 1000);
                }
            }
            return connection;
        }

        static void Log(string message)
        {
            var ahora = DateTime.Now;
            string logMessage = $"{ahora.ToString("yyyy-MM-dd HH:mm:ss.fff")}: {message}";
            //Console.WriteLine(logMessage);
            try
            {
                using (FileStream stream = new FileStream(LogFilePath, FileMode.Append, FileAccess.Write, FileShare.ReadWrite))
                {
                    using (StreamWriter writer = new StreamWriter(stream))
                    {
                        writer.WriteLine(logMessage);
                    }
                }
            }
            catch (IOException ex)
            {
                Console.WriteLine($"Error writing in the log file: {ex.Message}");
            }
        }

        static void ExecuteQuery(SqlConnection connection)
        {
            int retries = 0;
            while (true)
            {
                try
                {
                    using (SqlCommand command = new SqlCommand("SELECT 1", connection))
                    {
                        command.CommandTimeout = 5;
                        object result = command.ExecuteScalar();
                    }
                    break;
                }
                catch (Exception ex)
                {
                    retries++;
                    if (retries >= 5)
                    {
                        Log($"Maximum number of retries reached. Error: " + ex.Message);
                        break;
                    }
                    Log($"Error executing the query. Retrying in " + retries + " seconds...");
                    Thread.Sleep(retries * 1000);
                }
            }
        }

        static void LogExecutionTime(Stopwatch stopWatch, string action)
        {
            stopWatch.Stop();
            TimeSpan ts = stopWatch.Elapsed;
            string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}",
               ts.Hours, ts.Minutes, ts.Seconds,
               ts.Milliseconds / 10);
            Log($"{action} - {elapsedTime}");
            stopWatch.Reset();
        }

        public static void DeleteDirectoryIfExists(string path)
        {
            try
            {
                if (Directory.Exists(path))
                {
                    Directory.Delete(path, true);
                }
                Directory.CreateDirectory(path);
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Error deleting the folder: {ex.Message}");
            }
        }
    }

}

 


 


Enjoy!

A Practical Guide for Beginners: Azure OpenAI with JavaScript and TypeScript (Part 01)


This article is contributed. See the original author and article here.

Introduction


 


A Practical Guide for Beginners: Azure OpenAI with JavaScript and TypeScript is an essential starting point for exploring Artificial Intelligence in the Azure cloud. This guide will be divided into 3 parts, covering how to create the Azure OpenAI Service resource, how to implement the model created in Azure OpenAI Studio, and finally, how to consume this resource in a Node.js/TypeScript application. This series will help you learn the fundamentals so that you can start developing your applications with Azure OpenAI Service. Whether you are a beginner or an experienced developer, discover how to create intelligent applications and unlock the potential of AI with ease.


Responsible AI


 


Before we start discussing Azure OpenAI Service, it’s crucial to talk about Microsoft’s strong commitment to the entire field of Artificial Intelligence. Microsoft is deeply committed to ensuring that AI is used in a responsible and ethical manner, and is working with the AI community to develop and share best practices and tools that help make this possible, incorporating the six core principles, which are:



  • Fairness

  • Inclusivity

  • Reliability and Safety

  • Transparency

  • Security and Privacy

  • Accountability


If you want to learn more about Microsoft’s commitment to Responsible AI, you can access the link Microsoft AI Principles.


Now, we can proceed with the article!


 


Understand Azure OpenAI Service


 


Azure OpenAI Service provides access to advanced OpenAI language models such as GPT-4, GPT-3.5-Turbo, and Embeddings via a REST API. The GPT-4 and GPT-3.5-Turbo models are now available for general use, allowing adaptation for tasks such as content generation, summarization, semantic search, and natural language translation to code. Users can access the service through REST APIs, Python SDK, or Azure OpenAI Studio.


To learn more about the models available in Azure OpenAI Service, you can access them through the link Azure OpenAI Service models.


 


Create the Azure OpenAI Service Resource


 



Access to Azure OpenAI Service is currently limited. Therefore, it is necessary to request access to the service at Azure OpenAI Service. Once you have approval, you can start using and testing the service!



Once your access is approved, go to the Azure Portal and let’s create the Azure OpenAI resource. To do this, follow the steps below:


 



  • Step 01: Click on the Create a resource button.




 


 



  • Step 02: In the search box, type Azure OpenAI and then click Create.




 


 




 


 



  • Step 03: On the resource creation screen, fill in the fields as follows:




 


 


Note that in the Pricing tier field, you can test Azure OpenAI Service for free but with some limitations. To access all features, you should choose a paid plan. For more pricing information, access the link Azure OpenAI Service pricing.




  • Step 04: Under the Network tab, choose the option: All networks, including the internet, can access this resource. and then click Next.




  • Step 05: After completing all the steps, click the Create button to create the resource.






 


 



  • Step 06: Wait a few minutes for the resource to be created.


 




 


Next steps


 


In the next article, we will learn how to deploy a model on the Azure OpenAI Service. This model will allow us to consume the Azure OpenAI Service directly in our code.


Oh, I almost forgot to mention! Don’t forget to subscribe to my YouTube Channel! In 2023/2024, there will be many exciting new things on the channel!


Some of the upcoming content includes:



  • Microsoft Learn Live Sessions

  • Weekly Tutorials on Node.js, TypeScript, & JavaScript

  • And much more!


If you enjoy this kind of content, be sure to subscribe and hit the notification bell to be notified when new videos are released. We already have an amazing new series coming up on the YouTube channel this week.


 




 


See you in the next article!