Building a Traffic Monitoring AI application for a Smart City with Azure Percept


This article is contributed. See the original author and article here.



 


Many smart cities are thinking about generating traffic insights using edge AI and video as a sensor.  These traffic insights can range from simpler insights such as vehicle counting and traffic pattern distribution over time to more advanced insights such as detecting stalled vehicles and alerting the authorities.


 


In this blog, I show how I am using an Azure Percept dev kit to build a sample traffic monitoring application using the reference sources and samples in GitHub provided by Microsoft along with the Azure IoT and Percept ecosystem.


 


I wanted to build a traffic monitoring application that would classify vehicles into cars, trucks, bicycles etc. and count each vehicle category to generate insights such as traffic density and vehicle type distribution over time. I wanted the traffic monitoring AI application to show me the traffic pattern distribution in a dashboard updated in real-time.  I also wanted to generate alerts and visualize a short video clip whenever an interesting event occurs (for example number of trucks exceed a threshold value).  In addition, a smart city manager would be able to pull up a live video stream when heavy traffic congestion is detected.


 


Here’s what I needed to get started


 



 




 


Azure Percept ($349 in the Microsoft store): https://www.microsoft.com/store/build/azure-percept/8v2qxmzbz9vc


Host: NXP i.MX 8M processor


Vision AI: Intel Movidius Myriad X (MA2085) vision processing unit (VPU)


 



  • Inseego 5G MiFi® M2000 mobile hotspot (reliable cloud connection for uploading events and videos)




 


Radio: Qualcomm® Snapdragon™ X55 modem


Carrier/plan: T-Mobile 5G Magenta plan


https://www.t-mobile.com/tablet/inseego-5g-mifi-m2000


 


Key Azure Services/Technologies used



 


Overall setup and description


 


Step 1: Unbox and setup the Azure Percept


 


This step takes about 5-10 minutes when all goes well.  You can find the setup instructions here https://docs.microsoft.com/azure/azure-percept/quickstart-percept-dk-set-up.


Here are some screenshots that I captured as I went through my Azure Percept device setup process.


 


[Screenshots: Azure Percept device setup]


 


Key points to remember during the device setup: note down the IP address of the Azure Percept, and set up your SSH username and password so you can SSH into the Azure Percept from your host machine.


During the setup, you can create a new Azure IoT Hub instance in the Cloud or you can use an existing Azure IoT hub that you may already have in your Azure subscription.


 


Step 2: Ensure good cloud connectivity (uplink/downlink speed for events, videos and live streaming)


 


The traffic monitoring AI application I am building is intended for outdoor environments, where wired connections are not always feasible or available.  Network connectivity is necessary for live streaming and for uploading video clips.  For this demo, the Azure Percept device connects to the cloud over 5G to upload events and video clips, so make sure that the uplink speeds over 5G are good enough for video clip uploads as well as live streaming.  Here is a screenshot of the speed test for the Inseego 5G MiFi® M2000 mobile hotspot from T-Mobile that I am using for my setup.


 


[Screenshot: 5G speed test results]


 


Step 3: Reference architecture


 


Here is a high-level architecture diagram of a traffic monitoring application built with Azure Percept and Azure services.  For this project, I used the Azure Percept dev kit with the single USB-connected camera (as opposed to external IP cameras) and Azure Video Analyzer.


 


[Diagram: traffic monitoring reference architecture]


 


Step 4: Build the Azure Eye Module docker container for ARM64


 


You will want to make a few customizations to the Azure Eye Module C++ source code tailored to your traffic monitoring application (for example, sending only vehicle detection events to IoT Hub, or building your own parser class for custom vehicle detection models).  For this project, I am using the SSD parser class with the default SSD object detection model in the Azure Eye Module.


To build a customized Azure Eye Module, first download the Azure Eye Module reference source code from GitHub. On your host machine, clone the following repo:


 

git clone https://github.com/microsoft/azure-percept-advanced-development.git

 


 


On your host machine, open a command shell and use the following command to build the Azure Eye Module docker container.  Note that you will need Docker Desktop running prior to running this command (I am using a Windows host):


 

docker buildx build --platform linux/arm64 --tag azureeyemodule-xc -f Dockerfile.arm64v8 --load .

 


 


Once the docker image is built, tag it and push it to your Azure Container Registry (ACR).


 


Step 5: Build the ObjectCounter docker container for ARM64


 


Download the Object Counter reference source code from GitHub. On your host machine, clone the following repo:


 

git clone https://github.com/Azure-Samples/live-video-analytics-iot-edge-python

 


 


Navigate to the folder live-video-analytics-iot-edge-python/src/edge/modules/objectCounter


Build the docker container and push it to your ACR:


 

docker build  -f docker/Dockerfile.arm64  --no-cache .  -t objectcounter:0.0.1-arm64v8
docker login  -u <your_acr_name>  -p <your_acr_password>  <your_acr_name>.azurecr.io
docker push <your_acr_name>.azurecr.io/objectcounter:0.0.1-arm64v8

 


 


I made several source code changes to main.py in the objectCounter module to customize my own objectCounter docker container.  For example, I only send a video event trigger to the signal gate processor (to capture a video recording of a few seconds around an event) when a certain vehicle category exceeds a threshold count.  I also made customizations so that the object counter can understand inference events from SSD (the built-in detection engine that comes with the Azure Eye Module) or from a custom YOLOv3 model external to the Azure Eye Module.  (You can read about how to run an external YOLOv3 model in my previous blog post here:


https://techcommunity.microsoft.com/t5/internet-of-things/set-up-your-own-end-to-end-package-delivery-monitoring-ai/ba-p/2323165)
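The gist of those main.py changes can be sketched as follows. This is an illustrative sketch of the threshold logic only; the function and tag names are mine, not the actual objectCounter API:

```python
from collections import Counter

# Vehicle categories of interest (illustrative; match these to your model's labels).
VEHICLE_TAGS = {"car", "truck", "bus", "bicycle", "motorcycle"}

def count_vehicles(inferences, min_confidence=0.5):
    """Tally detected vehicle tags above a confidence cutoff."""
    counts = Counter()
    for inference in inferences:
        tag = inference["entity"]["tag"]
        if tag["value"] in VEHICLE_TAGS and tag["confidence"] >= min_confidence:
            counts[tag["value"]] += 1
    return counts

def should_trigger_recording(counts, category="truck", threshold=3):
    """Send a trigger to the signal gate only when a category crosses its threshold."""
    return counts.get(category, 0) >= threshold
```

When should_trigger_recording returns True, the module would emit a message on its trigger output, which is routed to the AVA signal gate as described in Step 7.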


 


Step 6: Azure Video Analyzer For Edge Devices


 


To be able to save video recordings around interesting event detections, you will need the Azure Video Analyzer module.


You may choose to build your own custom AVA docker container from here:


https://github.com/Azure/video-analyzer.git


You can read more about AVA and how to deploy it to an edge device here:


https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/deploy-iot-edge-device


 


Step 7: Configure message routes between the Azure IoT edge modules


 


The different modules (Azure Percept Module, ObjectCounter Module and AVA Module) interact with each other through MQTT messages. 


 


Summary of the routes:



  • Azure Percept module sends the inference detection events to IoT hub which is configured to further route the messages either to blob storage or a database (for dashboards and analytics in the cloud). 

  • Azure Percept module sends the detection events to objectCounter module that implements business logic (such as object counts and aggregations which are used to trigger video recordings via the AVA module)

  • ObjectCounter module sends the aggregations and triggers to IoT hub which is configured to further route the messages either to blob storage or a database (for dashboards and analytics in the cloud). 

  • ObjectCounter module sends the event triggers to AVA so that AVA can start recording event clips
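In the deployment manifest, the routes summarized above look roughly like the following. I show them as a Python dict for readability; the module names (azureeyemodule, objectCounter, avaedge) and the input/output endpoint names are assumptions that you should match to your own deployment:

```python
# Illustrative IoT Edge routes; "$upstream" means "send to IoT Hub".
routes = {
    "AzurePerceptToIoTHub":
        "FROM /messages/modules/azureeyemodule/outputs/* INTO $upstream",
    "AzurePerceptToObjectCounter":
        "FROM /messages/modules/azureeyemodule/outputs/* "
        'INTO BrokeredEndpoint("/modules/objectCounter/inputs/detectionInput")',
    "ObjectCounterToIoTHub":
        "FROM /messages/modules/objectCounter/outputs/* INTO $upstream",
    "ObjectCounterToAVA":
        "FROM /messages/modules/objectCounter/outputs/triggerOutput "
        'INTO BrokeredEndpoint("/modules/avaedge/inputs/recordingTrigger")',
}
```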


 


Here are a couple of screenshots to show how to route messages from IoT Hub to an endpoint:


 


[Screenshots: IoT Hub message routing configuration]


 


Here is a sample inference detection event that IoT hub receives from the Azure Percept Module


 

"body": {
  "timestamp": 145831293577504,
  "inferences": [
    {
      "type": "entity",
      "entity": {
        "tag": {
          "value": "person",
          "confidence": 0.62337005
        },
        "box": {
          "l": 0.38108632,
          "t": 0.4768717,
          "w": 0.19651619,
          "h": 0.30027097
        }
      }
    }
  ]
}

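As an aside, here is a minimal Python sketch (my own, not part of the Microsoft samples) of how a downstream module such as objectCounter might parse that event body and pull out the detected entities:

```python
import json

def extract_detections(message_body):
    """Return (tag, confidence, box) for each entity inference in the message."""
    detections = []
    for inference in message_body.get("inferences", []):
        if inference.get("type") != "entity":
            continue
        entity = inference["entity"]
        detections.append(
            (entity["tag"]["value"], entity["tag"]["confidence"], entity["box"])
        )
    return detections

# The same sample event as above.
sample = json.loads("""
{
  "timestamp": 145831293577504,
  "inferences": [
    {
      "type": "entity",
      "entity": {
        "tag": {"value": "person", "confidence": 0.62337005},
        "box": {"l": 0.38108632, "t": 0.4768717, "w": 0.19651619, "h": 0.30027097}
      }
    }
  ]
}
""")
```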
 


 


Step 8: Set up the graph topology for AVA


 


There are multiple ways to build your own custom graph topology based on the use cases and application requirements.  Here is how I configured the graph topology for my sample traffic monitoring AI application.


 

 "sources": [
        {
          "@type": "#Microsoft.Media.MediaGraphRtspSource",
          "name": "rtspSource",
          "endpoint": {
            "@type": "#Microsoft.Media.MediaGraphUnsecuredEndpoint",
            "url": "${rtspUrl}",
            "credentials": {
              "@type": "#Microsoft.Media.MediaGraphUsernamePasswordCredentials",
              "username": "${rtspUserName}",
              "password": "${rtspPassword}"
            }
          }
        },
        {
          "@type": "#Microsoft.Media.MediaGraphIoTHubMessageSource",
          "name": "iotMessageSource",
          "hubInputName": "${hubSourceInput}"
        }
      ],
      "processors": [
        {
          "@type": "#Microsoft.Media.MediaGraphSignalGateProcessor",
          "name": "signalGateProcessor",
          "inputs": [
            {
              "nodeName": "iotMessageSource"
            },
            {
              "nodeName": "rtspSource"
            }
          ],
          "activationEvaluationWindow": "PT3S",
          "activationSignalOffset": "-PT1S",
          "minimumActivationTime": "PT3S",
          "maximumActivationTime": "PT4S"
        }
      ],
      "sinks": [  
        {
          "@type": "#Microsoft.Media.MediaGraphFileSink",
          "name": "fileSink",
          "inputs": [
            {
              "nodeName": "signalGateProcessor",
              "outputSelectors": [
                {
                  "property": "mediaType",
                  "operator": "is",
                  "value": "video"
                }
              ]
            }
          ],
          "fileNamePattern": "MP4-StreetViewAssetFromEVR-AVAEdge-${System.DateTime}",
          "maximumSizeMiB":"512",
          "baseDirectoryPath":"/var/media"
        }
      ]
    }
  }

 


 


If you are using a pre-recorded input video file (.mkv or .mp4) instead of live frames from the USB-connected camera module, then update the rtspUrl to grab frames via the RTSPsim module:


 

"name": "rtspUrl",
"value": "rtsp://rtspsim:554/media/inv.mkv"

 


 


I use the following RTSPSim container module provided by Microsoft to stream a pre-recorded video file:


 

mcr.microsoft.com/lva-utilities/rtspsim-live555:1.2

 


 


If you are using live frames from the USB-connected camera, then grab the live rtsp stream from Azure Percept Module:


 

"name": "rtspUrl",
"value": "rtsp://AzurePerceptModule:8554/h264"

 


 


Here is a brief explanation of the media graph topology that I use:



  • There are two source nodes in the graph.

    • The first source node is the RTSP source (it can serve either live video frames from the Percept camera module or pre-recorded video frames via the RTSPsim)

    • The second source node is the IoT message source (this is the output of the objectCounter trigger)



  • There is one processor node, the signal gate processor.  This node takes the IoT message source and RTSP source as inputs; based on the object counter trigger, the signal gate requests the AVA module to create a 5-second video recording of the detected event (-PT1S to +PT4S)

  • There is one sink node, the fileSink.  This could also be an AMS asset sink; however, the AMS asset sink currently has a minimum video clip duration of 30 seconds.  Hence, I used a fileSink to save a 5-second clip and then used an external thread to upload the locally saved .mp4 files to Azure blob storage.  Note that for on-demand live streaming, I use Azure Media Services (AMS).
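The external upload thread mentioned above can be sketched as a simple polling loop. This is a hypothetical sketch: the actual blob upload (e.g. azure-storage-blob's BlobClient.upload_blob) is passed in as a callback so the sketch stays self-contained:

```python
import time
from pathlib import Path

def find_new_clips(media_dir, seen):
    """Return .mp4 clips in media_dir that have not been uploaded yet."""
    return [c for c in sorted(Path(media_dir).glob("*.mp4")) if c.name not in seen]

def upload_loop(media_dir, upload_fn, poll_seconds=5, max_polls=None):
    """Poll media_dir (e.g. /var/media) and hand each new clip to upload_fn."""
    seen = set()
    polls = 0
    while max_polls is None or polls < max_polls:
        for clip in find_new_clips(media_dir, seen):
            upload_fn(clip)  # e.g. upload to Azure blob storage
            seen.add(clip.name)
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_seconds)
    return seen
```

In the real application this would run on a background thread, e.g. threading.Thread(target=upload_loop, args=("/var/media", my_upload_fn), daemon=True).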


 


You can learn more about Azure Media Graphs here:


https://docs.microsoft.com/azure/media-services/live-video-analytics-edge/media-graph-concept


You can learn more about how to configure signal gates for event based video recording here:


https://docs.microsoft.com/azure/media-services/live-video-analytics-edge/configure-signal-gate-how-to


 


Step 9: Dashboard to view events, videos and insights


 


You can use any web app (e.g. react.js based) and create APIs to build a traffic monitoring dashboard that shows real-time detections and video recordings from Azure IoT hub and Azure blob storage. Here is an example of a dashboard:


 


[Screenshot: traffic monitoring dashboard]


 


Here are some examples of what the Azure Percept detected for a few live and pre-recorded videos:


 


[Screenshots: vehicle detections from live and pre-recorded video]


 


In conclusion, in just a few days, I was able to set up a quick Proof of Concept of a sample traffic monitoring AI application using Azure Percept, Azure services and Inseego 5G MiFi ® M2000 mobile hotspot! 


 


Learn more about the Azure Percept at https://azure.microsoft.com/services/azure-percept/


 


Note: The views and opinions expressed in this article are those of the author and do not necessarily reflect an official position of Inseego Corp.


 

Simplify virtual health workflows for your frontline healthcare workers


Healthcare organizations need to provide more avenues of care for patients, but scheduling and managing virtual visit appointments can be a time-consuming and tedious task for frontline healthcare workers.

The good news: It just got easier. Microsoft Bookings is now part of our Microsoft 365 for frontline workers offer. Bookings is available as an app in Microsoft Teams so frontline healthcare workers can schedule, manage, and conduct virtual appointments right from Teams—where they’re already working.

That means your frontline teams can save time with a simple, yet powerful tool that eases the hassle of scheduling and is integrated into their workflows. They can have a single hub with the tools they need for streamlining both care team collaboration and virtual health.

That can help your healthcare organization:



  • Quickly and efficiently increase patient access to care with virtual visits​.

  • Improve patient outcomes by simplifying collaboration across care teams.

  • Reduce costs and optimize resources by simplifying fragmented workflows.



See how frontline healthcare workers can streamline care team collaboration and virtual visits with Microsoft Teams in this video.


 


Seamless virtual health experiences for your frontline and your patients


With Bookings available as an app in Teams, your frontline can create calendars, assign staff, schedule new appointments, and conduct virtual visits and provider consults without ever leaving Teams.


Bookings can help reduce and automate repetitive scheduling tasks for frontline care teams. They can view multiple clinicians’ Outlook calendars to find an open slot to schedule a virtual visit. Customized confirmation and reminder emails and texts with a link to join can be automatically sent to patients. And when it’s time for their virtual visit, patients can easily join from a mobile device without needing to install an app.


 


In other words, you can enable seamless virtual health experiences for both your frontline and your patients.


 


Plus, you can earn and keep your patients’ trust with a virtual health platform that can help support your protection of patient data and compliance with healthcare regulations. For example, with Microsoft Teams, a Business Associate Agreement is put in place by default for customer organizations that are considered covered entities or business associates under HIPAA. Learn more about that in our recent whitepaper.

Virtual visits made easy and cost-effective


Our healthcare customers such as St. Luke’s Hospital and Lakeridge Health have enhanced patient care with virtual visits using the Bookings app in Teams and Microsoft 365. Now with Bookings included in Microsoft 365 for frontline workers, we’ve made it even easier and more cost-effective for healthcare organizations of all sizes to enable seamless virtual health.


Get started with Microsoft 365 for frontline workers.

A virtual health platform with many options to meet your needs


In addition to our Microsoft 365 for frontline workers offering, Teams and Microsoft 365 provide many options to help you meet your healthcare organization’s specific virtual health needs. You can expand virtual visit capabilities by connecting Teams to your electronic health record system, customizing experiences in your own apps, integrating into the Microsoft Cloud for Healthcare, and more. Learn more on our Microsoft Teams and healthcare page and Microsoft 365 and healthcare page. You can also get help finding the right fit for your healthcare organization with our virtual visit selection tool.

Announcing Dynamics 365 Marketing customer journey orchestration GA, it’s a game changer


In engaging customers today, we need to be heard above the noise, and we need to deliver creativity and uniqueness to each customer in the moments that matter. Customers are looking for value and for companies to address their needs on their terms. Companies now can start with a deep understanding of their customers and then act on this information to engage them and make them feel like they have a one-to-one relationship with the company. This approach is going from competitive advantage to expectation. The old segment-based approach to marketing automation is no longer adequate. In today’s marketing world, we need event-triggered communications to engage customers with personalized experiences. This extends well beyond traditional marketing practices and permeates deeper into your company beyond marketing teams.

After months of anticipation, we are thrilled to announce that the new real-time customer journey orchestration capabilities in Microsoft Dynamics 365 Marketing are now generally available to all businesses, beginning in select regions and rolling out worldwide in the months ahead.

Dynamics 365 Marketing is transforming the way marketers and business users alike can elevate customer experiences, win customers, and earn their loyalty faster through personalized, real-time, customer-led interactions, all on a unified, adaptable platform, by making every interaction count.

"Creating an enterprise-wide common conversation that meets customers in their exact moment of need is core to a successful growth strategy. This is exactly what makes Customer Journey Orchestration in Dynamics 365 Marketing a compelling and important addition to the Dynamics suite. It is an intuitive solution to one of CX's greatest challenges: learning from signals and responding beyond expectations," Liz Miller, Vice President and Principal Analyst of Constellation Research.

And Dynamics 365 Marketing experiences get even better as you also use Microsoft’s customer data platform, Dynamics 365 Customer Insights. Dynamics 365 Customer Insights makes it easy to unify customer data, augment profiles, and identify high-value customer segments. You can use profile and segment data in Dynamics 365 Customer Insights to fine-tune your targeting and further refine your journeys to drive meaningful interactions.

We’ve partnered with Forrester Vice President and Principal Analyst, Rusty Warner, to create a webinar titled Customer Journey Orchestration: Make Every Interaction Count to frame this modern marketing landscape. It’s a must-see overview that lays the foundation to understanding changing customer expectations and how marketing technology must evolve to meet these needs.

We have several customers that are already using these capabilities to elevate experiences for their customers and transform their businesses. Here are some of their stories.

Starting with a deep understanding leveraging the power of data and AI

European Forum Alpbach

The European Forum Alpbach (EFA) is a platform that drives ideas for an empowered and democratic Europe. It brings together young people, leading scholars, thinkers, scientists, policymakers, business people, and civil society actors from Europe and from all over the world to engage and contribute to its mission of shaping a stronger Europe. EFA is an incubator of ideas and an initiator of action for the European continent.

Like many organizations, EFA was inhibited by disconnected tools and processes from interacting with members along the customer journeys it had envisioned. By using Dynamics 365 Marketing customer journey orchestration, EFA was able to bridge the gap between its onsite and online customer experiences by pulling together those unconnected systems (i.e. CRM, payments, registrations, etc.) to create a complete end-to-end experience for its members.

“It’s fairly simple with this tool where we can trigger the communication to the user based on the actions they are taking such as processing a payment. We get data from different sources and based on this data, we can trigger actions in real-time. This is something that we couldn’t do before.” Mario Nurderscher, CRM Architect & Development Partner, European Forum Alpbach.

Using real-time event triggers and AI-powered channel optimization, EFA was able to orchestrate and automate ticket purchasing for their members. When a member purchases a ticket, a journey is triggered that orchestrates that member’s experience, including confirmation emails, real-time updates, and offers during the conference, etc. It was easy to create one journey with individual outcomes based on the actions of each member.

Triggering journeys based on events to respond in moments that matter

Leatherman, the originator and category leader in high-quality multi-tools, pocket tools, and knives, sought a solution to meet their growing D2C (Direct to Consumer) communication needs. With the objective to curate a more personalized customer journey and to create user experience continuity with their online store, Leatherman leveraged Dynamics 365 Marketing customer journey orchestration to deliver an end-to-end welcome journey for their new customers.

Leatherman “[the Journey designer] is straightforward and easy to use, with an intuitive workflow layout. We have flexibility to trigger our journeys in multiple ways from our website and our other Dynamics products, and products from other vendors. The journey can also react to customer activities in real-time.”Liz Lee, IT Director, Leatherman Tool Group.

Using customer journey orchestration and Power Automate, the team at Leatherman was able to create a multi-touchpoint journey which allowed them to engage their customers across their commerce and marketing solutions using real-time custom events. This journey was executed every time a customer signed up or started to check out on their website. This allowed Leatherman to seamlessly activate new customers and to create opportunities for continued engagement along the way.

Hyper personalizing communications at scale to drive better outcomes

"When I first heard about CJO, I immediately thought about the potential efficiencies the product would offer. To say that I was exhilarated would be an understatement. CJO in Dynamics 365 has allowed me to build significantly fewer segments and has automated deployments that used to take my team and I hours to do. Instead now, I get to spend that time focusing on what matters: mentoring volunteer leaders and getting our volunteers into the field." Dustin Lynch, Team Rubicon Sr. Associate, Operations Support Mobilization.

Team Rubicon is a veteran-led disaster response organization that serves communities before, during, and after disasters and humanitarian crises by mobilizing their network of nearly 150,000 volunteers, who largely consist of first responders and veterans. In addition to responding to natural disasters like hurricanes, floods, and tornadoes, Team Rubicon has been leading vaccination efforts to help combat the COVID-19 crisis.

The impact that nonprofit Team Rubicon has made in hundreds of disaster relief operations is life-changing for the people and communities they serve. The Microsoft stack and Dynamics 365 are at the core of their operations, and over the past three years, they have continued to innovate tech to support, scale, and drive their missions. Their latest tech innovation comes from the work they’re doing with Dynamics 365 Marketing customer journey orchestration.

"The ability to communicate with our volunteers based on detected changes in our data and based on their actions in real time provides a huge boost to our volunteer experience. Having these notifications orchestrated reduces staffing requirements and allows our mobilization and communications teams to focus on more important tasks." Ed Glover, Team Rubicon Technology Analyst.

By using real-time customer journey orchestration, Team Rubicon can not only re-confirm deployments, they can keep volunteers up to date with changing details such as meeting location, date, or time updates. With CJO, they were able to reduce the overall number of segments maintained by 35 percent and reduce the human effort to build, send, and monitor these notifications by as much as 8 hours per week. Team Rubicon is excited to expand the use of CJO to cover additional elements of their overall communications strategy.

Orchestrating journeys that go beyond marketing to deliver a congruous experience to customers

One of the existing Dynamics 365 customers that have been evaluating and using real-time customer journey orchestration since its preview release is HP. The team at HP clearly knows that customer care goes beyond support. It’s about connecting the various customer interactions into a single, holistic experience that delights customers.

To achieve this, HP wants to provide both enterprise customers and home consumers with exceptional customer service. The company is working to enhance its support experience with Dynamics 365 Marketing customer journey orchestration. By providing a more personalized experience, HP keeps the customer experience positive and builds lifetime loyalty, while also reducing support call times and lowering operational costs.

"With customer-led journey orchestration in Dynamics 365 Marketing, our agents can gain a real-time, 360-degree view of each customer and their support experience." Jibu Mathew, Project Manager, HP Inc.

With end-to-end journey orchestration, HP’s goal is to keep each customer informed throughout the customer support lifecycle. This is achieved by delivering a highly personalized and context-aware experience using multiple channels including SMS and email. As the customer’s support case proceeds towards resolution, the customer plus the case data are orchestrated in a unified, end-to-end journey. By bringing the customer and case data together, HP is able to deliver a consistent and highly integrated experience to their customers.

These are just a few examples of how businesses are evaluating and implementing solutions based on our new customer journey orchestration capabilities, and how we are empowering customer experience-focused organizations to unleash the power of data and AI to deeply personalize customer communications so that every interaction counts in the moments that matter. When you build on our trusted and adaptable platform, the creativity and experience design possibilities are limitless, and you can start using them today.

Learn more and get started

Join me over the next few weeks for a three-part blog series that will give us a deeper look into how businesses are already using customer journey orchestration and driving business value.

Until then, to learn more about how your organization can elevate your customer experiences, visit the Dynamics 365 Marketing webpage and sign up for a free trial to explore real-time customer journey orchestration and the other rich capabilities offered in Dynamics 365 Marketing.

The post Announcing Dynamics 365 Marketing customer journey orchestration GA, it’s a game changer appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Use external table on Azure SQL Managed Instance to read data from Azure SQL Database


Looking for an online way to consume data from Azure SQL Database on Managed Instance?


 


One of the ways to consume data online from Azure SQL Database on Managed Instance is to link the two by creating an external table, using CREATE EXTERNAL DATA SOURCE (Transact-SQL). The external table is created on Managed Instance and, with appropriate credentials, can be used to query data from a matching table on SQL Database.


 


[Diagram: external table on Managed Instance reading from Azure SQL Database]


 


This type of setup could be useful to applications that need to consume both Managed Instance and SQL Database data, or perhaps for an online migration of data from SQL Database to Managed Instance. This article explains how to set this up.


 


Networking requirements


 


As the first step, connectivity must exist between Azure SQL Database and Managed Instance. The easiest option is to configure the public endpoint on SQL Database and allow inbound traffic on port 1433 from Managed Instance. This will allow Managed Instance to log in and query data from SQL Database. Alternatively, if you would like to use a private endpoint for SQL Database, the easiest option would be to deploy the private endpoint in the same VNet as Managed Instance. In this case as well, do not forget to allow inbound traffic to SQL Database on port 1433 from the Managed Instance subnet. Please note that detailed instructions on setting up the networking are beyond the scope of this article.


 


You’d need either of these options set up to meet the networking requirement:


 



  • Option 1 – public endpoint connection

    • Ensure that public endpoint access is working on SQL Database using a database client (such as SSMS)

    • If needed, open an NSG rule on the SQL Database side to allow inbound traffic on port 1433




 



  • Option 2 – private endpoint connection

    • Create an additional subnet in Managed Instance VNet

    • Deploy SQL Database private endpoint to the new subnet created inside Managed Instance VNet (see this tutorial for details)




If you have a VM inside the same VNet as Managed Instance, you can use it to test the connectivity to SQL Database. For example, use SSMS from a VM inside the Managed Instance VNet to connect to SQL Database. If this works, then Managed Instance will also be able to successfully connect to SQL Database to read data.
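If you prefer to script the check, a plain TCP probe of port 1433 from a machine inside the Managed Instance VNet gives a quick yes/no answer. A minimal Python sketch; the server name in the usage comment is a placeholder for your own SQL Database server:

```python
import socket

def can_reach(host, port=1433, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder server name):
# can_reach("yourserver.database.windows.net", 1433)
```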


 


Prepare table to use Azure SQL Database


 


First, decide which table on SQL Database you would like to expose to Managed Instance. This could be an existing table, or you could create a new one.


 


The sample code below creates a new table AzureDBtable on an existing SQL Database and inserts the current time stamp. Later, we will read this time stamp back from Managed Instance.


 

-- Execute on Azure SQL Database
-- Create table and insert some sample data
CREATE TABLE AzureDBtable (TimeTable datetime2);
INSERT INTO AzureDBtable VALUES (getutcdate());
SELECT @@servername AS [Server name], * FROM AzureDBtable;

 


 


The screenshot below, taken in SSMS, shows the table created on SQL Database.


DanimirLjepava_2-1627748654560.png


For the purpose of this article, we have inserted the following time stamp into the table on SQL Database.


DanimirLjepava_0-1627747601719.png


 


Create authentication credentials on Managed Instance


 


Create authentication credentials for SQL Database on Managed Instance. These allow Managed Instance to log in to SQL Database and query data. Use an existing database on Managed Instance, for which you first need to create a master key. Then, create database scoped credentials for SQL Database in the context of this database.


 


Below is a sample script; modify it with the name of your database, a new master key password, and the credentials used to log in to SQL Database.


 

-- Execute on Managed Instance
-- Create auth credentials to SQL DB for a database on Managed Instance

-- Use an existing database on Managed Instance
USE [databasename-on-mi];
-- Create master key for database on Managed Instance
CREATE MASTER KEY ENCRYPTION BY PASSWORD='password';
-- Enter credentials to log in to Azure SQL Database
CREATE DATABASE SCOPED CREDENTIAL AppCredential WITH IDENTITY = 'username', SECRET = 'password';

 


 


Create external data source on Managed Instance


 


The next step is to create an external data source on Managed Instance. This tells Managed Instance which SQL Database to connect to. Modify the sample script below with the name of the database on Managed Instance, the connection URL (FQDN) of the SQL Database server, and the SQL Database name. Think of it as providing Managed Instance with the connection parameters for SQL Database. You would use similar parameters if you were connecting to SQL Database from another client, such as SSMS.


 

-- Execute on Managed Instance, in the context of the database used
-- Create external table on Managed Instance, link it with Azure SQL Database
CREATE EXTERNAL DATA SOURCE RemoteReferenceData
WITH
(
    TYPE = RDBMS,
    LOCATION = 'tcp:server-name.database.windows.net', -- insert SQL Database server FQDN
    DATABASE_NAME = 'sqldb-databasename',              -- insert SQL Database name
    CREDENTIAL = AppCredential
);

 


 


Create external table on Managed Instance


 


The final step is to create an external table in the database on Managed Instance, linked with the table on SQL Database. The script below connects to the AzureDBtable created in the first step of this article. It is very important that the data types you define for the external table match the data types of the source table on SQL Database; if they do not match, querying the data will fail with an error. This also means that if the table structure on SQL Database changes, you will need to re-create the external table on Managed Instance to match the new structure.


 

-- Execute on Managed Instance, in the context of the database used
-- Connect external table on Managed Instance with Azure SQL Database
CREATE EXTERNAL TABLE AzureDBtable
(TimeTable datetime2) -- must ensure data structure matches the one on SQL Database
WITH
(
DATA_SOURCE = RemoteReferenceData
);

 


The screenshot below shows, in SSMS, that an external table linked to SQL Database has been successfully created on Managed Instance.


DanimirLjepava_1-1627748498463.png


 


Test querying SQL Database data from Managed Instance


 


In the first step, we inserted a time stamp into the table on Azure SQL Database. Now that the external table exists on Managed Instance, let's see if we can query it. Execute this query on Managed Instance.


 

-- Execute on Managed Instance, in the context of the database used
-- Read data from the external table on Managed Instance. This should pull the table data from Azure SQL Database
SELECT @@servername AS [Server name], * FROM AzureDBtable

 


The result shows that the query was executed on Managed Instance, and that the time stamp matches the one inserted into the table on SQL Database. This demonstrates that, with an external data source, we were able to query SQL Database data from Managed Instance, online and in real time.


DanimirLjepava_0-1627749062463.png


 


Closing remarks


 


An external table on Managed Instance allows you to read data from SQL Database, but not to insert data into it. It is also possible to set up the external table the other way around: an external table on SQL Database that reads data from a table on Managed Instance. See this article for details.
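As a hedged sketch of that reverse direction, assuming the same RDBMS external data source mechanism and using placeholder names throughout (the linked article is the authoritative reference; the Managed Instance public endpoint FQDN and port shown are illustrative):

```sql
-- Sketch: execute on Azure SQL Database (all names below are placeholders)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password';

-- Credentials that can log in to Managed Instance
CREATE DATABASE SCOPED CREDENTIAL MICredential
    WITH IDENTITY = 'username', SECRET = 'password';

-- Point the data source at the Managed Instance public endpoint
CREATE EXTERNAL DATA SOURCE RemoteMIData
WITH
(
    TYPE = RDBMS,
    LOCATION = 'mi-name.public.dns-zone.database.windows.net,3342', -- placeholder FQDN
    DATABASE_NAME = 'databasename-on-mi',
    CREDENTIAL = MICredential
);

-- External table matching the source table's structure on Managed Instance
CREATE EXTERNAL TABLE MITable (TimeTable datetime2)
WITH (DATA_SOURCE = RemoteMIData);
```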


 


If you find this article useful, please like it on this page and share through social media.


 


To share this article, you can use the Share button below, or this short link: http://aka.ms/mi-sqldb-externaltable


 

Apply Adaptive Network Hardening across multiple Subscriptions


This article is contributed. See the original author and article here.


Applying network security groups (NSG) to filter traffic to and from resources, improves your network security posture. However, there can still be some cases in which the actual traffic flowing through the NSG is a subset of the NSG rules defined. Adaptive network hardening provides recommendations to further harden the NSG rules. It uses a machine learning algorithm that factors in actual traffic, known trusted configuration, threat intelligence, and other indicators of compromise, and then provides recommendations to allow traffic only from specific IP/port tuples.

For example, let’s say the existing NSG rule is to allow traffic from 100.xx.xx.10/24 on port 8081. Based on traffic analysis, adaptive network hardening might recommend narrowing the range to allow traffic from 100.xx.xx.10/29 and deny all other traffic to that port.

Adaptive network hardening recommendations are only supported on the following specific ports (for both UDP and TCP): 13, 17, 19, 22, 23, 53, 69, 81, 111, 119, 123, 135, 137, 138, 139, 161, 162, 389, 445, 512, 514, 593, 636, 873, 1433, 1434, 1900, 2049, 2301, 2323, 2381, 3268, 3306, 3389, 4333, 5353, 5432, 5555, 5800, 5900, 5985, 5986, 6379, 7000, 7001, 7199, 8081, 8089, 8545, 9042, 9160, 9300, 11211, 16379, 26379, 27017, 37215

 


Prerequisites:

  • The Az PowerShell modules must be installed

  • The service principal created in Step 1 must have Contributor access to all subscriptions

 



Steps to follow:


Step 1: Create a service principal



After creating the service principal, retrieve the following values:


  1. Tenant Id

  2. Client Secret

  3. Client Id
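
As a hedged sketch, the service principal can be created with the Az PowerShell module, which also surfaces the three values above. The display name and subscription ID are placeholders, and the exact property holding the secret varies by Az version:

```powershell
# Sketch using the Az module (placeholder names; verify property names for your Az version)
$sp = New-AzADServicePrincipal -DisplayName "adaptive-hardening-sp"

# The service principal needs Contributor access on every subscription it will harden
New-AzRoleAssignment -ApplicationId $sp.AppId -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>"

$TenantId = (Get-AzContext).Tenant.Id   # 1. Tenant Id
$ClientId = $sp.AppId                   # 3. Client Id
# 2. Client Secret: recent Az versions expose it via $sp.PasswordCredentials.SecretText
```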



Step 2: Create a PowerShell function to generate an authorization token




function Get-apiHeader{
[CmdletBinding()]
Param
(
 [Parameter(Mandatory=$true)]
 [System.String]
 [ValidateNotNullOrEmpty()]
 $TENANTID,
 [Parameter(Mandatory=$true)]
 [System.String]
 [ValidateNotNullOrEmpty()]
 $ClientId,
 [Parameter(Mandatory=$true)]
 [System.String]
 [ValidateNotNullOrEmpty()]
 $PasswordClient,
 [Parameter(Mandatory=$true)]
 [System.String]
 [ValidateNotNullOrEmpty()]
 $resource
)
$tokenresult = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$TENANTID/oauth2/token?api-version=1.0" -Method Post -Body @{"grant_type" = "client_credentials"; "resource" = "https://$resource/"; "client_id" = "$ClientId"; "client_secret" = "$PasswordClient" }
$token=$tokenresult.access_token
$Header=@{
  'Authorization'="Bearer $token"
  'Host'="$resource"
  'Content-Type'='application/json'
  }
return $Header
}



 


Step 3: Invoke the API to retrieve an authorization token using the function created in the step above



Note: Replace $TenantId, $ClientId and $ClientSecret with the values captured in Step 1

$AzureApiheaders = Get-apiHeader -TENANTID $TenantId -ClientId $ClientId -PasswordClient $ClientSecret -resource "management.azure.com"




 


Step 4: Extract a CSV file containing the list of all adaptive network hardening suggestions from Azure Resource Graph


Please refer to https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/governance/resource-graph/first-que…


Azure Resource Graph explorer: https://docs.microsoft.com/en-us/azure/governance/resource-graph/overview


Query:

securityresources
        | where type == "microsoft.security/assessments"
        | extend source = tostring(properties.resourceDetails.Source)
        | extend resourceId =
            trim(" ", tolower(tostring(case(source =~ "azure", properties.resourceDetails.Id,
                                            source =~ "aws", properties.resourceDetails.AzureResourceId,
                                            source =~ "gcp", properties.resourceDetails.AzureResourceId,
                                            extract("^(.+)/providers/Microsoft.Security/assessments/.+$",1,id)))))
        | extend status = trim(" ", tostring(properties.status.code))
        | extend cause = trim(" ", tostring(properties.status.cause))
        | extend assessmentKey = tostring(name)
        | where assessmentKey == "f9f0eed0-f143-47bf-b856-671ea2eeed62"


data1.JPG

 

Click “Download as CSV” and store the file in the same folder as the adaptive network hardening script. Rename the file to “adaptivehardeningextract.csv”.

 

Set-Location $PSScriptRoot
$RootFolder = Split-Path $MyInvocation.MyCommand.Path
$ParameterCSVPath = Join-Path $RootFolder "adaptivehardeningextract.csv"
if(Test-Path -Path $ParameterCSVPath)                                                                          
  { 
  $TableData = Import-Csv $ParameterCSVPath
  }

foreach($Data in $TableData)
{
  $resourceid=$Data.resourceid
  $resourceURL="https://management.azure.com$resourceid/providers/Microsoft.Security/adaptiveNetworkHardenings/default?api-version=2020-01-01"
  $resourcedetails=(Invoke-RestMethod  -Uri $resourceURL -Headers $AzureApiheaders -Method GET)
  $resourceDetailjson = $resourcedetails.properties.rules | ConvertTo-Json
  $nsg = $resourcedetails.properties.effectiveNetworkSecurityGroups.networksecuritygroups | ConvertTo-Json
  if($resourceDetailjson -ne $null)
  {         
    # Note: the closing "@ of a here-string must start at the beginning of the line
    $body = @"
    {
      "rules": [$resourceDetailjson] ,
      "networkSecurityGroups": [$nsg] 
    }
"@
    $enforceresourceURL = "https://management.azure.com$resourceid/providers/Microsoft.Security/adaptiveNetworkHardenings/default/enforce?api-version=2020-01-01"
    $Enforcedetails=(Invoke-RestMethod  -Uri $enforceresourceURL -Headers $AzureApiheaders -Method POST -Body $body)
  }             
}