Automate Incident Assignment with Shifts for Teams

Azure Sentinel Incidents contain detection details that enable security analysts to investigate using a graph view and gain deep insights into related entities. The responsiveness of a security analyst towards triggered incidents (also known as Mean Time To Acknowledge – MTTA) is crucial: responding to a security incident quickly and efficiently reduces its impact and mitigates the security threat.


 


The newly introduced Automation Rules allow you to automatically assign incidents to an owner with the built-in action. This is extremely useful when you need to assign specific incidents to a dedicated SME. It will reduce the time of acknowledgement and ensure accountability for each incident.


 


However, some organizations have a group of analysts working on different shift schedules and require the ability to assign an incident to an analyst automatically based on the working schedule to improve the MTTA.


 


In this blog, I will discuss how to extend the incident assignment capability in Azure Sentinel by using a Playbook to rotate user assignments based on shift schedules. Plus, I will also discuss how you could manage incident assignments for multiple support groups at the end of the blog.


 


 


Considerations and design decisions


 


Before we dive into the Playbook, let’s discuss some of the important points taken into consideration and the design decisions when implementing this incident assignment Playbook.


 



  • Scheduling tool

    • Shifts for Teams is used as the scheduling tool because it is available as part of Microsoft Teams and provides the ability to create and manage employee schedules.

    • It is easier to automate incident assignment when there is a centralized schedule management tool to keep track of employees’ timesheets and availability.




 



  • Assignment criteria

    • The goal is to assign incidents equally across all analysts. Hence, the analyst with the least number of incidents in the current shift will be assigned first.

    • We also need to consider the average time a security analyst takes to resolve a security incident (also known as Mean Time To Resolve – MTTR). In this Playbook, I have set a default value of 1 hour as the MTTR (a configurable variable) and use it as a condition: a security analyst must have at least 1 hour remaining in the shift to be eligible for incident assignment. For example, if a security analyst is about to go off shift in 30 minutes, the incident won’t be assigned to that analyst as the remaining time is less than the default value of 1 hour. (A sketch of this selection logic follows this considerations list.)




 



  • Notification

    • It is important to notify the assignee when an incident is being assigned.

    • In this Playbook, an email will be sent to the assignee and a comment will be added to the incident upon assignment.
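To make the assignment criteria above concrete, here is a minimal C# sketch of the selection logic, assuming the analysts’ shift end times and per-shift incident counts have already been gathered. The actual Playbook implements this as Logic App actions, not C#; this is only an illustration:

using System;
using System.Collections.Generic;
using System.Linq;

public record Analyst(string ObjectId, DateTime ShiftEnd, int IncidentsThisShift);

public static class Assignment
{
    // Pick the analyst with the fewest incidents this shift who still has at
    // least ExpectedWorkHoursPerIncident hours left; ties fall back to the
    // ordering of the AAD objectId.
    public static Analyst PickOwner(IEnumerable<Analyst> onShift, DateTime now,
                                    double expectedWorkHoursPerIncident = 1.0) =>
        onShift
            .Where(a => (a.ShiftEnd - now).TotalHours >= expectedWorkHoursPerIncident)
            .OrderBy(a => a.IncidentsThisShift)
            .ThenBy(a => a.ObjectId, StringComparer.Ordinal)
            .FirstOrDefault();
}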




 


 


What is Shifts for Teams?


 


Shifts is a schedule management application in Microsoft Teams that helps you create, update, and manage schedules for your team. Shifts is enabled by default for all Teams users in your organization. You can add the Shifts app to your Teams menu by clicking the ellipsis (…) and selecting Shifts from the app list.


 




 


The first step to get started in Shifts is to populate schedules for your team. You can either create a schedule from scratch (for yourself or on behalf of your team members) or import an existing one from Excel. In terms of permissions, you need to be an Owner of the team to create the schedule. The schedules will not be visible to your team members until you publish them by clicking the “Share with team” button.


 


Here is an example of what a Shifts schedule looks like. If you’re an owner of multiple teams, you can toggle between different Shifts schedules to manage them.


 




 


 


The Logic App


 


Download link:


 


Here is the link to the Logic App template.


 


 


Prerequisites:


 


1. User account or Service Principal with Azure Sentinel Responder role


– Create or use an existing user account or Service Principal or Managed Identity with Azure Sentinel Responder role.


– The account will be used in Azure Sentinel connectors (Incident Trigger, Update incident and Add comment to incident) and an HTTP connector.


– This blog will walk you through using System Managed Identity for the above connectors.


 


2. Setup Shifts schedule


– You must have the Shifts schedule setup in Microsoft Teams.


– The Shifts schedule must be published (Shared with team).


 


3. User account with Owner role in Microsoft Teams


– Create or use an existing user account with Owner role in a Team.


– The user account will be used in the Shifts connector (List all shifts); see the sketch after this prerequisites list for what that connector retrieves.


 


4. User account or Service Principal with Log Analytics Reader role


– Create or use an existing user account with Log Analytics Reader role on the Azure Sentinel workspace.


– The user account will be used in Azure Monitor Logs connector (Run query and list results).


 


5. An O365 account to be used to send email notifications.


– The user account will be used in O365 connector (Send an email).
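For reference, here is a hedged sketch of what the List all shifts connector does behind the scenes: it calls the Microsoft Graph Shifts API for the team’s schedule. The token acquisition and team id are assumptions for illustration; the Playbook itself uses the built-in Shifts connector rather than raw HTTP:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class ShiftsApi
{
    // Lists the shifts published on a team's schedule via Microsoft Graph.
    public static async Task<string> ListShiftsAsync(string accessToken, string teamId)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        return await http.GetStringAsync(
            $"https://graph.microsoft.com/v1.0/teams/{teamId}/schedule/shifts");
    }
}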


 


 


Post Deployment Configuration:


 


1. Enable Managed Identity and configure role assignment.


 


a) Once the Playbook is deployed, navigate to the resource blade and click on Identity under Settings.

b) Select On under the System assigned tab. Click Save and select Yes when prompted.

c) Click on Azure role assignments to assign a role to the Managed Identity.

d) Click on + Add role assignment.

e) Select Resource group under Scope and select the Subscription and Resource group where the Azure Sentinel Workspace is located.
    (Note: it’s the subscription and resource group of the Azure Sentinel workspace, not the Logic App.)

f) Select Azure Sentinel Responder under Role and click Save.


2. Configure connections.


 


a) Edit the Logic App and find the connectors below that are marked with an invalid-connection icon:
    – When Azure Sentinel incident creation rule was triggered.
    – List all shifts.
    – Run query and list results – Get user with low assignment.
    – Update incident.
    – Add comment to incident.
    – Send an email.

b) We will leverage the Managed Identity we configured in step 1 for the following Azure Sentinel connectors (hint: these are the ones with the Azure Sentinel logo):
    – When Azure Sentinel incident creation rule was triggered.
    – Update incident.
    – Add comment to incident.

    i) On the first connector (trigger), select Add new.

    ii) Click “Connect with managed identity”.

    iii) Specify the connection name and click Create.

    iv) On the remaining Azure Sentinel connectors, select the connection you created earlier.

c) Next, fix the remaining connectors below by adding a new connection to each connector and signing in with the accounts described under the prerequisites:
    – List all shifts.
    – Run query and list results – Get user with low assignment.
    – Send an email.


 


 


3. Select the Shifts schedule


 


a) On the List all shifts connector, click the X next to the Team field to make the drop-down list appear.


 




 


b) Select the Teams channel with your Shifts schedule from the drop-down list.


 




 


c) Save the Logic App once you have completed the above steps.


 


 


Assign the Playbook to Analytic Rules using Automation Rules


 


1) Before you begin, ensure you have the following permissions:


    – Logic App Contributor on the Playbook.


    – Owner permission on the Playbook’s resource group (to grant Azure Sentinel permission to the playbooks’ resource groups).


 


2) Next, create an Automation Rule to assign the Playbook to your analytic rules with your specified conditions.


 


3) In the example below, I am creating an Automation Rule to run the incident assignment Playbook for selected Analytic rules where the severity equals “High” or “Medium”.


 




 


4) Under Actions, select Run Playbook and choose the Playbook.


 


    Note: If the Playbook appears greyed out in the drop-down list, it means Azure Sentinel doesn’t have permission to run this Playbook.


 




You can grant permission on the spot by selecting the Manage playbook permissions link and granting permission to the playbooks’ resource groups.


 




 


5) After that, you will be able to select the Playbook. Click Apply.


 




 


Note: If you receive the error message “Caller is missing required Playbook triggering permissions” when saving the Automation Rule, it means you do not have the “Logic App Contributor” permission on the Playbook.


 


 


Incident Assignment Logic


 


1) When an incident is generated, it triggers the Logic App to get a list of analysts who are on shift at that time (analysts with time off are excluded from the incident assignment).


 


2) The analyst with the fewest incidents assigned in the current shift will be assigned first. When multiple analysts have the same incident count, the selection is based on the order of the analysts’ AAD objectIds (see the query sketch after the sample assignment table below).


 


3) Analysts must have at least 1 hour left (default value) in their shift to be eligible for assignment.

    For example, if an analyst’s shift ends at 6pm, that analyst will not be assigned incidents between 5pm and 6pm.

    You can change the value of the “ExpectedWorkHoursPerIncident” variable to 0 if you want analysts to be assigned during their final shift hour.


 




 


4) Here is a sample assignment flow for your reference:


 


    In this example, the following shift schedules have been configured for 4 analysts:

    User Object Id | Shift Schedule
    A1 | 8am to 6pm
    A2 | 8am to 6pm
    A3 | 4pm to 2am
    A4 | 4pm to 2am


Here is how the incident assignment would work based on the incident assignment logic:

Incident Creation Time | Assign to | Total
8:00am | A1 | A1=1, A2=0, A3=0, A4=0
9:45am | A2 | A1=1, A2=1, A3=0, A4=0
2:00pm | A1 | A1=2, A2=1, A3=0, A4=0
4:00pm | A3 | A1=2, A2=1, A3=1, A4=0
4:10pm | A4 | A1=2, A2=1, A3=1, A4=1
5:00pm | A3* | A1=2, A2=1, A3=2, A4=1
5:50pm | A4* | A1=2, A2=1, A3=2, A4=2
11:20pm | A3 | A1=2, A2=1, A3=3, A4=2

*At 5:00pm and 5:50pm, A3 and A4 are assigned instead of A2 because ExpectedWorkHoursPerIncident is set to 1 and A2’s shift ends at 6pm.
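To show how a per-analyst incident count like the one above might be computed, here is a hedged C# sketch using the Azure Monitor Query SDK against the SecurityIncident table. The KQL and the workspace id are illustrative assumptions, not the Playbook’s exact “Run query and list results – Get user with low assignment” query:

using System;
using Azure.Identity;
using Azure.Monitor.Query;

var client = new LogsQueryClient(new DefaultAzureCredential());

// Count incidents per owner (illustrative KQL, latest state per incident).
string kql = @"
SecurityIncident
| summarize arg_max(TimeGenerated, Owner) by IncidentNumber
| extend OwnerId = tostring(Owner.objectId)
| summarize IncidentCount = count() by OwnerId
| order by IncidentCount asc";

var response = await client.QueryWorkspaceAsync(
    "<log-analytics-workspace-id>", kql, new QueryTimeRange(TimeSpan.FromHours(10)));

foreach (var row in response.Value.Table.Rows)
    Console.WriteLine($"{row.GetString("OwnerId")}: {row.GetInt64("IncidentCount")}");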



 


 


 


Notification


 


Email Notification:


 



  1. When an incident is assigned, the incident owner will be notified via email.

  2. The email body has a direct link to the incident page and a banner with a color mapped to the incident’s severity (High=red, Medium=orange, Low=yellow and Informational=grey); a small mapping sketch follows.


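As a minimal sketch of that severity-to-color mapping (the hex values are assumptions for illustration, not necessarily the Playbook’s exact ones):

public static class SeverityBanner
{
    // Map an Azure Sentinel incident severity to a banner color.
    public static string ToColor(string severity) => severity switch
    {
        "High" => "#FF0000",          // red
        "Medium" => "#FFA500",        // orange
        "Low" => "#FFFF00",           // yellow
        "Informational" => "#808080", // grey
        _ => "#808080"
    };
}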


 


 


Incident Comment:


 



  A comment, including the name of the Playbook, will be added to the incident on assignment.




 


 


 


 


Managing Incident assignment for multiple Support Groups


 


There are times when you need to assign incidents based on different incident types and support groups. For example, Team A is responsible for Azure AD incidents, Team B is responsible for Office 365 incidents while the rest of the incidents will go to Team C.



This can be achieved by creating a Shifts schedule for each support group and deploying a separate Playbook for each group. Then, assign the Logic App to the analytic rules accordingly, as illustrated in the diagram below:


 




 


 


Below are the sample Automation Rules created for multiple Shifts channels (Support Groups).


 




 


Each Automation Rule is configured for a different Team:


 




Automation Rule for Team A 


 


 


 




Automation Rule for Team B


 


 


Summary


 


I hope you find this useful. Give it a try, and hopefully it helps reduce the time to acknowledgement (especially for critical incidents) in your environment.


 


 


Special thanks to @liortamir, @Yaniv Shasha, @edilahav and @Ofer_Shezaf for the review.

Azure Sentinel Side-by-Side with Splunk via EventHub

As highlighted in my last blog posts (for Splunk and Qradar) about Azure Sentinel’s Side-by-Side approach with 3rd Party SIEM, there are some reasons that enterprises leverage Side-by-Side architecture to take advantage of Azure Sentinel capabilities.


 


For my last blog post I used the Microsoft Graph Security API Add-On for Splunk for Side-by-Side with Splunk. Another option is to implement a Side-by-Side architecture with Azure Event Hub. Azure Event Hubs is a big data streaming platform and event ingestion service that can receive and process millions of events per second. Data sent to an Azure Event Hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.


 


This blog describes the usage of the Splunk Add-on for Microsoft Cloud Services app in a Side-by-Side architecture with Azure Sentinel.


 


For the integration, an Azure Logic App will be used to stream Azure Sentinel Incidents to Azure Event Hub. From there, Azure Sentinel Incidents can be ingested into Splunk.


 


Let’s go with the configuration!


 


Preparation


The following tasks describe the necessary preparation and configuration steps.



  • Onboard Azure Sentinel

  • Register an application in Azure AD

  • Create an Azure Event Hub Namespace

  • Prepare Azure Sentinel to forward Incidents to Event Hub

  • Configure Splunk to consume Azure Sentinel Incidents from Azure Event Hub

  • Using Azure Sentinel Incidents in Splunk


 


Onboarding Azure Sentinel


Onboarding Azure Sentinel is not part of this blog post. However, required guidance can be found here.


 


Register an Application in Azure AD


The Azure AD app is required later as the service principal for the Splunk Add-on for Microsoft Cloud Services app.


 


To register an app in Azure AD, open the Azure Portal and navigate to Azure Active Directory > App Registrations > New Registration. Fill in the Name and click Register.


 




 


Click Certificates & secrets to create a secret for the service principal. Click New client secret and make note of the secret value.


 




 


For the configuration of Splunk Add-on for Microsoft Cloud Services app, make a note of following settings:



  • The Azure AD Display Name

  • The Azure AD Application ID

  • The Azure AD Application Secret

  • The Tenant ID


 


Create an Azure Event Hub Namespace


As a next step, create an Azure Event Hub Namespace. You can use an existing one; however, for this blog post I decided to create a new one.


 


To create an Azure Event Hub Namespace, open the Azure Portal and navigate to Event Hubs > New. Define a Name for the Namespace, select the Pricing Tier and Throughput Units, and click Review + create.


 




 


Review the configuration and click Create.


 




 


Once the Azure Event Hub Namespace is created click Go to resource to follow the next steps.


 




 


Click Event Hubs, then + Event Hub, to create an Azure Event Hub within the Azure Event Hub Namespace.


 




 


Define a Name for the Azure Event Hub, configure the Partition Count and Message Retention, and click Create.


 




 


Navigate to Access control (IAM) and click Role assignments. Click + Add to add the Azure AD service principal created before, assign it the Azure Event Hubs Data Receiver role, and click Save.


 




 


For the configuration of Splunk Add-on for Microsoft Cloud Services app, make a note of following settings:



  • The Azure Event Hub Namespace Host Name

  • The Azure Event Hub Name


 


Prepare Azure Sentinel to forward Incidents to Event Hub


To forward Azure Sentinel Incidents to Azure Event Hub, you first need to configure an Azure Logic App, and then an Automation Rule in Azure Sentinel to trigger the playbook for any Incident in Azure Sentinel.


 


For my scenario, I configured an Azure Logic App as shown below:


 




 


Start with the Azure Sentinel trigger When Azure Sentinel Incident Creation Rule was Triggered. Parse the output for later usage. For the Azure Event Hub connection, first define the connection to the Event Hub and select the Event Hub name. Define a JSON payload as the content to send selected fields from an Azure Sentinel Incident to Azure Event Hub. In my case, I want to forward the fields Title, Severity, ProviderName and IncidentURL to Azure Event Hub.


 


You can also use the full Body from the Parse JSON output to forward all attributes of an Azure Sentinel Incident.
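For reference, here is a hedged C# sketch of what that Logic App action amounts to: sending the selected incident fields to the Event Hub as one JSON event. The connection string, hub name and field values are placeholders for illustration:

using System;
using System.Text.Json;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

await using var producer = new EventHubProducerClient(
    "<event-hub-namespace-connection-string>", "<event-hub-name>");

// The same fields selected in the Logic App step above.
string payload = JsonSerializer.Serialize(new
{
    Title = "Sample incident",
    Severity = "High",
    ProviderName = "Azure Sentinel",
    IncidentUrl = "https://portal.azure.com/..."
});

using EventDataBatch batch = await producer.CreateBatchAsync();
batch.TryAdd(new EventData(BinaryData.FromString(payload)));
await producer.SendAsync(batch);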


 




 


Save the Azure Logic App and navigate to Azure Sentinel > Automation. From here you can create an Automation rule to trigger the Azure Logic App created in the previous step.


 


Click + Create and select Add new rule.


 




 


Define the Automation rule name and the Conditions. As I want to trigger the Azure Logic App for any Analytics rule in Azure Sentinel, I leave the Condition as is – “all” (when “all rules” is selected, you can still choose specific rules to include or exclude). Select Run Playbook as the Action, choose the Azure Logic App created before, and click Apply.


 




 


Once the configuration is completed, you can review the Automation rule on the Automation page.


 


Configure Splunk to consume Azure Sentinel Incidents from Azure Event Hub


 


To ingest Azure Sentinel Incidents forwarded to Azure Event Hub, you need to install the Splunk Add-on for Microsoft Cloud Services app.


 


For the installation, open the Splunk portal and navigate to Apps > Find More Apps. From the dashboard, find the Splunk Add-on for Microsoft Cloud Services app and click Install.


 




 


Once installed, navigate to the Splunk Add-on for Microsoft Cloud Services app > Azure App Account to add the Azure AD service principal, using the details noted in the previous steps. Click Add, define a Name for the Azure App Account, add the Client ID, Client Secret and Tenant ID, and choose Azure Public Cloud as the Account Class Type. Click Update to save and close the configuration.


 




 


Now navigate to Inputs within the Splunk Add-on for Microsoft Cloud Services app and select Azure Event Hub from the Create New Input selection.


 




 


Define a Name for the Azure Event Hub input, select the Azure App Account created before, define the Event Hub Namespace (FQDN) and Event Hub Name, leave the other settings at their defaults, and click Update to save and close the configuration.


 




 


Using Azure Sentinel Incidents in Splunk


 


Once the ingestion is processed, you can query the data by using sourcetype=”mscs:azure:eventhub” in the search field.


 




 


Summary


 


We just walked through the process of how to implement Azure Sentinel in Side-by-Side with Splunk by using the Azure Event Hub.


 


Stay tuned for more use cases in our Blog channel!


 


Thank you for reading.


 


Many thanks to Clive Watson for brainstorming and ideas for the content.

[Guest Blog] Touching Light: Making Music in Mixed Reality

This blog is written by Ian Riley, an inspiring musician, as part of the Humans of Mixed Reality series. He shares his experience in music and technology, which led him to developing music in mixed reality.


 




Touching Light is an original musical work for Percussionist and Mixed Reality Environment that explores the border areas between the physical world that we see around us, and the worlds of infinite possibility that each of us holds in our imagination.


 


 


“A dream we dream together is called reality.”
          – Alex Kipman at the Microsoft Ignite Keynote, 2021

Mixed Reality, fundamentally, asks us to see the world differently, something that is so akin to the ways that as performers, we ask our audiences not just to hear, but to listen. By drawing the attention of those around us to something that we believe to be compelling, and even more when we can share something that we have had a hand in creating, we access a unique moment, a shared imaginative space and, in my experience, this is just the sort of thing that users of Mixed Reality are hoping to find.

“My dad’s a computer programmer.” I usually lead with this as it seems to put folks at ease when they contact me, hoping that there is some ‘secret’ for how I, someone with a doctorate in music, not computer science, learned to work with Mixed Reality. Yet, while his influence has certainly been a continual inspiration to me, it was in fact my mother’s encouragement to pursue training in the arts that positioned me to begin developing Touching Light. Despite its deep connectedness to technology, Touching Light is first and foremost a musical MR application.


 


Music and Technology


It was in pursuit of my master’s degree that I first became deeply interested in music technology. I was fascinated by the sounds that electronic instruments could create, and that curiosity would eventually lead me to perform an all percussion and live electronics final recital during my first graduate degree. This sort of recital was a first for the small college that I was attending and, though I was unaware of this at the time, something that is still uncommon in the world of contemporary percussion. Those experiences would eventually lead me to pursue a DMA in Percussion Performance at West Virginia University with a desire to continue to explore and innovate with percussion and live electronics.


 


When I first started my DMA, I was aware of the work that Microsoft was doing with the HoloLens 1 (introduced in 2016), but it wasn’t until my wife and I moved to Morgantown, West Virginia that I saw the first marketing for the Microsoft HoloLens 2 on February 24th, 2019. I was amazed. Watching it again today still makes me smile, but I guess that’s good marketing for you! As I continued my studies at WVU, I kept thinking about that video, about the HoloLens 2, and about Mixed Reality. What seemed like a pipe dream in February, making music in Mixed Reality, would become a real possibility in my mind in November of that same year.


 


Look toward the future – stop thinking about what is cutting edge right now and to start thinking about the cutting edge of the cutting edge; because that’s where we’re going to need people to do work.
          – Dr. Norman Weinberg, at PASIC 2019

And I knew that the future was Mixed Reality.


 



 Playing vibraphone while using a holographic audio mixer from Touching Light

 


Preparing for HoloLens 2


Sometimes it is the mere fact that you know what you don’t know that can provide the clearest path forward. Soon after the reveal of the HoloLens 2 in early 2019, the first seeds of what would eventually become Touching Light began to take root. At the time, while I had some minimal computer programming experience from high school (Java, and some HTML), since beginning to study music in college I had had little time or reason to engage with the ‘coding’ side of technology apart from some basic formatting for websites.


 


Knowing that the HoloLens 2 would likely run on something like C# or Visual Basic, I began thinking about other ways that I could engage with code-based music technology and would eventually teach myself how to build rudimentary circuits to trigger lighting and audio effects. Concurrent to this work, I also more fully invested myself into learning about audio recording and engineering, recording and editing my own performance videos from recitals and other concerts. Yet for all this experience, I still didn’t know how to program the HoloLens 2.


 


Learning Mixed Reality


When the first news of the global coronavirus pandemic entered the public awareness in the United States, it was met by a mixture of genuine concern, reasonable skepticism, and in some cases, outright dismissal. Living in West Virginia, the scope of the pandemic didn’t really hit home until the University received email correspondence from the university president outlining the realities of campus closures, and the transition to online delivery for the remainder of the semester, as the university endeavored to minimize the risk to the WVU community in the face of uncertain times. Facing what seemed at the time to be indefinite lockdown, I found myself able to do what anyone would do with a sudden abundance of free time… learn how to code for Mixed Reality!


 


Over the course of the next several months, particularly during the summer of 2020, through a series of free tutorials, I learned the basics of 3-D modeling using a program called Blender, a modeling engine that is similar in many ways to the sort of interface I would eventually work with in Unity. Upon ordering a HoloLens 2 from Microsoft in early July, I quickly transitioned to Unity while familiarizing myself with the sorts of gestures and interactions that drive the HoloLens 2 holographic interface.


 


With all the components finally in hand, the work of writing, rehearsing, and performing Touching Light began. The same sorts of interactions that are core to the performative practice of music, and particularly to that of the percussionist (interactions I already employed as a performer), would serve as the conceptual framework from which the three ‘dimensions of translucence’ would be derived. These dimensions (modeled after the three coordinate dimensions in physical space) would serve to ground my creative work in the sorts of real decisions that I already knew how to make because of my work with percussion.


 



Improvising on a marimba in response to a rotating carousel of landscapes 

 


Developing Music in Mixed Reality


I knew that I wanted Touching Light to be mobile. The promise of the HoloLens 2, and Mixed Reality in general, is that there are ‘no strings attached;’ if you wear this device, that is all you need to enter a Mixed Reality environment. I intentionally connected that idea of mobility to the sorts of interactions and environments that the user engages throughout the work. Even Soliloquy, the second movement of Touching Light which features a large carousel of static images, does not extend far beyond the anticipated ‘near-field’ (that which is within reach) that a percussionist will be used to engaging with. Everything in Touching Light, whether virtual or physical follows the design ethos of ‘always being within reach.’


 


The unique opportunity to engage music-making and Mixed Reality is not something that I take lightly; what began as a pipe dream just over a year ago has had a significant impact on the ways that I engage with both music and technology. I was pleasantly surprised to discover that Mixed Reality is a profoundly creative medium, and as such, engages easily with the process of music-making. From the deeply satisfying manipulation of a standing wave through the miniscule gestures of a rotating hand, to the shocking immersion of a massive holographic carousel slowly rotating around you while you perform, there is something much more connective about the spatial interactions presented by MR than the limitations of peripherals like a mouse and keyboard to control those same musical and visual elements.


 



Exploring tuned Thai gongs while manipulating spatialized virtual instruments 


 

Making Music in Mixed Reality (How to Get Started, and Why You Should)


Already, so much of what we do as musicians is, within the context of society at large, a niche endeavor; for the percussionist, these degrees of separation can seem even more severe. But in the same ways that we as artists commit ourselves to the craft of music, and the practice of music-making, engaging with MR has only served to deepen those sorts of commitments for me.


 


For Musicians or (“Performers”)


For those individuals who are interested in the musical side of Mixed Reality, the first step is to get your hands on a platform. Touching Light is obviously designed with the Microsoft HoloLens 2 in mind, but similar functionality is available through any number of other VR headsets. Once you have a platform, you will need to decide what you will perform. If you are working with the Microsoft HoloLens 2, a great place to start is with Touching Light! You can download the complete Unity file package here. Follow the instructions from the Microsoft Mixed Reality Documentation, beginning at “1. Build the Unity Project.” Once you have deployed the application to your HoloLens 2, load up the application, and explore!


 


One of the most profound discoveries that I have made while working with this technology is just how musical it can be. There is something about engaging with technology within the Mixed Reality volume, about ‘spatial computing,’ that seems intuitive and artistic. This simple fact has even more deeply convinced me that music-making in Mixed Reality is not just an interesting possibility, but a deeply meaningful inevitability.


 


For Programmers (or “Composers”)


For those individuals who may be more interested in the nuts-and-bolts of developing musical applications for Mixed Reality, the first step is to familiarize yourself with a compiler. If you are interested in programming for the Microsoft HoloLens 2, the de facto solution at present is the Unity Development Engine, though support for other compilers is becoming increasingly available. You can download the Unity Hub for free from their website, and then following the instructions in the Microsoft Mixed Reality Documentation, beginning at “1. Introduction to the MRTK tutorials,” you can begin to develop your first Mixed Reality application.


 


I would strongly advise that, once you get a handle on the basic functionality of the compiler and complete some of the beginning MRTK tutorials, you take some time to consider what sorts of functionality you would like your application to demonstrate, then connect with the Microsoft MR community (via Slack or the Microsoft MR Tech Community forums) and with others who may be able to answer your questions, and even help you with your project design.


 


Throughout the development process of Touching Light, I was surprised at not only how easy it was to onboard myself to Mixed Reality development by using the MRTK, but also by how friendly and helpful the then-current MR development community was. Whenever I had a question, or was struggling with some element of implementation, I would quickly be directed to the relevant documentation, YouTube video, or other resource that very often addressed the exact issue I was having, without ever needing to post snippets of code or consult more directly with someone on the project. As a bonus, I was also able to connect with a handful of individuals who had a particular interest in developing creative applications for the HoloLens 2.


 


Touching Light


I had the distinct opportunity to present Touching Light in a public recital on Saturday, May 1st, 2021. 

 


Only the beginning


Touching Light is only the beginning. It is my sincere hope that this project will serve to orient, assist, and inspire musicians, artists, and audiences alike as we continue to navigate an increasingly digital and virtual existence. Perhaps more than any other time in history, only compounded by the incredible circumstances surrounding global health and the subsequent impact that a response to such scenarios require, we have been forced to think differently about technology, and for those of us who found ourselves suddenly unable to engage in live musical performances, neither as artists nor audiences, it is my conviction that mediums like Mixed Reality will only become more essential to exploring ‘liveness’ within the context of digital and virtual spaces.


 


The work was designed during the global coronavirus pandemic of 2020-21 and it is my hope that Touching Light reminds each of us that, despite everything, we are never truly alone; there is a world beyond this one if we are only willing to reach out and touch it.


 




A photo with members of the WVU Percussion Faculty after the recital
[from left: Pf. Mark Reilly, Dr. Mike Vercelli, Ian Riley, and Pf. George Willis]


 


Resources for Making Music in Mixed Reality


Microsoft HoloLens 2


Unity Hub


Microsoft MRTK & MR Tutorials


HoloDevelopers Slack Channel


Microsoft MR Tech Community Forums


Touching Light Source Code


ianrileypercussion.com 


Riley, Ian T. “Touching Light: A Framework for the Facilitation of Music-Making in Mixed Reality.” West Virginia University, West Virginia University Press, 2021.


Meet the 2021 Imagine Cup World Championship judges

The stage is set for the 19th annual Imagine Cup World Championship, taking place during Microsoft Build’s digital experience on May 25. Four finalist teams from across the world are bringing their innovations for impact to showcase globally. Focused on four social good categories – Earth, Education, Healthcare, and Lifestyle – their ideas encompass the Imagine Cup’s mission to empower every student to apply technology to solve issues in their local and global communities.  


 


In the 2021 competition, students reimagined a future through projects guided by accessibility, sustainability, inclusion, equality, and passion. Submitted solutions covered a variety of current issues, including a 3D sign-language animation, a virtual game to combat social isolation, an early detection platform for Parkinson’s Disease, an intelligent bee keeping system, and more.   


 


On May 25, our four finalists will present their innovations for the chance to take home USD75,000 and mentorship with Microsoft CEO, Satya Nadella. A panel of expert World Championship judges will assess each project. With combined industry and personal experience in diversity leadership, startups, founding businesses, and applying tech for social impact, our judges will apply their knowledge to evaluate the most inclusive and original solution with the potential to make a global difference.  


 


Imagine Cup judges dedicate their personal time and experience to help empower the next generation of developers. We’ve been fortunate to have a diverse panel of industry experts from around the world leading up to the World Championship, including Devendra Singh, CTO at PowerSchool; Kai Frazier, Founder at KaiXR; Neil Sebire, Chief Clinical Data Officer at HDR UK; Jason Goldberg, Chief Commerce Strategy Officer at Publicis; and more.


 


For the first time in Imagine Cup history, we are pleased to introduce a panel of all women judges for the World Championship. During the competition, each team will pitch their project and demo their technology, followed by questions from judges. Who will take home the trophy? Join our hosts, Tiernan Madorno, Microsoft Business Program Manager, and Donovan Brown, Microsoft Principal Program Manager, and tune into the show on May 25 at 1:30pm PT to find out!  


 


Meet the World Championship judges 


 




Jocelyn Jackson – National Society of Black Engineers National Chair, 2019-2021 


 


Student, researcher, leader, and change agent are just a few descriptors of Jocelyn Jackson. In her final term as the National Chair of the National Society of Black Engineers (NSBE), Jocelyn led NSBE through one of the hardest years it has faced. Through the COVID-19 pandemic as well as the racial injustice reckoning in America, Jocelyn stayed dedicated to using her leadership and voice to make a difference in the lives of other young Black men & women interested in engineering, and to make engineering a more diverse and accepting field for all. As National Chair, Jocelyn made massive strides toward NSBE’s current strategic goal, 10K by 2025 (graduating 10,000 Black engineers annually by 2025), by launching NSBE’s newest 5-year strategic plan ‘Game Change 2025.’ During her last 3 years at NSBE, Jocelyn managed and led the board of directors to ensure the best overall experience for NSBE stakeholders.


 


Originally from Davenport, Iowa, Jackson received her bachelor’s and master’s degrees in mechanical engineering at Iowa State University, where her thesis research focused on the development of elastomeric coatings with reduced wear for ice-free applications. She is a second-year doctoral student in Engineering Education Research at the University of Michigan. Her current research works toward advancing equity in STEM and STEM entrepreneurship.  
 




Enhao Li – Co-Founder and CEO of Female Founder School 
 


Enhao Li is the Co-Founder and CEO of Female Founder School. Enhao studied Economics at Harvard and in a former life was an investment banker for fast-growing technology companies – helping to take companies like Pandora public – but she was always itching to be a founder herself. It wasn’t until she finally took the leap and started her own company that she discovered just how unprepared she was; she did all of the wrong things and wasted time and money, only to finally learn that there was a way to do this. Since then, she has become obsessed with learning how to build successful companies from experienced founders and investors and sharing it with new founders. That is where Female Founder School came from: her own personal experiences and a mission to make it easier for anyone, especially women, to build successful companies of their own.





Toni Townes-Whitley – President, US Regulated Industries, Microsoft   
 
As president of US Regulated Industries at Microsoft, Toni Townes-Whitley leads the US sales strategy for driving digital transformation across customers and partners within the public sector and commercial regulated industries.  With responsibility for the 4900+ sales organization and ~$15B P&L, she is one of the leading women at Microsoft, and in the technology industry, with a track record for accelerating and sustaining profitable business and building high-performance teams.  


 


Her organization is responsible for executing on Microsoft’s industry strategy and go-to-market for both public sector and regulated industries in the United States, including Education, Financial Services, Government, and Healthcare. In addition to leading a sales organization, Townes-Whitley is helping to steer the company’s work to address systemic racial injustice, with efforts targeted both internally, at representation and inclusion, and externally, at leveraging technology to counter prevailing societal challenges. She has developed expertise and speaks publicly about “Civic Technology”, applying tech innovation for social impact.


 


——————————– 


Don’t miss out on the chance to see which team will win it all at the Imagine Cup World Championship! Plus, as a student at Microsoft Build, you can enhance your own developer skills and prepare to create the next great project. Register at no cost for the Student Zone now.

Gogo soars through industry contraction by switching to Azure AD


Hello! In today’s “Voice of the Customer” blog, Chris Szorc, Director of IT Engineering for Gogo, explains how the company cut costs and streamlined their identity and access management as the pandemic was grounding their airline partners, drying up revenue, and forcing thousands of employees to work remotely. By leveraging their existing Azure subscription, Chris and her IT team were able to migrate thousands of internal and external users to Microsoft Azure Active Directory for simplified, secure access across their enterprise.


 


Editor’s Note:


This story began in May 2020 when Gogo served both Commercial Aviation and Business Aviation. In December 2020, Gogo’s Commercial Aviation business was sold to Intelsat. As a result, the structure and business model has changed drastically for Gogo, which now has approximately 350 employees and is solely focused on serving Business Aviation. 


 


 


How to cut costs and simplify IAM during hard times


By Chris Szorc, Director of IT Engineering for Gogo


In 2020, Gogo was a provider of in-flight broadband internet services for commercial and business aircraft. We were based in Chicago, Illinois with 1,100 employees, and at the time we equipped more than 2,500 commercial and 6,600 business aircraft with onboard Wi-Fi services, including 2Ku, our latest in-flight satellite-based Wi-Fi technology.


 


As we all know, 2020 wasn’t a great year for the airline industry. Last May, the pandemic had drastically shrunk our revenue, forcing the company to cut costs wherever possible. A looming three-year renewal contract with Okta prompted my IT team to consider bringing all our identity and access management (IAM) under the Microsoft umbrella to cut costs and simplify access.


 


Favor security and simplicity


Pulling off a major migration to Microsoft Azure Active Directory (Azure AD)—when the IT team is shorthanded and working remotely—would be a challenge for anyone. For my team, the first consideration was security. We had to protect our PCI (payment card industry) status, as well as the custom apps that we create with our airline partners. We certify ourselves with ISO (International Organization for Standardization), and we pass our SOX (Sarbanes Oxley Act) audits every year. As it happened, Deloitte was reviewing us, so the industry certifications for Azure AD and Microsoft 365 helped maintain our security standing as well. We made sure to get the most from our Microsoft agreement—including all the security tools in the Microsoft Azure tool set.


 


We were already using on-premises Active Directory, but we wanted a hybrid cloud identity model for the seamless single sign-on (SSO) experience for our users and applications. We collaborate with a lot of airlines and contractors; so hybrid access fits our model. Like us, you might see migration as an opportunity to reduce the number of redundant apps in your user base. At Gogo, we went app by app, figuring out how people were using each of them, and we saw that Microsoft could cover data analytics among other business functions, as well as IAM.


 


We were able to further consolidate and simplify by adopting the full Microsoft 365 suite of productivity tools. Microsoft Teams, in particular, was a hit with users. People were working from home because of the pandemic, and discovered they preferred Teams over Skype. Once our people started asking for it, that gave us the green light to roll out Teams companywide as a unified platform for online meetings, document sharing, and more.


 


Make use of vendor support


Times were tough enough already; we couldn’t allow migrating our multifactor authentication from Okta to Azure AD to disrupt workflow. We knew we couldn’t overwhelm our help desk with calls and tickets, so we chose to make the migration in waves of 100 users at a time.


My advice—take advantage of all the technical support that’s available. After all, it’s not as if you’ll have a complete test environment to train yourself. You have your production identity, domain, and your services—multifactor authentication, conditional access, sign in—and if you don’t do it right, you’re severely impacting people.


 


No matter how qualified your IT team is, there’s a wealth of knowledge that a good vendor can provide. Microsoft FastTrack was included with our Azure AD subscription. We also used Netrix for guidance on bringing the migration in on time. FastTrack helped us know where to put people and how to organize—their entire mission is built around helping you complete a successful migration.


 


FastTrack also helped us untangle previous IAM implementations that were set up before my team was hired. They showed us where Okta Verify could be replaced with the latest best practices in multifactor authentication, enabling us to deliver simplified, up-to-date security with Azure AD. That’s the kind of issue you rarely anticipate during a migration, and it’s one where the right support proves invaluable.


 


Ensure maximum ROI


At Gogo, we’re already enjoying the advantages that come with unifying our IAM for simplicity and maximum return on investment (ROI). Since adopting Teams and other Microsoft 365 apps, we’ve been able to drop other services like Box and Okta—that saves the company money.


 


We’re doing federated sharing with Microsoft Exchange Online, sharing calendars with partner tenants, which has been great for planning meetings. We do entitlement management to set up catalog access packages with expiration policies, to stage workflow and access reviews for vendors and collaborators, rather than give them identities in our Gogo directory.


 


Our IT team seized on migration as an opportunity to implement Azure AD’s self-service password reset feature, which allows users to reset their password without involving the help desk. The decision to simplify your IAM solution will likely pay off in more ways than you can anticipate. We accomplished more than just a migration from Okta to Azure AD; Microsoft helped us streamline our IT services and provided us with direction for future improvements.


 


Learn more


I hope Gogo’s story of undertaking a daunting migration during tough times serves as inspiration for your organization. To learn more about our customers’ experiences, take a look at the other stories in the “Voice of the Customer” series.


 


 


Learn more about Microsoft identity:


Model Lifecycle Management for Azure Digital Twins


Author – Andy Cross (External), Director of Elastacloud Ltd, a UK-based Cloud and Data consultancy; Azure MVP, Microsoft RD.


 


Ten years ago, my business partner Richard Conway and I founded Elastacloud to operate as a consultancy that truly understood the value of the Cloud around data, elasticity and scale; building next generation systems on top of Azure that are innovative and impactful. For the last year, I’ve been leading the build of a Digital Twin based IoT product we call Elastacloud Intelligent Spaces.


 


When working with Azure Digital Twins, customers often ask what the best practice is for managing DTDL Versions. At Elastacloud, we have been working with Azure Digital Twins for some time and I’d like to share the approach we developed to manage our DTDL model lifecycles from .NET 5.0.


 


What is DTDL?


If you are not familiar with Azure Digital Twins and DTDL, Azure Digital Twins is a PaaS service for modelling related data such as you’d often find in real world scenarios. It is a natural fit for IoT projects, since you can model how a sensor relates to a building, to a room, to a carbon intensity metric, to their enclosing electrical circuit, to an owner, to neighboring sensors and their respective metrics, owners, rooms and so on. It is a Graph Database, which focusses on the links that exist in the graph, giving it the edge over more commonly found relational databases, since it features the ability to rapidly and concisely traverse data by its links across a whole data set.


 


Azure Digital Twins adopts the idea that the nodes on the graph (known as Digital Twins) can be typed. This means that the entities that hold the data conform to defined shapes, which are expressed in the Digital Twin Definition Language. The definition language allows developers to constrain the data that an entity can store in a list of contents. These are broadly synonymous with the notion of columns in a traditional relational database. Just like in other database systems, when a development team iterates on a data structure to add, edit or remove a property, the team has to consider how to keep the software and the data structure in sync.


 


What is the Version challenge?


Models in DTDL are stored in a JSON format, and therefore typically stored as a .json file. We store these in a git repository right alongside the code that interacts with the data shapes that they define.


 


The key question of the Version Challenge therefore is: “When I update my model definitions in my local dev environment, how do I automatically update the models that are available in Azure Digital Twin?”


 


There is one additional twist, when you want to use a model, for example to create a new digital twin, you have to know the version number of the model that you want to create. This means your software needs to also be kept in sync with your models, and your deployment.


 


In order to keep track of all this, each Azure Digital Twin model has a model identifier. The structure of a Digital Twin Model Identifier (DTMI) is:


 

dtmi:[some:segmented:name];[version]

 


 


For example:


 

dtmi:com:elastacloud:intelligentspaces:room;168

 


 


Our solution then needs to solve these top-level issues, whilst being developer friendly, and fitting into best practice for deployments.


We might consider this ideal workflow:


A developer workflow that includes continuous deployment of DTDL models as described in the text.

Building Blocks


We want to be able to construct our approach to versioning without prejudicing our ability to use the fullness of ADT features. There are a few main options that present themselves to us:



  1. Hold the JSON representation of the DTDL on disk as a file

  2. Build the JSON representation from a software representation (for instance .NET class)


Both of these are valid cases. The JSON representation reflects the on-the-wire payload. The .NET class might give us the ability to later use this class to create instances of the DTDL defined Twin.


 


Considering this idea, we might consider something like the following:


 

{
  "@id": "dtmi:elastacloud:core:NamedTwin;1",
  "@type": "Interface",
  "contents": [
    {
      "@type": "Property",
      "displayName": {
        "en": "name",
        "es": "nombre"
      },
      "name": "name",
      "schema": "string",
      "writable": true
    }
  ],
  "description": {
    "en": "This is a Twin object that holds a name.",
    "es": "Este es un objeto Twin que contiene un nombre."
  },
  "displayName": {
    "en": "Named Twin Object",
    "es": "Objeto Twin con nombre"
  },
  "@context": "dtmi:dtdl:context;2"
}

 


 


We might then want to create a Plain Old CLR Object (POCO) representation:


 

public class NamedTwinModel
{
  public string name { get; set; }
}

 


 


While we are able to see that the Interface is in alignment with the DTDL definition of contents, it is not immediately apparent how we would manage displayName and globalisation concerns thereof within a POCO.


 


Note that from a purist’s perspective, a POCO should try to avoid attributes where possible, to boost readability. So a [DisplayName(“en”, “name”)] annotated approach is possible, but not ideal.


 


Furthermore, you’ll note that the DTDL wraps the contents which is the type definition, with a set of descriptors and globalization values. In order to achieve this, we might consider a wrapped generic POCO approach:


 

public class Globalisation {
   public string En { get; set; }
   public string Es { get; set; }
}
public class DtdlWrapper<TContents> {
    public TContents Contents { get; set; }
    public Globalisation Description { get; set; }
}
...
var namedDtdl = new DtdlWrapper<NamedTwinModel>();
namedDtdl.Contents = new NamedTwinModel();
namedDtdl.Contents.name = "what should I put here?";

 


 


The problem we start to face when expressing things in this case for the DTDL definitions themselves, is that we are actually building a class hierarchy that is more akin to the Azure Digital Twin instances than it is to the DTDL definitions. As such, we’re going to have to create instances, then use Reflection over them but ignore their values. We could use default values or lookup the types more directly, but still the problem is the same; class definitions in .NET describe how you can create instances, and don’t directly translate to DTDL in an easy to understand way.


 


Thus, from our perspective, we want to make sure that our DTDL definitions remain native JSON, since there are aspects which are not naturally amenable to encapsulating with a Plain Old CLR Object (POCO). We will use our POCOs to represent instances of Azure Digital Twins, i.e. the data itself, and not the schema.


 


This means we store the DTDL in JSON format on disk. But this isn’t anywhere near the end of the story for versioning and .NET development.


 


We just learned that POCOs can represent instances or Digital Twins quite effectively. If we’re going to code with .NET we will still need to use some kind of class to interact with, in order to do CRUD operations on the Azure Digital Twin.
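As a minimal sketch of that idea, assuming the NamedTwin model from earlier, an instance POCO might look like the following. The Azure.DigitalTwins.Core serialization constants shown are the SDK’s own; the class itself is our illustration:

using System.Text.Json.Serialization;
using Azure.DigitalTwins.Core;

public class NamedTwin
{
    [JsonPropertyName(DigitalTwinsJsonPropertyNames.DigitalTwinId)]
    public string Id { get; set; }

    // Carries the model id, so the service knows which DTDL shape to enforce.
    [JsonPropertyName(DigitalTwinsJsonPropertyNames.DigitalTwinMetadata)]
    public DigitalTwinMetadata Metadata { get; set; } = new DigitalTwinMetadata
    {
        ModelId = "dtmi:elastacloud:core:NamedTwin;1"
    };

    [JsonPropertyName("name")]
    public string Name { get; set; }
}

A twin instance can then be created with the SDK, for example: await client.CreateOrReplaceDigitalTwinAsync("named-1", new NamedTwin { Id = "named-1", Name = "Meeting Room 1" });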


 


The building blocks are therefore:



  • Raw JSON held as a file

  • POCOs to describe instances of those DTDL defined classes



Versioning


Versioning models in DTDL is achieved in a DTMI using an integer value held in the identifier. From the DTDL v2 documentation:


In DTDL, interfaces are versioned by a single version number (positive integer) in the last segment of their identifier. The use of the version number is up to the model author. In some cases, when the model author is working closely with the code that implements and/or consumes the model, any number of changes from version to version may be acceptable. In other cases, when the model author is publishing an interface to be implemented by multiple devices or digital twins or consumed by multiple consumers, compatible changes may be appropriate.


 


Firstly, mapping POCOs to DTDL in the way we have discussed requires that we choose to actively validate against DTDL, passively validate, or not validate at all. Some options:



  • Active; we build a way to check whether a DTDL model exists in Azure Digital Twins on any CRUD activity, that the properties match in name and type

  • Passive; we do similarly to Active, but use JSON files as the validation target, and assume that the JSON files are in-line with the target database

  • None; we don’t validate, but instead let Azure Digital Twins error if we get something wrong, and we react to that error.


In our approach, we want to be able to support either radical or compatible changes, but we will have to consider some additional factors brought in by .NET type constraints:



  • if a DTDL interface changes types, the .NET POCO properties that exist must match its DTDL values

  • if a DTDL interface changes its named properties, the .NET POCO needs to be updated to reflect this

  • if a DTDL interface adds a new property, we need to decide whether it’s an error or not for the POCO to not have the property. This is a happy problem, as we’re roughly compatible even if we don’t add the property.

  • if the DTDL interface deletes a property, we need to decide whether we do create and update methods but omit that value at runtime.


A workflow that shows the order of checking a Model Existence and the states that it may be in.

Applying Versioning

Once we have our DTDL prepared in JSON, we still need to get these into Azure Digital Twins. We have a few choices again to make around how we want to handle versioning.


 


The absolute core of creating Azure Digital Twins DTDL models from a .NET perspective is to use the Azure.DigitalTwins.Core package available on NuGet, to create the models. In short:


 

// You need to set up three variables first: tenantId, clientId
// and adtInstanceUrl.
var credentials = new InteractiveBrowserCredential(tenantId, clientId);
DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credentials);
await client.CreateModelsAsync(new string[] { "DTDL Model in JSON here..." });

 


 


That’s the core of creating those DTDL models. We could just load the JSON files directly from disk as strings and add them to the array passed to CreateModelsAsync; however, there are options we can employ that might help us out in the future.


 


For example, we can get the existing models by calling client.GetModelsAsync. We can iterate over these models and check whether any of our new models to create share an @id, including the version. If this is the case, we can validate whether the contents are the same, and choose to throw an exception if not, if we are seeking to maintain a high level of compatibility.
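A hedged sketch of that check, reusing the client from the snippet above; the comparison against local files is assumed to live elsewhere:

using Azure;
using Azure.DigitalTwins.Core;

AsyncPageable<DigitalTwinsModelData> models = client.GetModelsAsync(
    new GetModelsOptions { IncludeModelDefinition = true });

await foreach (DigitalTwinsModelData model in models)
{
    // model.Id is the DTMI, e.g. "dtmi:elastacloud:core:NamedTwin;1".
    int separator = model.Id.LastIndexOf(';');
    string baseDtmi = model.Id.Substring(0, separator);
    int version = int.Parse(model.Id.Substring(separator + 1));
    string definition = model.DtdlModel; // raw DTDL JSON to diff against local files

    // ... compare (baseDtmi, version, definition) with the local JSON here.
}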


 


Should we find that a model exists for a previous version (i.e. our JSON file has a higher DTMI version), we can choose to decommission that model. This is a one-way operation, so we had better be careful to do this in a managed fashion. For instance, we might want to decommission a model only after it has been replaced for a period of time, so that we may have live updates to the system. If this is the case, we should be comfortable that all writers to the Azure Digital Twin have been upgraded.


 


When a model is decommissioned, new digital twins will no longer be able to be defined by this model. However, existing digital twins may continue to use this model. Once a model is decommissioned, it may not be recommissioned.


Anyway, should we choose to do that, once a model is created (say dtmi:elastacloud:core:NamedTwin;2) we might choose to decommission the previous version:


 

await client.DecommissionModelAsync("dtmi:elastacloud:core:NamedTwin;1");

 


 


The key thought process around Decommissioning relates to the choice you want to make around version compatibility with your code. The idea we take at Elastacloud is that we want to be able to be sure that the latest Git-held version of the DTDL model is available but also that previous versions should also be available for a period of time that we consider to be an SLA, until we are sure that all consumers have been updated to the latest version.

 


A strategy for decommissioning DTDL Models in Azure Digital Twins, shown as a workflow that checks an SLA.

Other Considerations

Naming standards between .NET and JSON are different. We should name according to the framework that hosts the code, and use Serialization techniques to convert between naming divergences. For example, Properties in .NET start with a capital letter in many circumstances, whereas in JSON they tend to start with lowercase.


 


DTDL includes a set of standard semantic types that can be applied to Telemetries and Properties. When a Telemetry or Property is annotated with one of these semantic types, the unit property must be an instance of the corresponding unit type, and the schema type must be a numeric type (double, float, integer, or long).
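For example, a Telemetry co-typed with the Temperature semantic type might look like this minimal DTDL v2 fragment (the name is our own illustration):

{
  "@type": ["Telemetry", "Temperature"],
  "name": "temperature",
  "schema": "double",
  "unit": "degreeCelsius"
}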


.NET Tooling Approach


So far we have a few key components that we have to build in order to hit our best practice goal.



  • A .NET application that deploys the models to the Azure Digital Twin instance, understands which versions of DTDL are already deployed and which are held locally, and helps assert compatibility.

  • A .NET application that holds POCOs that can represent DTDL deployed to Azure Digital Twins and can help marshal data between .NET and Azure Digital Twins.


This helps us define two main categories of error conditions: deployment and runtime.


A tooling approach to deploying Azure Digital Twin DTDL model changes.


CI/CD deployment


At Elastacloud we use our own `twinmigration` tool for managing this process. The tool is a dotnet global tool that we built and that provides features designed for CI/CD purposes.


 


Since a dotnet global tool is a convenient way of distributing software into pipelines, we add a task to our CI/CD pipeline that takes the latest version of JSON files from a git repo, and validates them against what is already deployed in an ADT instance.


Following the output of a validation stage, we might choose to also run a deploy stage, which adds the models to the Azure Digital Twin instance.


 


Finally, we have a decommissioning step which causes “older” models to be made unavailable for creation, so that we can keep good data quality practices.


 


In Summary


For more information about what we’re doing with Azure Digital Twins, visit our website at Intelligent Spaces — Elastacloud; we’ll be updating it regularly with information on our approaches. We also have some tools that are ready to go, such as Elastacloud.TwinMigration on the NuGet Gallery, which helps you do the things we’ve described here!


 


Thanks for reading.