This article is contributed. See the original author and article here.
This blog is written by Ian Riley, an inspiring musician, as part of the Humans of Mixed Reality series. He shares his experience in music and technology, which led him to develop music in mixed reality.
Touching Light is an original musical work for Percussionist and Mixed Reality Environment that explores the border areas between the physical world that we see around us, and the worlds of infinite possibility that each of us holds in our imagination.
“A dream we dream together is called reality.” – Alex Kipman at the Microsoft Ignite Keynote, 2021
Mixed Reality, fundamentally, asks us to see the world differently, something akin to the way that, as performers, we ask our audiences not just to hear, but to listen. By drawing the attention of those around us to something that we believe to be compelling, and even more so when we can share something that we have had a hand in creating, we access a unique moment, a shared imaginative space, and, in my experience, this is just the sort of thing that users of Mixed Reality are hoping to find.
“My dad’s a computer programmer.” I usually lead with this as it seems to put folks at ease when they contact me, hoping that there is some ‘secret’ for how I, someone with a doctorate in music, not computer science, learned to work with Mixed Reality. Yet, while his influence has certainly been a continual inspiration to me, it was in fact my mother’s encouragement to pursue training in the arts that positioned me to begin developing Touching Light. Despite its deep connectedness to technology, Touching Light is first and foremost a musical MR application.
Music and Technology
It was in pursuit of my master’s degree that I first became deeply interested in music technology. I was fascinated by the sounds that electronic instruments could create, and that curiosity would eventually lead me to perform an all percussion and live electronics final recital during my first graduate degree. This sort of recital was a first for the small college that I was attending and, though I was unaware of this at the time, something that is still uncommon in the world of contemporary percussion. Those experiences would eventually lead me to pursue a DMA in Percussion Performance at West Virginia University with a desire to continue to explore and innovate with percussion and live electronics.
When I first started my DMA, I was aware of the work that Microsoft was doing with the HoloLens 1 (introduced in 2016), but it wasn’t until my wife and I moved to Morgantown, West Virginia that I saw the first marketing for the Microsoft HoloLens 2 on February 24th, 2019. I was amazed. Watching it again today still makes me smile, but I guess that’s good marketing for you! As I continued my studies at WVU, I kept thinking about that video, about the HoloLens 2, and about Mixed Reality. What seemed like a pipe dream in February, making music in Mixed Reality, would become a real possibility in my mind in November of that same year.
“Look toward the future – stop thinking about what is cutting edge right now and start thinking about the cutting edge of the cutting edge; because that’s where we’re going to need people to do work.” – Dr. Norman Weinberg, at PASIC 2019
And I knew that the future was Mixed Reality.
Playing vibraphone while using a holographic audio mixer from Touching Light
Preparing for HoloLens 2
Sometimes it is the mere fact that you know what you don’t know that can provide the clearest path forward. Soon after the reveal of the HoloLens 2 in early 2019, the first seeds of what would eventually become Touching Light began to take root. At the time, while I had gained some minimal computer programming experience in high school (Java, and some HTML), since beginning to study music in college I had had little time or reason to engage with the ‘coding’ side of technology apart from some basic formatting for websites.
Knowing that the HoloLens 2 would likely run on something like C# or Visual Basic, I began thinking about other ways that I could engage with code-based music technology and would eventually teach myself how to build rudimentary circuits to trigger lighting and audio effects. Concurrent to this work, I also more fully invested myself into learning about audio recording and engineering, recording and editing my own performance videos from recitals and other concerts. Yet for all this experience, I still didn’t know how to program the HoloLens 2.
Learning Mixed Reality
When the first news of the global coronavirus pandemic entered the public awareness in the United States, it was met by a mixture of genuine concern, reasonable skepticism, and in some cases, outright dismissal. Living in West Virginia, the scope of the pandemic didn’t really hit home until the University received email correspondence from the university president outlining the realities of campus closures, and the transition to online delivery for the remainder of the semester as the university endeavored to minimize the risk to the WVU community in the face of uncertain times. Faced with what seemed at the time to be indefinite lockdown, I found myself able to do what anyone would do with a sudden abundance of free time… learn how to code for Mixed Reality!
Over the course of the next several months, particularly during the summer of 2020, through a series of free tutorials, I learned the basics of 3-D modeling using a program called Blender, a modeling engine that is similar in many ways to the sort of interface I would eventually work with in Unity. Upon ordering a HoloLens 2 from Microsoft in early July, I quickly transitioned to Unity while familiarizing myself with the sorts of gestures and interactions that drive the HoloLens 2 holographic interface.
With all the components finally in hand, the work of writing, rehearsing, and performing Touching Light began. Core to the performative practice of music, and particularly to that of the percussionist, are the same sorts of interactions that I already employed as a performer; these would serve as the conceptual framework from which the three ‘dimensions of translucence’ would be derived. These dimensions (modeled after the three coordinate dimensions in physical space) would serve to ground my creative work in the sorts of real decisions that I already knew how to make because of my work with percussion.
Improvising on a marimba in response to a rotating carousel of landscapes
Developing Music in Mixed Reality
I knew that I wanted Touching Light to be mobile. The promise of the HoloLens 2, and Mixed Reality in general, is that there are ‘no strings attached;’ if you wear this device, that is all you need to enter a Mixed Reality environment. I intentionally connected that idea of mobility to the sorts of interactions and environments that the user engages throughout the work. Even Soliloquy, the second movement of Touching Light which features a large carousel of static images, does not extend far beyond the anticipated ‘near-field’ (that which is within reach) that a percussionist will be used to engaging with. Everything in Touching Light, whether virtual or physical, follows the design ethos of ‘always being within reach.’
The unique opportunity to engage music-making and Mixed Reality is not something that I take lightly; what began as a pipe dream just over a year ago has had a significant impact on the ways that I engage with both music and technology. I was pleasantly surprised to discover that Mixed Reality is a profoundly creative medium, and as such, engages easily with the process of music-making. From the deeply satisfying manipulation of a standing wave through the miniscule gestures of a rotating hand, to the shocking immersion of a massive holographic carousel slowly rotating around you while you perform, there is something much more connective about the spatial interactions presented by MR than about using peripherals like a mouse and keyboard to control those same musical and visual elements.
Exploring tuned Thai gongs while manipulating spatialized virtual instruments
Making Music in Mixed Reality (How to Get Started, and Why You Should)
Already, so much of what we do as musicians is, within the context of society at large, a niche endeavor; for the percussionist, these degrees of separation can seem even more severe. But in the same ways that we as artists commit ourselves to the craft of music, and the practice of music-making, engaging with MR has only served to deepen those sorts of commitments for me.
For Musicians (or “Performers”)
For those individuals who are interested in the musical side of Mixed Reality, the first step is to get your hands on a platform. Touching Light is obviously designed with the Microsoft HoloLens 2 in mind, but similar functionality is available through any number of other VR headsets. Once you have a platform, you will need to decide what you will perform. If you are working with the Microsoft HoloLens 2, a great place to start is with Touching Light! You can download the complete Unity file package here. Follow the instructions from the Microsoft Mixed Reality Documentation, beginning at “1. Build the Unity Project.” Once you have deployed the application to your HoloLens 2, load up the application, and explore!
One of the most profound discoveries that I have made while working with this technology is just how musical it can be. There is something about engaging with technology within the Mixed Reality volume, about ‘spatial computing,’ that seems intuitive and artistic. This simple fact has even more deeply convinced me that music-making in Mixed Reality is not just an interesting possibility, but a deeply meaningful inevitability.
For Programmers (or “Composers”)
For those individuals who may be more interested in the nuts-and-bolts of developing musical applications for Mixed Reality, the first step is to familiarize yourself with a development environment. If you are interested in programming for the Microsoft HoloLens 2, the de facto solution at present is the Unity development engine, though support for other environments is becoming increasingly available. You can download the Unity Hub for free from their website, and then, following the instructions in the Microsoft Mixed Reality Documentation, beginning at “1. Introduction to the MRTK tutorials,” you can begin to develop your first Mixed Reality application.
I would strongly advise that, once you get a handle on the basic functionality of the engine and complete some of the beginning MRTK tutorials, you take some time to consider what sorts of functionality you would like your application to demonstrate, then connect with the Microsoft MR community (via Slack or the Microsoft MR Tech Community forums) and with others who may be able to answer your questions, and even help you with your project design.
Throughout the development process of Touching Light, I was surprised at not only how easy it was to onboard myself to Mixed Reality development by using the MRTK, but also by how friendly and helpful the then-current MR development community was. Whenever I had a question, or was struggling with some element of implementation, I would quickly be directed to the relevant documentation, YouTube video, or other resource that very often addressed the exact issue I was having without ever needing to post snippets of code or consult more directly with someone on the project. As a bonus, I was also able to connect with a handful of individuals who had a particular interest in developing creative applications for the HoloLens 2.
Touching Light
I had the distinct opportunity to present Touching Light in a public recital on Saturday, May 1st, 2021.
Only the beginning
Touching Light is only the beginning. It is my sincere hope that this project will serve to orient, assist, and inspire musicians, artists, and audiences alike as we continue to navigate an increasingly digital and virtual existence. Perhaps more than at any other time in history, compounded by the extraordinary circumstances surrounding global health and the responses those circumstances require, we have been forced to think differently about technology. For those of us who found ourselves suddenly unable to engage in live musical performances, whether as artists or as audiences, it is my conviction that mediums like Mixed Reality will only become more essential to exploring ‘liveness’ within the context of digital and virtual spaces.
The work was designed during the global coronavirus pandemic of 2020-21 and it is my hope that Touching Light reminds each of us that, despite everything, we are never truly alone; there is a world beyond this one if we are only willing to reach out and touch it.
A photo with members of the WVU Percussion Faculty after the recital [from left: Pf. Mark Reilly, Dr. Mike Vercelli, Ian Riley, and Pf. George Willis]
Riley, Ian T. “Touching Light: A Framework for the Facilitation of Music-Making in Mixed Reality.” West Virginia University, West Virginia University Press, 2021.
The stage is set for the 19th annual Imagine Cup World Championship, taking place during Microsoft Build’s digital experience on May 25. Four finalist teams from across the world are bringing their innovations for impact to showcase globally. Focused on four social good categories – Earth, Education, Healthcare, and Lifestyle – their ideas encompass the Imagine Cup’s mission to empower every student to apply technology to solve issues in their local and global communities.
In the 2021 competition, students reimagined a future through projects guided by accessibility, sustainability, inclusion, equality, and passion. Submitted solutions covered a variety of current issues, including a 3D sign-language animation, a virtual game to combat social isolation, an early detection platform for Parkinson’s Disease, an intelligent beekeeping system, and more.
On May 25, our four finalists will present their innovations for the chance to take home USD 75,000 and mentorship with Microsoft CEO Satya Nadella. A panel of expert World Championship judges will assess each project. With combined industry and personal experience in diversity leadership, startups, founding businesses, and applying tech for social impact, our judges will apply their knowledge to evaluate the most inclusive and original solution with the potential to make a global difference.
Imagine Cup judges dedicate their personal time and experience to help empower the next generation of developers. We’ve been fortunate to have a diverse panel of industry experts from around the world leading up to the World Championship, including Devendra Singh, CTO at PowerSchool; Kai Frazier, Founder at KaiXR; Neil Sebire, Chief Clinical Data Officer at HDR UK; Jason Goldberg, Chief Commerce Strategy Officer at Publicis; and more.
For the first time in Imagine Cup history, we are pleased to introduce a panel of all women judges for the World Championship. During the competition, each team will pitch their project and demo their technology, followed by questions from judges. Who will take home the trophy? Join our hosts, Tiernan Madorno, Microsoft Business Program Manager, and Donovan Brown, Microsoft Principal Program Manager, and tune into the show on May 25 at 1:30pm PT to find out!
Meet the World Championship judges
Jocelyn Jackson – National Society of Black Engineers National Chair, 2019-2021
Student, researcher, leader, and change agent are just a few descriptors of Jocelyn Jackson. In her final term as the National Chair of the National Society of Black Engineers (NSBE), Jocelyn led NSBE through one of the hardest years it has faced. Through the COVID-19 pandemic as well as the racial injustice reckoning in America, Jocelyn stayed dedicated to using her leadership and voice to make a difference in the lives of other young Black men and women interested in engineering, and to making engineering a more diverse and accepting field for all. As National Chair, Jocelyn made massive strides toward NSBE’s current strategic goal of ‘10K by 2025’ (graduating 10,000 Black engineers annually by 2025) by launching NSBE’s newest five-year strategic plan, ‘Game Change 2025.’ During her last three years at NSBE, Jocelyn managed and led the board of directors to ensure the best overall experience for NSBE stakeholders.
Originally from Davenport, Iowa, Jackson received her bachelor’s and master’s degrees in mechanical engineering at Iowa State University, where her thesis research focused on the development of elastomeric coatings with reduced wear for ice-free applications. She is a second-year doctoral student in Engineering Education Research at the University of Michigan. Her current research works toward advancing equity in STEM and STEM entrepreneurship.
Enhao Li – Co-Founder and CEO of Female Founder School
Enhao Li is the Co-Founder and CEO of Female Founder School. Enhao studied Economics at Harvard and in a former life was an investment banker for fast-growing technology companies – helping to take companies like Pandora public – but she was always itching to be a founder herself. It wasn’t until she finally took the leap and started her own company that she discovered just how unprepared she was; she did all of the wrong things, wasted time and money, only to finally learn that there was a better way to do this. Since then, she has become obsessed with learning how to build successful companies from experienced founders and investors and sharing that knowledge with new founders. That is where Female Founder School came from – her own personal experiences and a mission to make it easier for anyone, especially women, to build successful companies of their own.
Toni Townes-Whitley – President, US Regulated Industries, Microsoft
As president of US Regulated Industries at Microsoft, Toni Townes-Whitley leads the US sales strategy for driving digital transformation across customers and partners within the public sector and commercial regulated industries. With responsibility for a 4,900+ person sales organization and a ~$15B P&L, she is one of the leading women at Microsoft, and in the technology industry, with a track record for accelerating and sustaining profitable business and building high-performance teams.
Her organization is responsible for executing on Microsoft’s industry strategy and go-to-market for both public sector and regulated industries in the United States, including Education, Financial Services, Government, and Healthcare. In addition to leading a sales organization, Townes-Whitley is helping to steer the company’s work to address systemic racial injustice – with efforts targeted both internally at representation and inclusion; as well as externally at leveraging technology to counter prevailing societal challenges. She has developed expertise and speaks publicly about “Civic Technology”, applying tech innovation for social impact.
——————————–
Don’t miss out on the chance to see which team will win it all at the Imagine Cup World Championship! Plus, as a student at Microsoft Build, you can enhance your own developer skills and prepare to create the next great project. Register at no cost for the Student Zone now.
Model Lifecycle Management for Azure Digital Twins
Author – Andy Cross (External), Director of Elastacloud Ltd, a UK-based Cloud and Data consultancy; Azure MVP; Microsoft RD.
Ten years ago, my business partner Richard Conway and I founded Elastacloud to operate as a consultancy that truly understood the value of the Cloud around data, elasticity and scale; building next generation systems on top of Azure that are innovative and impactful. For the last year, I’ve been leading the build of a Digital Twin based IoT product we call Elastacloud Intelligent Spaces.
When working with Azure Digital Twins, customers often ask what the best practice is for managing DTDL Versions. At Elastacloud, we have been working with Azure Digital Twins for some time and I’d like to share the approach we developed to manage our DTDL model lifecycles from .NET 5.0.
What is DTDL?
If you are not familiar with Azure Digital Twins and DTDL, Azure Digital Twins is a PaaS service for modelling related data such as you’d often find in real world scenarios. It is a natural fit for IoT projects, since you can model how a sensor relates to a building, to a room, to a carbon intensity metric, to their enclosing electrical circuit, to an owner, to neighboring sensors and their respected metrics, owners, rooms and so on. It is a Graph Database, which focusses on the links that exist in the graph, giving it the edge over more commonly found relational databases, since it features the ability to rapidly and concisely traverse data by its links across a whole data set.
Azure Digital Twins adopts the idea that the nodes on the graph (known as Digital Twins) can be typed. This means that the entities holding the data conform to defined shapes, expressed in the Digital Twin Definition Language. The definition language allows developers to constrain the data that an entity can store in a list of contents. These are broadly synonymous with the notion of columns in a traditional relational database. Just like in other database systems, when a development team iterates on a data structure to add a property, or to edit or remove one, the team has to consider how to keep the software and the data structure in sync.
What is the Version challenge?
Models in DTDL are stored in a JSON format, and therefore typically stored as a .json file. We store these in a git repository right alongside the code that interacts with the data shapes that they define.
The key question of the Version Challenge therefore is: “When I update my model definitions in my local dev environment, how do I automatically update the models that are available in Azure Digital Twin?”
There is one additional twist: when you want to use a model, for example to create a new digital twin, you have to know the version number of the model that you want to create. This means your software needs to also be kept in sync with your models, and your deployment.
In order to keep track of all this, each Azure Digital Twin model has a model identifier. The structure of a Digital Twin Model Identifier (DTMI) is:
dtmi:[some:segmented:name];[version]
For example:
dtmi:com:elastacloud:intelligentspaces:room;168
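Since tooling repeatedly needs to pull the version out of a DTMI, a small helper is handy. This is an illustrative sketch; the `Dtmi` class and its method names are our own, not part of any SDK:

```csharp
using System;

// Hypothetical helper for working with DTMI strings of the form
// "dtmi:[some:segmented:name];[version]".
public static class Dtmi
{
    // Split a DTMI into its path and integer version.
    public static (string Path, int Version) Parse(string dtmi)
    {
        int sep = dtmi.LastIndexOf(';');
        if (sep < 0) throw new FormatException($"No version segment in '{dtmi}'");
        return (dtmi.Substring(0, sep), int.Parse(dtmi.Substring(sep + 1)));
    }

    // Produce the identifier for the next version of the same model.
    public static string NextVersion(string dtmi)
    {
        var (path, version) = Parse(dtmi);
        return $"{path};{version + 1}";
    }
}
```

With this in place, `Dtmi.NextVersion("dtmi:elastacloud:core:NamedTwin;1")` yields `dtmi:elastacloud:core:NamedTwin;2`, which is useful when the tooling wants to publish an incremented model version.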
Our solution then needs to solve these top-level issues, whilst being developer friendly, and fitting into best practice for deployments.
We might consider this ideal workflow:
A developer workflow that includes continuous deployment of DTDL models as described in the text.
Building Blocks
We want to be able to construct our approach to versioning without prejudicing our ability to use the fullness of ADT features. There are a few main options that present themselves to us:
Hold the JSON representation of the DTDL on disk as a file
Build the JSON representation from a software representation (for instance .NET class)
Both of these are valid cases. The JSON representation reflects the on-the-wire payload. The .NET class might give us the ability to later use this class to create instances of the DTDL defined Twin.
Considering this idea, we might consider something like the following:
{
"@id": "dtmi:elastacloud:core:NamedTwin;1",
"@type": "Interface",
"contents": [
{
"@type": "Property",
"displayName": {
"en": "name",
"es": "nombre"
},
"name": "name",
"schema": "string",
"writable": true
}
],
"description": {
"en": "This is a Twin object that holds a name.",
"es": "Este es un objeto Twin que contiene un nombre."
},
"displayName": {
"en": "Named Twin Object",
"es": "Objeto Twin con nombre"
},
"@context": "dtmi:dtdl:context;2"
}
We might then want to create a Plain Old CLR Object (POCO) representation:
public class NamedTwinModel
{
public string name { get; set; }
}
While we can see that this class aligns with the contents of the DTDL Interface, it is not immediately apparent how we would manage displayName, and the globalisation concerns thereof, within a POCO.
Note that from a purist’s perspective, a POCO should try to avoid attributes where possible, to boost readability. So a [DisplayName("en", "name")] annotated approach is possible, but not ideal.
Furthermore, you’ll note that the DTDL wraps the contents which is the type definition, with a set of descriptors and globalization values. In order to achieve this, we might consider a wrapped generic POCO approach:
public class Globalisation {
public string En { get; set; }
public string Es { get; set; }
}
public class DtdlWrapper<TContents> {
public TContents Contents { get; set; }
public Globalisation Description { get; set; }
}
...
var namedDtdl = new DtdlWrapper<NamedTwinModel>();
namedDtdl.Contents = new NamedTwinModel();
namedDtdl.Contents.name = "what should I put here?";
The problem we start to face when expressing the DTDL definitions themselves this way is that we are actually building a class hierarchy that is more akin to the Azure Digital Twin instances than it is to the DTDL definitions. As such, we’re going to have to create instances, then use Reflection over them but ignore their values. We could use default values or look up the types more directly, but still the problem is the same; class definitions in .NET describe how you can create instances, and don’t directly translate to DTDL in an easy to understand way.
Thus, from our perspective, we want to make sure that our DTDL descriptions remain native JSON, since there are aspects which are not naturally amenable to encapsulating with a Plain Old CLR Object (POCO). We will use our POCOs to represent instances of Azure Digital Twins, i.e. the data itself, and not the schema.
This means we store the DTDL in JSON format on disk. But this isn’t anywhere near the end of the story for versioning and .NET development.
We just learned that POCOs can represent instances of Digital Twins quite effectively. If we’re going to code with .NET we will still need to use some kind of class to interact with, in order to do CRUD operations on the Azure Digital Twin.
The building blocks are therefore:
Raw JSON held as a file
POCOs to describe instances of those DTDL defined classes
Versioning
Versioning models in DTDL is achieved in a DTMI using an integer value held in the identifier. From the DTDL v2 documentation:
In DTDL, interfaces are versioned by a single version number (positive integer) in the last segment of their identifier. The use of the version number is up to the model author. In some cases, when the model author is working closely with the code that implements and/or consumes the model, any number of changes from version to version may be acceptable. In other cases, when the model author is publishing an interface to be implemented by multiple devices or digital twins or consumed by multiple consumers, compatible changes may be appropriate.
Firstly, mapping POCOs to DTDL in the way we have discussed requires that we choose to actively validate against DTDL, passively validate or don’t validate at all. Some options:
Active; we build a way to check whether a DTDL model exists in Azure Digital Twins on any CRUD activity, that the properties match in name and type
Passive; we do similarly to Active, but use JSON files as the validation target, and assume that the JSON files are in-line with the target database
None; we don’t validate, but instead let Azure Digital Twins raise an error if we get something wrong, and we react to that error.
In our approach, we want to be able to support either radical or compatible changes but we will have to consider some additional factors brought in by .NET type constraints:
if a DTDL interface changes types, the .NET POCO properties that exist must match its DTDL values
if a DTDL interface changes its named properties, the .NET POCO needs to be updated to reflect this
if a DTDL interface adds a new property, we need to decide whether it’s an error or not for the POCO to not have the property. This is a happy problem, as we’re roughly compatible even if we don’t add the property.
if the DTDL interface deletes a property, we need to decide whether we do create and update methods but omit that value at runtime.
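One way to implement the ‘passive’ option above is to reflect over the POCO and compare its property names with the contents of the JSON file on disk. A minimal sketch, with our own illustrative names (it assumes @type is a plain string, as in the NamedTwin example earlier):

```csharp
using System;
using System.Linq;
using System.Text.Json;

// "Passive" validation sketch: report DTDL "Property" entries that
// have no matching property on the POCO type T.
public static class PassiveValidator
{
    public static string[] MissingProperties<T>(string dtdlJson)
    {
        using var doc = JsonDocument.Parse(dtdlJson);
        var contentNames = doc.RootElement.GetProperty("contents")
            .EnumerateArray()
            .Where(c => c.GetProperty("@type").GetString() == "Property")
            .Select(c => c.GetProperty("name").GetString()!)
            .ToArray();

        var pocoNames = typeof(T).GetProperties()
            .Select(p => p.Name.ToLowerInvariant())
            .ToArray();

        // Case-insensitive match, since .NET and JSON naming conventions differ.
        return contentNames
            .Where(n => !pocoNames.Contains(n.ToLowerInvariant()))
            .ToArray();
    }
}
```

Running this against the NamedTwin DTDL and the NamedTwinModel POCO above would return an empty array, confirming the two are in line.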
A workflow that shows the order of checking a Model Existence and the states that it may be in.
Applying Versioning
Once we have our DTDL prepared in JSON, we still need to get these into Azure Digital Twins. We have a few choices again to make around how we want to handle versioning.
The absolute core of creating Azure Digital Twins DTDL models from a .NET perspective is to use the Azure.DigitalTwins.Core package available on NuGet. In short:
// you need to set up three variables first: tenantId, clientId,
// and adtInstanceUrl.
var credentials = new InteractiveBrowserCredential(tenantId, clientId);
DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credentials);
await client.CreateModelsAsync(new string[] { "DTDL Model in JSON here..." } );
That’s the core of creating those DTDL models. We could just load the JSON files directly from disk as strings and add them to the array passed to CreateModelsAsync, however we have options to employ that might help us out in the future.
For example, we can get the existing models by calling client.GetModelsAsync. We can iterate on these models and check whether our new models to create share a @id including the version. If this is the case we can validate whether the contents are the same, and choose to throw an exception if not, if we are seeking to maintain a high level of compatibility.
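That check can be sketched as a pure decision function over model identifiers we have already fetched (via client.GetModelsAsync); the enum and class names here are ours, not the SDK’s:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// What to do with a local model definition, given what is already deployed.
public enum ModelAction { Create, Skip, Conflict }

public static class ModelPlanner
{
    // existing:     DTMIs already in the ADT instance.
    // localId:      the @id (including version) of the local JSON model.
    // sameContents: whether the stored definition matches ours.
    public static ModelAction Plan(IEnumerable<string> existing, string localId, bool sameContents)
    {
        if (!existing.Contains(localId)) return ModelAction.Create;   // new id or new version
        return sameContents ? ModelAction.Skip                        // already deployed
                            : ModelAction.Conflict;                   // same id, different body
    }
}
```

Treating `Conflict` as a hard error is the strict posture; a looser pipeline might log it and skip, depending on how much compatibility you want to enforce.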
Should we find that a model exists for a previous version (i.e. our JSON file has a higher dtmi version) we can choose to decommission that model. This is a one-way operation, so we had better be careful to do this in a managed fashion. For instance, we might want to decommission a model only after it has been replaced for a period of time, so that we may have live updates to the system. If this is the case, we should be comfortable that all writers to the Azure Digital Twin have been upgraded.
When a model is decommissioned, new digital twins will no longer be able to be defined by this model. However, existing digital twins may continue to use this model. Once a model is decommissioned, it may not be recommissioned.
Should we choose to do that, once a new model version is created (say dtmi:elastacloud:core:NamedTwin;2) we might decommission the previous version.
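As a hedged sketch, selecting which deployed versions to retire might look like the helper below. The helper is our own; only DecommissionModelAsync (shown in the usage comment) is the real Azure.DigitalTwins.Core call:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Decommissioner
{
    // Given every DTMI currently deployed and the DTMI we have just
    // created, pick all older versions of the same model path.
    public static IEnumerable<string> OlderVersions(IEnumerable<string> deployed, string justCreated)
    {
        int sep = justCreated.LastIndexOf(';');
        string path = justCreated.Substring(0, sep);
        int version = int.Parse(justCreated.Substring(sep + 1));

        return deployed.Where(id =>
        {
            int s = id.LastIndexOf(';');
            return s > 0
                && id.Substring(0, s) == path
                && int.Parse(id.Substring(s + 1)) < version;
        });
    }
}

// Usage, with 'client' and 'existingIds' obtained as earlier in the article:
//   foreach (var id in Decommissioner.OlderVersions(existingIds, "dtmi:elastacloud:core:NamedTwin;2"))
//       await client.DecommissionModelAsync(id);   // one-way operation!
```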
The key thought process around Decommissioning relates to the choice you want to make around version compatibility with your code. The idea we take at Elastacloud is that we want to be able to be sure that the latest Git-held version of the DTDL model is available but also that previous versions should also be available for a period of time that we consider to be an SLA, until we are sure that all consumers have been updated to the latest version.
A strategy for decommissioning DTDL Models in Azure Digital Twins, shown as a workflow that checks an SLA
Other Considerations
Naming standards between .NET and JSON are different. We should name according to the framework that hosts the code, and use Serialization techniques to convert between naming divergences. For example, Properties in .NET start with a capital letter in many circumstances, whereas in JSON they tend to start with lowercase.
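In System.Text.Json that divergence can be handled with a naming policy rather than renaming properties on either side; a minimal sketch, with illustrative type names of our own:

```csharp
using System.Text.Json;

// .NET keeps PascalCase; the wire format stays camelCase.
public class RoomTwin
{
    public string Name { get; set; } = "";   // serializes as "name"
}

public static class TwinJson
{
    static readonly JsonSerializerOptions Options =
        new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };

    public static string Serialize(RoomTwin twin) => JsonSerializer.Serialize(twin, Options);
}
```

This keeps both conventions intact and makes the conversion a single, central configuration rather than per-property attributes.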
DTDL includes a set of standard semantic types that can be applied to Telemetries and Properties. When a Telemetry or Property is annotated with one of these semantic types, the unit property must be an instance of the corresponding unit type, and the schema type must be a numeric type (double, float, integer, or long).
.NET Tooling Approach
So far we have identified a few key components that we need to build in order to meet our best-practice goal.
A .NET application that deploys the models to the Azure Digital Twins instance, understands which versions of DTDL are already deployed and which are held locally, and helps assert compatibility.
A .NET application that holds POCOs that can represent DTDL deployed to Azure Digital Twins and can help marshal data between .NET and Azure Digital Twins.
This gives us two main categories of error conditions: deployment and runtime.
A tooling approach to deploying Azure Digital Twin DTDL model changes
CI/CD deployment
At Elastacloud we use our own `twinmigration` tool for managing this process. It is a dotnet global tool that we built to provide features designed for CI/CD purposes.
Since a dotnet global tool is a convenient way of distributing software into pipelines, we add a task to our CI/CD pipeline that takes the latest version of JSON files from a git repo, and validates them against what is already deployed in an ADT instance.
Following the output of the validation stage, we might also choose to run a deploy stage, which adds the models to the Azure Digital Twins instance.
Finally, we have a decommissioning step which makes “older” models unavailable for the creation of new twins, so that we maintain good data quality practices.
In Summary
For more information about what we’re doing with Azure Digital Twins, visit Intelligent Spaces on the Elastacloud website; we’ll be updating it regularly with information on our approaches. We also have some tools that are ready to go, such as Elastacloud.TwinMigration on the NuGet Gallery, that help you do the things we’ve described here!
Hello! In today’s “Voice of the Customer” blog, Chris Szorc, Director of IT Engineering for Gogo, explains how the company cut costs and streamlined their identity and access management as the pandemic was grounding their airline partners, drying up revenue, and forcing thousands of employees to work remotely. By leveraging their existing Azure subscription, Chris and her IT team were able to migrate thousands of internal and external users to Microsoft Azure Active Directory for simplified, secure access across their enterprise.
Editor’s Note:
This story began in May 2020 when Gogo served both Commercial Aviation and Business Aviation. In December 2020, Gogo’s Commercial Aviation business was sold to Intelsat. As a result, the structure and business model has changed drastically for Gogo, which now has approximately 350 employees and is solely focused on serving Business Aviation.
How to cut costs and simplify IAM during hard times
By Chris Szorc, Director of IT Engineering for Gogo
In 2020, Gogo was a provider of in-flight broadband internet services for commercial and business aircraft. We were based in Chicago, Illinois with 1,100 employees, and at the time we equipped more than 2,500 commercial and 6,600 business aircraft with onboard Wi-Fi services, including 2Ku, our latest in-flight satellite-based Wi-Fi technology.
As we all know, 2020 wasn’t a great year for the airline industry. Last May, the pandemic had drastically shrunk our revenue, forcing the company to cut costs wherever possible. A looming three-year renewal contract with Okta prompted my IT team to consider bringing all our identity and access management (IAM) under the Microsoft umbrella to cut costs and simplify access.
Favor security and simplicity
Pulling off a major migration to Microsoft Azure Active Directory (Azure AD)—when the IT team is shorthanded and working remotely—would be a challenge for anyone. For my team, the first consideration was security. We had to protect our PCI (payment card industry) status, as well as the custom apps that we create with our airline partners. We certify ourselves with ISO (International Organization for Standardization), and we pass our SOX (Sarbanes-Oxley Act) audits every year. As it happened, Deloitte was reviewing us, so the industry certifications for Azure AD and Microsoft 365 helped maintain our security standing as well. We made sure to get the most from our Microsoft agreement—including all the security tools in the Microsoft Azure tool set.
We were already using on-premises Active Directory, but we wanted a hybrid cloud identity model for the seamless single sign-on (SSO) experience for our users and applications. We collaborate with a lot of airlines and contractors, so hybrid access fits our model. Like us, you might see migration as an opportunity to reduce the number of redundant apps in your user base. At Gogo, we went app by app, figuring out how people were using each of them, and we saw that Microsoft could cover data analytics among other business functions, as well as IAM.
We were able to further consolidate and simplify by adopting the full Microsoft 365 suite of productivity tools. Microsoft Teams, in particular, was a hit with users. People were working from home because of the pandemic, and discovered they preferred Teams over Skype. Once our people started asking for it, that gave us the green light to roll out Teams companywide as a unified platform for online meetings, document sharing, and more.
Make use of vendor support
Times were tough enough already; we couldn’t allow migrating our multifactor authentication from Okta to Azure AD to disrupt workflow. We knew we couldn’t overwhelm our help desk with calls and tickets, so we chose to make the migration in waves of 100 users at a time.
My advice—take advantage of all the technical support that’s available. After all, it’s not as if you’ll have a complete test environment to train yourself. You have your production identity, domain, and your services—multifactor authentication, conditional access, sign-in—and if you don’t do it right, you’re severely impacting people.
No matter how qualified your IT team is, there’s a wealth of knowledge that a good vendor can provide. Microsoft FastTrack was included with our Azure AD subscription. We also used Netrix for guidance on bringing the migration in on time. FastTrack helped us know where to put people and how to organize—their entire mission is built around helping you complete a successful migration.
FastTrack also helped us untangle previous IAM implementations that were set up before my team was hired. They showed us where Okta Verify could be replaced with the latest best practices in multifactor authentication, enabling us to deliver simplified, up-to-date security with Azure AD. That’s the kind of issue you rarely anticipate during a migration, and it’s one where the right support proves invaluable.
Ensure maximum ROI
At Gogo, we’re already enjoying the advantages that come with unifying our IAM for simplicity and maximum return on investment (ROI). Since adopting Teams and other Microsoft 365 apps, we’ve been able to drop other services like Box and Okta—that saves the company money.
We’re doing federated sharing with Microsoft Exchange Online, sharing calendars with partner tenants, which has been great for planning meetings. We do entitlement management to set up catalog access packages with expiration policies, to stage workflow and access reviews for vendors and collaborators, rather than give them identities in our Gogo directory.
Our IT team seized on migration as an opportunity to implement Azure AD’s self-service password reset feature, which allows users to reset their password without involving the help desk. The decision to simplify your IAM solution will likely pay off in more ways than you can anticipate. We accomplished more than just a migration from Okta to Azure AD; Microsoft helped us streamline our IT services and provided us with direction for future improvements.
Learn more
I hope Gogo’s story of undertaking a daunting migration during tough times serves as inspiration for your organization. To learn more about our customers’ experiences, take a look at the other stories in the “Voice of the Customer” series.
Exim has released a security update to address multiple vulnerabilities in Exim versions prior to 4.94.2. A remote attacker could exploit some of these vulnerabilities to take control of an affected system.
CISA encourages users and administrators to review the Exim 4.94.2 update page and apply the necessary update. CISA also encourages users and administrators to review Center for Internet Security Advisory 2021-064 for more information.