Easily find anomalies in incidents and alerts


This article is contributed. See the original author and article here.

Microsoft 365 security Home page and Incidents page now include a trend graph of all the incidents and alerts over the last 24 hours.


This enables you to easily find spikes in your environment and tell whether anything abnormal is happening.


 


Idan_Pelleg_0-1620584251047.png


 


The new incidents trend graph view will also allow you to determine whether there are several alerts for a single incident, or whether your organization is under attack with several different incidents.


 


For example, some attacks will generate a lot of alerts in your organization, all of them related to the same incident. Seeing hundreds of alerts over time related to the same incident can help you understand that there is an emerging, growing attack, so that you can prioritize your incident response.


 


For more information on investigating incidents, see Investigate incidents in Microsoft 365 Defender.


 

Project 15 from Microsoft – A Story in Five Parts


This article is contributed. See the original author and article here.

MicrosoftTeams-image (8).png


 


 


Today, May 10, 2021, we launch the second Project 15 from Microsoft video. The next chapter of a story that started with asking the question, “What if?” 


I’m religiously anti-spoiler, so I will pause for you to watch the video.


 


 


But if you are asking yourself, “the second video?” let’s back up two years, and I will tell you a short history that brings us to today. Spoiler alert, it involves a first video.


 


Daisuke Nakahara at the filming of the first Project 15 video – September 2019


Part 1: What is Project 15 from Microsoft?


Project 15 started two years ago on a “what if” that Daisuke and I shared. What if we could figure out a way to connect our commercial IoT solutioning world to the scientific developer community? Could we apply our processes to bring our partner solutions to scale in a new realm, rather than reinventing wheels that were stalling projects and/or wasting grant money? If we connected each other’s worlds, we could share knowledge and accelerate desperately needed solutions for our planet. 


 


 


We asked others to join us on this adventure in learning and growth mindset. As we started our journey, Daisuke and I were unaware of most use cases in conservation, but we were willing to listen and learn to try to find the places we overlap. 


 


Dr. Eric Dinerstein, Director of the Biodiversity and Wildlife Solutions Program at RESOLVE, was my first mentor. Eric gave me a crash course on anti-poaching in midnight emails while he was filming an episode of Robert Downey Jr.’s YouTube series, The Age of A.I.: Saving the world one algorithm at a time, in The Mara. Eric has an incredibly rich history in conservation. It’s almost beyond words, so I will let this picture tell the story. In the photo below, Eric is collaring the first rhino in 1986. Eric explained to me, “The rhino was sedated by a trained vet who was an expert on immobilization of wild animals, being the head vet at the Kathmandu Zoo. When we injected the antidote (antagonist) in the rhino’s ear vein, it was on its feet in 30 seconds!” If you look closely, you can see curious elephants in the background.


 


Dr. Eric Dinerstein collaring the first Rhino at Kathmandu Zoo – c. 1986


 


With Eric’s encouragement, I landed on a promise: that I may not know how to collar a Rhino, but I would gather an army of developers like me who could help him and his friends save the planet. The more I learned, the more I realized that we are out of time for talking about conservation. We all need to get involved in whatever way we can to save endangered animals. Project 15’s name was derived from the statistic that we lose an elephant from the planet every 15 minutes, a fact that Eric taught me.


 


The first mission of Project 15 was to ring a bell. Not that Daisuke and I had any idea what that meant yet, but we had to try. 


 


The first Project 15 “napkin drawing” – April 2019


 


I pitched Lucas Joppa, Chief Environmental Officer at Microsoft, by email. Like Eric, he didn’t think it was an outlandish idea, and he joined us for our first video.


 


Lucas Joppa, Chief Environmental Officer at Microsoft with Sarah on filming day – September 2019


 


Eric was the first mentor, and soon, there were many more. We wouldn’t be here today if it weren’t for the space we were given to learn and the grace to be wrong and “try again.”


 


Part 2: The Phone Rings


 


The first call.


It took three months before the phone rang. By phone, I mean email. On January 3, 2020, we received an email from Bastiaan den Braber with a company called Zambezi Partners. Bastiaan found the video, our web page, and wrote us. It was a cold pitch from him explaining his business, and I don’t think he thought he would get a direct reply from me.


p15-zambezi-partners-logo.png


Zambezi Partners is a start-up focused on conservation with a partner ecosystem. The group wanted to build a platform to grow into a professional services firm that focused on conservation IoT as well as other sustainability use cases.


 


This was not who we thought would call. We didn’t know there were companies that wanted to focus on building IoT solutions in this area. This was interesting.  


 


The second call.


Call number two came from Sonam Tashi Lama, Program Coordinator of the Red Panda Network. Sonam had been forwarded the Project 15 video and asked if we could help. Did you know that Red Pandas were discovered before Giant Pandas? They aren’t related.


 


p15-redpandanetwork.png


 


We spent our nights meeting with Sonam, who is located at the base of the Himalayas in Tibet, to create what would be the digital transformation of the Red Panda Network.  


 


The third call was Yoko Watanabe, Global Manager of the GEF Small Grants Programme implemented by the United Nations Development Program.


 


p15-sgp.png


 


Yoko heard about Project 15 and asked for me and Daisuke to meet with her team.  We knew that our engagement model was working as we were moving right along with our Red Panda Network project and were working with Zambezi Partners on their business model workshop and architectural design session, which is where we create the architecture of solutions on Azure with a partner.  


 


Did I mention we did this on nights and weekends?  Which is fine if you have a couple of projects.


 


The GEF Small Grants Programme has over 3,500 projects currently funded in a range of sustainability areas of focus, from conservation of species to urban sustainability, and has funded 25,000 projects historically.


 


The goals we established with Yoko and her team were:



  • Learn each other’s business language and processes

  • Discover the patterns that match commercial solution engagements

  • Design the engagement model for scale


Yoko proceeded to identify three projects for us to start with.


 


Part 3: Getting to Scale, the Project 15 Open Platform is born


 


So, to level set, by February 2020, Project 15 was just the two of us. A start-up named Zambezi Partners. An NGO saving red pandas. And, the UNDP’s GEF Small Grants Programme with their thousands of projects.


 


This is a scale problem to solve. 


 


We started with the three projects from Yoko’s team: two in St. Lucia and one in Panama. We realized the developers didn’t need to learn the IoT “plumbing” every time, nor did they want to. Daisuke’s metaphor of not needing to know how to build a piano to write music is spot on. 


 


p15-open-platform.png


 


The next “what if” was to build 80% of the IoT infrastructure that these solutions had in common and put it on GitHub. In building the Project 15 Open Platform, we wanted to spin up a solution that enabled scientific developers to push a button and share it with the open-source community and universities to leverage. 


 


Part 4: Elephants, Graphs, and Platform Zero by Zambezi Partners – oh my


 


Two years ago, when Project 15 began, I started to apply graph theory to sustainability and conservation. 


 


The first “napkin drawing” of the Sustainability Graph – September 2019


 


About six months ago, Azure Digital Twins (ADT) became graph-based. 


 


Daisuke updated the Project 15 Open Platform with ADT to give developers the choice to spin up the Azure Digital Twins version. It is a simple flag on the deployment template: pick “true” for Azure Digital Twins when you launch it.


 


Long story short, Zambezi Partners took the graph version of the Project 15 Open Platform and commercialized it to make Platform Zero. We had already designed their system with them and one of their device partners, based on IoT Hub and other PaaS services in Azure. But when I started talking about my graph theory, and Azure Digital Twins then became graph enabled, Bastiaan saw the potential immediately.


 


Sustainability Graph: Animal Conservation Sub-Graph v1.0


 


A graph model is a more sophisticated way to model systems in conservation. By putting the devices in this graph, there is more awareness of the relationships between devices within an environment, plus the ability to model entities like the animals themselves. I recently wrote a LinkedIn article that walks through the Sustainability Graph using the animal conservation sub-graph that I describe here in more detail.


 


What is so exciting about the approach that Platform Zero is taking by putting conservation up on a graph is that it enables the crossing of the gap between park-side and justice-side systems and is prepared to integrate complex processes: the processes of protecting an animal with connected devices and the processes of detecting animal trafficking.


 


Part 5: Who is Kate Gilman Williams?


 


Kate is the founder of Kids Can Save Animals. At the age of nine, she co-authored a book, “Let’s Go on Safari,” to raise awareness and advocate for animals, bringing attention to issues like poaching and human-wildlife conflict.


 


kcsa-black.png


 


Kate became aware of Project 15 from Microsoft and reached out to me on Instagram. She had an idea, and it was a good one. Should you meet her, you will hear the urgency in her voice. She will tell you that we are out of time. She will stress to you that there will be no elephants by the time she is out of college at the rate they are disappearing. She will inspire you to act.


 


Kate said, “What if I could break the timeline? What if I could build an army of kids that can learn to use tools like the Project 15 Open Platform and fight for the planet?” 


 


Kate Gilman Williams at the Care For Wild Rhino Sanctuary


 


She made three very important points in her pitch:
              1. It will fall on her generation to fix the Earth.
              2. “If you wait until I’m older to teach me, there will be nothing left to save.”
              3. Given the opportunity, her generation will be part of the solution.


 


Her pitch was to make a learning club, called Club 15, that she could use as a platform to teach her generation technology through applied use cases in conservation and sustainability. Could I help her architect such a club?  


 


The challenge was, of course, that she wanted to create a club that she, herself, needed to join. It’s a Catch-22. So, I designed a game: I would play a kid from the future who had been a member of Club 15 and had built a time machine to go back in time to teach her how to build the club so I could exist.


 


Kate was “in.” To teach her the concepts of IoT and ML, I started with the IoT Learning Path developed by our IoT Developer Advocacy team. Kate ordered a device and started to code. She then added a new element to her club, GitHub. She asked if we could use something like the Animal Detection System lab from Module 3 of the IoT Learning Path designed by Henk Boelman. 


 


After she was up to speed on the concepts of Azure and IoT, we had an Architectural Design Session for what Club 15 would look like.


 


Club 15 from Kids Can Save Animals Framework


 


Kate uses 15-minute interviews with experts from three categories of professionals to teach concepts: 1) Scientists double-click on a conservation topic and dig into how technology is used within their specialty; 2) Technologists expand on a tech topic; and 3) Advocates, who may be working in other non-scientific or non-technical fields, discuss interesting ways to weave advocacy into their lives and work.


 


p15-kcsa-format-overview.png


 


Inspired by the Microsoft focus pillars of sustainability, Kate designed four clubhouses each with a sustainability focus topic: Biodiversity, Water, Waste, and Energy/CO2. The first Club 15 Clubhouse releases today, May 10th, focusing on Biodiversity and Machine Learning with the next one landing in the Fall of 2021 focusing on Water and Sensors. 


 


We worked with Paul DeCarlo and Henk Boelman from our IoT Developer Advocacy group to contribute to Kate’s first lab to teach how to use Custom Vision. As her project grows, other technology partners will follow our example to contribute more learning labs.


 


Club 15 from Kids Can Save Animals is remarkable in that it is designed to speak to a spectrum of learners: those from the tech side learn the conservation use cases from advocates and scientists, while advocates learn more about technology concepts and how they are applied to conservation and sustainability. Everyone is welcome.


 


Joining Kate in her launch of Club 15 are some incredible guests that will share their knowledge.



And the amazing elephants at the Sheldrick Wildlife Trust!


 


Kate with the orphans at the Sheldrick Wildlife Trust


 


The Butterfly Effect of Innovation


 


You never know where an idea will come from. You never know where it will take you. Project 15, if you follow it all the way back, starts with my cat Thomas and me rescuing her from a burning building. That moment in time led to Project Edison, which in turn led to Project 15. 


 


Every moment of our lives is created by countless events. Sometimes the conundrum is that some events may be regrettable or painful, but without them you wouldn’t be where you are today. The moment we are in with the Earth was created by countless events. It’s dire, and I would be remiss not to say that here.


 


But I’ll tell you a secret I have learned. With all the bad news and the terrible statistics that we can drown in if we aren’t careful, there is a discovery down the path if you choose to follow it. Hope. There is so much hope in this solutioning community. Together, we can fix this place.


 


There have been many incredible people who have joined us on the Project 15 journey, the ‘Friends of Project 15.’ Daisuke and I are excited to continue with Yoko and her team at the GEF Small Grants Programme implemented by the UNDP as we work to unlock scale. Each day we work with our partners to innovate on IoT solutions for sustainability, from smart cities to smart manufacturing to smart farms, with “smart” now very often becoming interchangeable with “sustainable”.


 


Today, Daisuke and I pass the “what if” baton that was the original spirit of Project 15 to Kate Gilman Williams and Club 15 from Kids Can Save Animals.


 


What if you could make a club and ask everyone to join? You never know, kid… it just might work.


 


p15-icon.png


 

New Postgres superpowers in Hyperscale (Citus) with Citus 10


This article is contributed. See the original author and article here.

PostgreSQL is an excellent database for a wide range of workloads. Traditionally, the only problem with Postgres is that it is limited to a single machine. If you are using the Azure Database for PostgreSQL managed service, that limitation no longer applies to you because you can use the built-in Hyperscale (Citus) option—to transparently shard and scale out both transactional and analytical workloads. And Hyperscale (Citus) just keeps getting better and better.


 


The heart of Hyperscale (Citus) is the open source Citus extension which extends Postgres with distributed database superpowers. Every few months we release a new version of Citus. I’m excited to tell you that the latest release, Citus 10, is now available in preview on Hyperscale (Citus) and comes with spectacular new capabilities:


 



  • Columnar storage for Postgres: Compress your PostgreSQL and Hyperscale (Citus) tables to reduce storage cost and speed up your analytical queries!

  • Sharding on a single Citus node (Basic Tier): With Basic Tier, you can shard Postgres on a single node, so your application is “scale-out ready”. Also handy for trying out Hyperscale (Citus) at a much lower price point, starting at $0.27 USD/hour.[1]

  • Joins and foreign keys between local PostgreSQL tables and Citus tables: Mix and match PostgreSQL and Hyperscale (Citus) tables with foreign keys and joins.

  • Function to change the way your tables are distributed: Redistribute your tables in a single step using new alter table functions.

  • Much more: Better naming, improved SQL & DDL support, simplified operations.


 


These new Citus 10 capabilities change what Hyperscale (Citus) can do for you in some fundamental (and useful) ways.


 


With Citus 10, Hyperscale (Citus) is no longer just about sharding Postgres: you can use the new Citus columnar storage feature to compress large data sets. And Citus is no longer just about multi-node clusters: with Basic Tier in Hyperscale (Citus), you can now shard on a single node to be “scale-out-ready”. Finally, Hyperscale (Citus) is no longer just about transforming Postgres into a distributed database: you can now mix regular (local) Postgres tables and distributed tables in the same Postgres database.


 


In short, Hyperscale (Citus) in Azure Database for PostgreSQL now empowers you to run Postgres at any scale.


 


Let’s dive in!


 


One of our favourite Postgres memorabilia is the PostgreSQL 9.2 race car poster with the signatures of all the committers from the PGCon auction in 2013. Since Citus 9.2, our open source team has been creating a new racecar image for each new Citus open source release. With Citus 10 giving you columnar, single node (Basic tier), & so much more, the Postgres elephant can now go to any scale!


 


Columnar storage for PostgreSQL with Hyperscale (Citus)


 


The data sizes of some new Hyperscale (Citus) customers are truly gigantic, which meant we needed a way to lower storage cost and get more out of the hardware. That is why we implemented columnar storage for Citus. Citus Columnar can give you compression ratios of 3x-10x or more, and even greater I/O reductions. The new Citus columnar feature is available in:


 



  • Citus 10 open source: you can download the latest Citus packages here

  • Hyperscale (Citus) in Azure Database for PostgreSQL: at the time of writing, the Citus 10 features are in preview in Hyperscale (Citus). So if you want to try out the new Citus columnar feature, you’ll want to turn the preview features on in the portal when provisioning a new Hyperscale (Citus) server group. Of course, depending on when you read this blog post, these Citus 10 features might already be GA in Hyperscale (Citus).


 


The best part: you can use columnar in Hyperscale (Citus) with or without the Citus scale-out features! More details about columnar table storage can be found in our Hyperscale (Citus) docs.


 


Our Citus engineering team has a long history with columnar storage in PostgreSQL, as we originally developed the cstore_fdw extension, which offered columnar storage via the foreign data wrapper (fdw) API. PostgreSQL 12 introduced “table access methods”, which allow extensions to define custom storage formats in a much more native way.


 


Citus makes columnar storage available in PostgreSQL via the table access method APIs, which means that you can now create Citus columnar tables by simply adding USING columnar when creating a table:


 

CREATE TABLE order_history (…) USING columnar;

 


If you provision a row-based (“heap”) table that you’d like to later convert to columnar, you can do that too, using the alter_table_set_access_method function:


 

-- compress a table using columnar storage (indexes are dropped)
SELECT alter_table_set_access_method('orders_2019', 'columnar');

 


When you use Citus columnar storage, you will typically see a 60-90% reduction in data size. In addition, Citus columnar will only read the columns used in the SQL query. This can give dramatic speed ups for I/O bound queries, and a big reduction in storage cost.


 


Compared to cstore_fdw, Citus columnar has a better compression ratio thanks to zstd compression. Citus columnar also supports rollback, streaming replication, archival, and pg_upgrade.


 


There are still a few limitations with Citus columnar to be aware of: Indexes and update/delete are not yet supported, and it is best to avoid single-row inserts, since compression only works well in batches. We plan to address these limitations in future Citus releases, but you can also avoid them using partitioning.


 


If you partition time series tables by time, you can use row-based storage for recent partitions to enable single-row, update/delete/upsert and indexes—while using columnar storage to archive data that is no longer changing. To make this easy, we also added a function to compress all your old partitions in one go:


 

-- compress all partitions older than 7 days
CALL alter_old_partitions_set_access_method('order_history', now() - interval '7 days', 'columnar');

 


This procedure commits after every partition to release locks as quickly as possible. You can use pg_cron to run this new alter function as a nightly compression job.
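As a minimal sketch, assuming the pg_cron extension is enabled on your coordinator node (the schedule and table name below are illustrative), the nightly job could be registered like this:

-- compress partitions older than 7 days every night at 03:00
SELECT cron.schedule('0 3 * * *',
  $$CALL alter_old_partitions_set_access_method('order_history', now() - interval '7 days', 'columnar')$$);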


 


To learn more, check out Jeff Davis’ blog post: Citus 10 brings columnar compression to Postgres. Jeff also created a video demo; if you’re a more visual person, this Citus columnar demo might be a good way to get acquainted.


 


Starting with Basic Tier in Hyperscale (Citus)—to be “scale-out ready”


 


We often think of Hyperscale (Citus) as “worry-free Postgres”, because Citus takes away the one concern you may have when choosing Postgres as your database: reaching the limits of a single node. However, when you migrate a complex application from Postgres to Hyperscale (Citus), you may need to make some changes to your application to handle restrictions around unique- and foreign key-constraints and joins, since not every PostgreSQL feature has an efficient distributed implementation.


 


In Azure, the easiest way to scale your application on Postgres without ever facing the cost of migration (and be truly worry-free) is to use Hyperscale (Citus) from day one, when you first build your application. Applications built on Citus are always 100% compatible with regular PostgreSQL, so there is no risk of lock-in. The only downside of starting on Hyperscale (Citus) so far was the cost and complexity of running a distributed database cluster, but this changes in Citus 10. With Citus 10 and the new Basic tier in Hyperscale (Citus), you can now shard your Postgres tables on a single Citus node to make your database “scale-out-ready”.


 


To get going with Hyperscale (Citus) on a single node, this post about sharding Postgres with Basic tier is a good place to start. Be sure to enable preview features in the Azure portal when provisioning Azure Database for PostgreSQL, and then select the new “Basic Tier” feature that’s available in preview on Hyperscale (Citus). As of today, you can provision Basic tier for $0.27 USD/hour in US East 1. This means that you can try out Hyperscale (Citus) at a much lower price point: roughly 8 hours of kicking the tires will only cost you $2-3 USD.


 


Diagram 1: When provisioning the Hyperscale (Citus) deployment option in the Azure portal for Azure Database for PostgreSQL, you’ll now have two choices: Basic Tier and Standard Tier.


 


Once connected, you can create your first distributed table by running the following commands:


 

CREATE TABLE data (key text primary key, value jsonb not null);
SELECT create_distributed_table('data', 'key');

 


The create_distributed_table function will divide the table across 32 (hidden) shards that can be moved to new nodes when a single node is no longer sufficient.


 


You may experience some overhead from distributed query planning, but you will also see benefits from multi-shard queries being parallelized across cores. You can also make distributed, columnar tables to take advantage of both I/O and storage reduction and parallelism.
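As a minimal sketch (the table name and columns here are just an illustration), a distributed columnar table is simply a columnar table that you then distribute:

-- a columnar table that is also sharded by device_id
CREATE TABLE events (device_id bigint, event_time timestamptz, payload jsonb) USING columnar;
SELECT create_distributed_table('events', 'device_id');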


 


The biggest advantage of distributing Postgres tables with Basic tier in Hyperscale (Citus) is that your database will be ready to be scaled out using the Citus shard rebalancer.
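In Hyperscale (Citus) you add worker nodes from the Azure portal; on a self-managed Citus cluster, a rough sketch of the same idea (host name and port are placeholders) looks like this:

-- register an extra worker node, then move shards onto it
SELECT citus_add_node('10.0.0.5', 5432);
SELECT rebalance_table_shards();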


 


Joins & foreign keys between PostgreSQL and Citus tables


 


With the new Basic Tier feature in Hyperscale (Citus) and the shard rebalancer, you can be ready to scale out by distributing your tables. However, distributing tables does involve certain trade-offs, such as extra network round trips when querying shards on worker nodes, and a few unsupported SQL features.


 


If you have a very large Postgres table and a data-intensive workload (e.g. the frequently-queried part of the table exceeds memory), then the performance gains from distributing the table over multiple nodes with Hyperscale (Citus) will vastly outweigh any downsides. However, if most of your other Postgres tables are small, then you might end up having to make additional changes without much additional benefit.


 


A simple solution would be to not distribute the smaller tables at all. In most Hyperscale (Citus) deployments, your application connects to a single coordinator node (which is usually sufficient), and the coordinator is a fully functional PostgreSQL node. That means you could organize your database as follows:


 



  • convert large tables into Citus distributed tables,

  • convert smaller tables that frequently JOIN with distributed tables into reference tables,

  • convert smaller tables that have foreign keys from distributed tables into reference tables,

  • keep all other tables as regular PostgreSQL tables local to the coordinator.


 


Diagram 2: Example of a data model where the really large table (clicks) is distributed. Because the Clicks table has a foreign key to Ads, we turn Ads into a reference table. Ads also has foreign keys to other tables, but we can keep those other tables (Campaigns, Publishers, Advertisers) as local tables on the coordinator.
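A minimal sketch of this layout, using the table names from Diagram 2 (the distribution column is an assumption for illustration):

-- the very large clicks table is distributed; its frequently-joined parent becomes a reference table
SELECT create_distributed_table('clicks', 'ad_id');
SELECT create_reference_table('ads');
-- campaigns, publishers, and advertisers stay as regular local Postgres tables on the coordinator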


 


That way, you can scale out CPU, memory, and I/O where you need it, and minimize application changes and other trade-offs where you don’t. To make this model work seamlessly, Citus 10 adds support for two important features:


 



  • foreign keys between local tables and reference tables

  • direct joins between local tables and distributed tables


 


With these new Citus 10 features in Hyperscale (Citus), you can mix and match PostgreSQL tables and Citus tables to get the best of both worlds without having to separate them in your data model.
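Continuing the Diagram 2 example, here is a hedged sketch of what those two features enable (table and column names are assumptions, and the foreign key presumes ad_id is the primary key of ads):

-- a regular local Postgres table on the coordinator with a foreign key to the ads reference table
CREATE TABLE ad_review_notes (ad_id bigint REFERENCES ads (ad_id), note text);

-- a direct join between a local table and the distributed clicks table
SELECT n.note, count(*)
FROM ad_review_notes n
JOIN clicks c ON c.ad_id = n.ad_id
GROUP BY n.note;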


 


Alter all the things!


 


When you distribute a Postgres table with Hyperscale (Citus), choosing your distribution column is an important step, since the distribution column (sometimes called the sharding key) determines which constraints you can create, how (fast) you can join tables, and more.


 


Citus 10 adds the alter_distributed_table function so you can change the distribution column, shard count, and co-location of a distributed table. This blog post walks through how, when, and why to use alter_distributed_table with Hyperscale (Citus).


 

-- change the distribution column to customer_id
SELECT alter_distributed_table('orders',
                               distribution_column := 'customer_id'); 

-- change the shard count to 120
SELECT alter_distributed_table('orders',
                               shard_count := 120);

-- Co-locate with another table
SELECT alter_distributed_table('orders',
                               distribution_column := 'product_id', 
                               colocate_with := 'products');

 


Internally, alter_distributed_table reshuffles the data between the worker nodes, which means it is fast and works well on very large tables. We expect this to make it much easier to experiment with distributing your tables without having to reload your data.


 


You can also use the alter_distributed_table function in production (it’s fully transactional!), but you do need to (1) make sure that you have enough disk space to store the table several times, and (2) make sure that your application can tolerate blocking all writes to the table for a while.


 


Many other features in Citus 10—now available in preview in Hyperscale (Citus)


 


And there’s more!



  • DDL support
    More DDL commands work seamlessly on distributed Citus tables, including CREATE STATISTICS, ALTER TABLE .. SET LOGGED, and ALTER SCHEMA .. RENAME.


  • SQL support
    Correlated subqueries can now be used in the SELECT part of the query, as long as the distributed tables are joined by their distribution column.


  • New views to see the state of your cluster: citus_tables and citus_shards
    The citus_tables view shows Citus tables and their distribution column, total size, and access method. The citus_shards view shows the names, locations, and sizes of individual shards (see the example below).
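A quick way to peek at both views from psql (using SELECT * keeps this sketch independent of the exact column list):

-- distributed and reference tables with their distribution column, size, and access method
SELECT * FROM citus_tables;

-- individual shards with their names, locations, and sizes
SELECT * FROM citus_shards;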


 


Two easy ways to start playing with Citus 10


 


If you are as excited as we are and want to play with these new Citus 10 features, doing so is now easier than ever.


 



  1. The Basic Tier in Hyperscale (Citus) makes it very cheap to get started with a managed Citus node in our Azure Database for PostgreSQL managed service. (There’s a Basic tier Quickstart in our Azure docs, too.) 


  2. And you can also run Citus open source on your laptop as a single Docker container! Not only is the single docker run command an easy way to try out Citus—it gives you functional parity between your local dev machine and using Citus in the cloud.


 

# run PostgreSQL with Citus on port 5500
docker run -d --name citus -p 5500:5432 -e POSTGRES_PASSWORD=mypassword citusdata/citus

# connect
psql -U postgres -d postgres -h localhost -p 5500

 


You can also check out our lovely new Getting started with Citus page for more resources on how to get started—my teammates have curated some good learning tools there, whether your preferred learning mode is reading, watching, or doing.


 


More deep-dive blog posts about new Citus 10 capabilities


 


And since the Citus 10 open source release rolled out, we’ve also published a bunch of deep-dive blog posts (plus a demo!) about the spectacular new capabilities in Citus 10.


 



 


Finally, a big thank you to all of you who use Hyperscale (Citus) to scale out Postgres and who have taken the time to give feedback and be part of our journey. If you’ve filed issues on GitHub, submitted PRs, talked to our @citusdata or @AzureDBPostgres team on Twitter, signed up for our monthly Citus technical newsletter, or joined our Citus Public community Q&A… well, thank you. And please, keep the feedback coming. You can always reach our product team via the Ask Azure DB for PostgreSQL email address too.

We can’t wait to see what you do with the new Citus 10 features in Hyperscale (Citus)!





Footnotes



  1. As of the time of publication, in the East US region on Azure, the cost of a Hyperscale (Citus) Basic tier with 2 vCores, 8 GiB total memory, and 128 GiB of storage on the coordinator node is $0.27/hour or ~$200/month


Automate Incident Assignment with Shifts for Teams


This article is contributed. See the original author and article here.

Azure Sentinel Incidents contain detection details which enable security analysts to investigate using a graph view and gain deep insights into related entities. The responsiveness of a security analyst towards the triggered incidents (also known as Mean Time To Acknowledge – MTTA) is crucial as being able to respond to a security incident quickly and efficiently will reduce the incident impact and mitigate the security threats.


 


The newly introduced Automation Rules allow you to automatically assign incidents to an owner with the built-in action. This is extremely useful when you need to assign specific incidents to a dedicated SME. It will reduce the time of acknowledgement and ensure accountability for each incident.


 


However, some organizations have a group of analysts working on different shift schedules and require the ability to assign an incident to an analyst automatically based on the working schedule to improve the MTTA.


 


In this blog, I will discuss how to extend the incident assignment capability in Azure Sentinel by using a Playbook to rotate user assignments based on shift schedules. Plus, I will also discuss how you could manage incident assignments for multiple support groups at the end of the blog.


 


 


Considerations and design decisions


 


Before we dive into the Playbook, let’s discuss some of the important points taken into consideration and the design decisions when implementing this incident assignment Playbook.


 



  • Scheduling tool

    • Shifts for Teams is used as the scheduling tool because it is available as part of the Microsoft Teams and it provides the ability to create and manage employee schedules.

    • It is easier to automate incident assignment when there is a centralized schedule management tool to keep track of employees’ timesheet or availability.




 



  • Assignment criteria

    • The goal is to assign the incidents equally across all analysts. Hence, the analyst with the least number of incidents in current shift will be assigned first.

    • We also need to consider the average time a security analyst takes to resolve a security incident (also known as Mean Time To Resolve – MTTR). In this Playbook, I have set a default value of 1 hour as the MTTR (a configurable variable) and I am using it as a condition where a security analyst must have at least 1 hour remaining in the shift to be eligible for incident assignment. For example, if a security analyst is about to go off shift in 30 minutes, the incident won’t be assigned to that analyst as the remaining time is less than the default value of 1 hour.




 



  • Notification

    • It is important to notify the assignee when an incident is being assigned.

    • In this Playbook, an email will be sent to the assignee and a comment will be added to the incident on the incident assignment.




 


 


What is Shifts for Teams?


 


Shifts is a schedule management application in Microsoft Teams that helps you create, update, and manage schedules for your team. Shifts is enabled by default for all Teams users in your organization. You can add the Shifts app to your Teams menu by clicking the ellipsis (…) and selecting Shifts from the app list.


 


Shifts3.png


 


The first step to get started in Shifts is to populate schedules for your team. You can either create a schedule from scratch (for yourself or on behalf of your team members) or import an existing one from Excel. In terms of permissions, you need to be an Owner of the team to create the schedule. The schedules will not be visible to your team members until you publish them by clicking the “Share with team” button.


 


Here is an example of what a Shifts schedule looks like. If you’re an owner of multiple teams, you can toggle between different Shifts schedules to manage them.


 


pic2.png


 


 


The Logic App


 


Download link:


 


Here is the link to the Logic App template.


 


 


Prerequisites:


 


1. User account or Service Principal with Azure Sentinel Responder role


– Create or use an existing user account or Service Principal or Managed Identity with Azure Sentinel Responder role.


– The account will be used in Azure Sentinel connectors (Incident Trigger, Update incident and Add comment to incident) and a HTTP connector.


– This blog will walk you through using System Managed Identity for the above connectors.


 


2. Setup Shifts schedule


– You must have the Shifts schedule setup in Microsoft Teams.


– The Shifts schedule must be published (Shared with team).


 


3. User account with Owner role in Microsoft Teams


– Create or use an existing user account with Owner role in a Team.


– The user account will be used in Shifts connector (List all shifts).


 


4. User account or Service Principal with Log Analytics Reader role


– Create or use an existing user account with Log Analytics Reader role on the Azure Sentinel workspace.


– The user account will be used in Azure Monitor Logs connector (Run query and list results).


 


5. An O365 account to be used to send email notification.


– The user account will be used in O365 connector (Send an email).


 


 


Post Deployment Configuration:


 


1. Enable Managed Identity and configure role assignment.


 


a) Once the Playbook is deployed, navigate to the resource blade and click on Identity under Settings.


 


SystemMI.png


 


          


          b) Select On under the System assigned tab. Click Save and select Yes when prompted.


 


   c) Click on Azure role assignments to assign role to the Managed Identity.


 


          d) Click on + Add role assignment.


 


   e) Select Resource group under Scope and select the Subscription and Resource group where the Azure Sentinel Workspace is located.


       (Note: it’s the subscription and resource group of the Azure Sentinel workspace, not the Logic App).


 


   f) Select Azure Sentinel Responder under Role and click Save.


 


AssignResponderRoleMI.png


 


         


             


2. Configure connections.


 


          a) Edit the Logic App to find the connectors below that are marked with an invalid connection icon.


               – When Azure Sentinel incident creation rule was triggered.


               – List all shifts.


               – Run query and list results – Get user with low assignment.


               – Update incident.


               – Add comment to incident.


               – Send an email.


 


           b) We will leverage the Managed Identity we configured in step 1 for the following Azure Sentinel Connectors


               (hint: these are the ones with Azure Sentinel logo):


               – When Azure Sentinel incident creation rule was triggered.


               – Update incident.


               – Add comment to incident.


 


               i) On the first connector (trigger), select Add new


ConfigureMIConnection0.png


 


               ii) Click “Connect with managed Identity”.


ConfigureMIConnection.png


 


  iii) Specify the connection name and click Create.


ConfigureMIConnection2.png


 


  iv) On the remaining Azure Sentinel Connectors, select the connection you created earlier.


ConfigureMIConnection3.png


 


     c) Next, fix the remaining connectors below by adding a new connection to each connector and signing in with the accounts described under Prerequisites.


– List all shifts.


– Run query and list results – Get user with low assignment.


– Send an email.


 


 


3. Select the Shifts schedule


 


a) On the List all shifts connector, click the X next to the Team field so that the drop-down list appears.


 


Pic3 (1).png


 


b) Select the Teams channel with your Shifts schedule from the drop-down list.


 


ListAllShifts.png


 


c) Save the Logic App once you have completed the above steps.


 


 


Assign the Playbook to Analytic Rules using Automation Rules


 


1) Before you begin, ensure you have the following permissions:


    – Logic App Contributor on the Playbook.


    – Owner permission on the Playbook’s resource group (to grant Azure Sentinel permission to the playbooks’ resource groups).


 


2) Next, create an Automation Rule to assign the Playbook to your analytic rules with your specified conditions.


 


3) In the example below, I am creating an Automation Rule to run the incident assignment Playbook for selected Analytic rules where the severity equals “High” or “Medium”.


 


AutomationRule.png


 


4) Under Actions, select Run Playbook and choose the Playbook.


 


    Note: If the Playbook appears greyed out in the drop-down list, that means Azure Sentinel doesn’t have permission to run this Playbook.


 


          RunPlaybookPermission.png


You can grant permission on the spot by selecting the Manage playbook permissions link and granting permission to the playbooks’ resource groups.


 


RunPlaybookPermission2.png


 


5) After that, you will be able to select the Playbook. Click Apply.


 


RunPlaybookPermission3.png


 


Note: If you received the error message “Caller is missing required Playbook triggering permissions” when saving the Automation Rule, that means you do not have the “Logic App Contributor” permission on the Playbook.


 


 


Incident Assignment Logic


 


1) When an incident is generated, it triggers the Logic app to get a list of analysts who are on-shift at that time (analysts with time-off will be excluded from the incident assignment).


 


2) The analyst with the fewest incidents assigned during the current shift will be assigned the incident first. When there are multiple analysts with the same incident count, the selection will be based on the order of the analysts’ AAD objectIds.


 


3) Analysts must have at least 1 hour left (default value) in their shift to be eligible for assignment.


    For example, if an analyst’s shift ends at 6pm, the analyst will not be assigned incidents between 5pm and 6pm.


 


    You can change the variable value of “ExpectedWorkHoursPerIncident” to 0 if you want the analyst to be assigned during the final shift hour.


 


pic4.png


 


4) Here is a sample assignment flow for your reference:


 


    In this example, the following shift schedules have been configured for 4 analysts.


 


























User Object Id    Shift Schedule
A1                8am to 6pm
A2                8am to 6pm
A3                4pm to 2am
A4                4pm to 2am



 


Here is how the incident assignment would work based on the incident assignment logic:


 



















































Incident Creation Time    Assign to    Total
8:00am                    A1           A1=1, A2=0, A3=0, A4=0
9:45am                    A2           A1=1, A2=1, A3=0, A4=0
2:00pm                    A1           A1=2, A2=1, A3=0, A4=0
4:00pm                    A3           A1=2, A2=1, A3=1, A4=0
4:10pm                    A4           A1=2, A2=1, A3=1, A4=1
5:00pm                    A3*          A1=2, A2=1, A3=2, A4=1
5:50pm                    A4*          A1=2, A2=1, A3=2, A4=2
11:20pm                   A3           A1=2, A2=1, A3=3, A4=2

*At 5:00pm and 5:50pm, A3 and A4 are assigned instead of A2 because ExpectedWorkHoursPerIncident is set to 1.



 


 


 


Notification


 


Email Notification:


 



  1. When an incident is assigned, the incident owner will be notified via email.


 



  2. The email body has a direct link to the incident page and a banner with a color mapped to the incident’s severity (High=red, Medium=orange, Low=yellow, and Informational=grey).


 Email.png


 


 


Incident Comment:


 



  1. A comment will be added to the incident recording the assignment, along with the name of the Playbook.


 Comment.png


 


 


 


 


Managing Incident assignment for multiple Support Groups


 


There are times when you need to assign incidents based on different incident types and support groups. For example, Team A is responsible for Azure AD incidents, Team B is responsible for Office 365 incidents while the rest of the incidents will go to Team C.



This can be achieved by creating a Shifts schedule for each support group and deploying a separate Playbook for each group. Then assign the Logic App to the analytic rules accordingly, as illustrated in the diagram below:


 


MultipleSupportGroup.png


 


 


Below are the sample Automation Rules created for multiple Shifts channels (Support Groups).


 


TeamABC.png


 


Each Automation Rule is configured for different Team:


 


EditAutomationTeamA.png


Automation Rule for Team A 


 


 


 


EditAutomationTeamB.png


Automation Rule for Team B


 


 


Summary


 


I hope you find this useful. Give it a try, and hopefully it will help reduce the time to acknowledgement (especially for critical incidents) in your environment.


 


 


Special thanks to @liortamir, @Yaniv Shasha, @edilahav, and @Ofer_Shezaf for the review.

Azure Sentinel Side-by-Side with Splunk via EventHub


This article is contributed. See the original author and article here.

As highlighted in my last blog posts (for Splunk and Qradar) about Azure Sentinel’s Side-by-Side approach with 3rd Party SIEM, there are some reasons that enterprises leverage Side-by-Side architecture to take advantage of Azure Sentinel capabilities.


 


For my last blog post I used the Microsoft Graph Security API Add-On for Splunk for Side-by-Side with Splunk. Another option would be to implement a Side-by-Side architecture with Azure Event Hub. Azure Event Hubs is a big data streaming platform and event ingestion service that can receive and process millions of events per second (EPS). Data sent to an Azure Event Hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.


 


This blog describes the usage of the Splunk Add-on for Microsoft Cloud Services app in a Side-by-Side architecture with Azure Sentinel.


 


For the integration, an Azure Logic app will be used to stream Azure Sentinel Incidents to Azure Event Hub. From there Azure Sentinel Incidents can be ingested into Splunk.


 


Let’s go with the configuration!


 


Preparation


The following tasks describe the necessary preparation and configurations steps.



  • Onboard Azure Sentinel

  • Register an application in Azure AD

  • Create an Azure Event Hub Namespace

  • Prepare Azure Sentinel to forward Incidents to Event Hub

  • Configure Splunk to consume Azure Sentinel Incidents from Azure Event Hub

  • Using Azure Sentinel Incidents in Splunk


 


Onboarding Azure Sentinel


Onboarding Azure Sentinel is not part of this blog post. However, required guidance can be found here.


 


Register an Application in Azure AD


The Azure AD app is required later to be used as a service principal for the Splunk Add-on for Microsoft Cloud Services app.


 


To register an app in Azure AD open the Azure Portal and navigate to Azure Active Directory > App Registrations > New Registration. Fill the Name and click Register.


 


Screenshot 2021-04-29 162548.png


 


Click Certificates & secrets to create a secret for the Service Principal. Click New client secret and make note of the secret value.


 


Screenshot 2021-04-29 162742.png


 


For the configuration of Splunk Add-on for Microsoft Cloud Services app, make a note of following settings:



  • The Azure AD Display Name

  • The Azure AD Application ID

  • The Azure AD Application Secret

  • The Tenant ID


 


Create an Azure Event Hub Namespace


As the next step, create an Azure Event Hub Namespace. You can use an existing one; however, for this blog post I decided to create a new one.


 


To create an Azure Event Hub Namespace open the Azure Portal, and navigate to Event Hubs > New. Define a Name for the Namespace, select the Pricing Tier, Throughput Units and click Review + create.


 


Screenshot 2021-04-29 162948.png


 


Review the configuration and click Create.


 


Screenshot 2021-04-29 163041.png


 


Once the Azure Event Hub Namespace is created click Go to resource to follow the next steps.


 


Screenshot 2021-04-29 163135.png


 


Click Event Hubs, then + Event Hub, to create an Azure Event Hub within the Azure Event Hub Namespace.


 


Screenshot 2021-04-29 163234.png


 


Define a Name for the Azure Event Hub, configure the Partition Count, Message Retention and click Create.


 


Screenshot 2021-04-29 163340.png


 


Navigate to Access control (IAM) and click Role assignments. Click + Add to add the Azure AD Service Principal created before, delegate it the Azure Event Hubs Data Receiver role, and click Save.


 


Picture2.png


 


For the configuration of Splunk Add-on for Microsoft Cloud Services app, make a note of following settings:



  • The Azure Event Hub Namespace Host Name

  • The Azure Event Hub Name


 


Prepare Azure Sentinel to forward Incidents to Event Hub


To forward Azure Sentinel Incidents to Azure Event Hub, you first need to configure an Azure Logic App, and then an Automation Rule in Azure Sentinel to trigger the playbook for any Incident in Azure Sentinel.


 


For my scenario I configured an Azure Logic App as shown below:


 


Screenshot 2021-04-29 163523.png


 


Start with the Azure Sentinel trigger When Azure Sentinel Incident Creation Rule was Triggered. Parse the output for later usage. For the Azure Event Hub connection, first define the connection to Azure Event Hub and select the Event Hub name. Define JSON content to send selected fields from an Azure Sentinel Incident to Azure Event Hub. In my case, I want to forward the fields Title, Severity, ProviderName, and IncidentURL to Azure Event Hub.


 


You can also send the full Body from the Parse JSON output to forward all attributes of an Azure Sentinel Incident.


 


Screenshot 2021-04-29 163623.png


 


Save the Azure Logic App and navigate to Azure Sentinel > Automation. From here you can create an Automation rule to trigger the Azure Logic App, created in previous step.


 


Click + Create and select Add new rule.


 


Screenshot 2021-04-29 163757.png


 


Define a Name for the Automation rule and define the Conditions. As I want to trigger the Azure Logic App for any Analytics rule in Azure Sentinel, I left the Condition as is (“all”); when “all rules” is selected, you can still choose specific rules to include or exclude. Select Run Playbook as the Action, choose the Azure Logic App created before, and click Apply.


 


Picture3.png


 


Once the configuration is completed, you can review the Automation rule in Automation page.


 


Configure Splunk to consume Azure Sentinel Incidents from Azure Event Hub


 


To ingest Azure Sentinel Incidents forwarded to Azure Event Hub, you need to install the Splunk Add-on for Microsoft Cloud Services app.


 


For the installation, open the Splunk portal and navigate to Apps > Find More Apps. From the dashboard, find the Splunk Add-on for Microsoft Cloud Services app and click Install.


 


Picture4.png


 


Once installed, navigate to the Splunk Add-on for Microsoft Cloud Services app > Azure App Account to add the Azure AD Service Principal, using the details noted in the previous step. Click Add, define a Name for the Azure App Account, add the Client ID, Client Secret, and Tenant ID, and choose Azure Public Cloud as the Account Class Type. Click Update to save and close the configuration.


 


Picture5.png


 


Now navigate to Inputs within the Splunk Add-on for Microsoft Cloud Services app and select Azure Event Hub in Create New Input selection.


 


Picture6.png


 


Define a Name for the Azure Event Hub input, select the Azure App Account created before, define the Event Hub Namespace (FQDN) and Event Hub Name, leave the other settings at their defaults, and click Update to save and close the configuration.


 


Picture7.png


 


Using Azure Sentinel Incidents in Splunk


 


Once the ingestion is processed, you can query the data by using sourcetype=”mscs:azure:eventhub” in search field.


 


Picture8.png


 


Summary


 


We just walked through the process of how to implement Azure Sentinel in Side-by-Side with Splunk by using the Azure Event Hub.


 


Stay tuned for more use cases in our Blog channel!


 


Thank you for reading!


 


Many thanks to Clive Watson for brainstorming and ideas for the content.

[Guest Blog] Touching Light: Making Music in Mixed Reality


This article is contributed. See the original author and article here.

This blog is written by Ian Riley, an inspiring musician, as a part of the Humans of Mixed Reality series. He shares his experience in music and technology, which led him to developing music in mixed reality. 


 


touching light cover.png


Touching Light is an original musical work for Percussionist and Mixed Reality Environment that explores the border areas between the physical world that we see around us, and the worlds of infinite possibility that each of us holds in our imagination.


 


 


“A dream we dream together is called reality.”
          – Alex Kipman at the Microsoft Ignite Keynote, 2021

Mixed Reality, fundamentally, asks us to see the world differently, something that is so akin to the ways that as performers, we ask our audiences not just to hear, but to listen. By drawing the attention of those around us to something that we believe to be compelling, and even more when we can share something that we have had a hand in creating, we access a unique moment, a shared imaginative space and, in my experience, this is just the sort of thing that users of Mixed Reality are hoping to find.

“My dad’s a computer programmer.” I usually lead with this as it seems to put folks at ease when they contact me, hoping that there is some ‘secret’ for how I, someone with a doctorate in music, not computer science, learned to work with Mixed Reality. Yet, while his influence has certainly been a continual inspiration to me, it was in fact my mother’s encouragement to pursue training in the arts that positioned me to begin developing Touching Light. Despite its deep connectedness to technology, Touching Light is first and foremost a musical MR application.


 


Music and Technology


It was in pursuit of my master’s degree that I first became deeply interested in music technology. I was fascinated by the sounds that electronic instruments could create, and that curiosity would eventually lead me to perform an all percussion and live electronics final recital during my first graduate degree. This sort of recital was a first for the small college that I was attending and, though I was unaware of this at the time, something that is still uncommon in the world of contemporary percussion. Those experiences would eventually lead me to pursue a DMA in Percussion Performance at West Virginia University with a desire to continue to explore and innovate with percussion and live electronics.


 


When I first started my DMA, I was aware of the work that Microsoft was doing with the HoloLens 1 (introduced in 2016), but it wasn’t until my wife and I moved to Morgantown, West Virginia that I saw the first marketing for the Microsoft HoloLens 2 on February 24th, 2019. I was amazed. Watching it again today still makes me smile, but I guess that’s good marketing for you! As I continued my studies at WVU, I kept thinking about that video, about the HoloLens 2, and about Mixed Reality. What seemed like a pipe dream in February, making music in Mixed Reality, would become a real possibility in my mind in November of that same year.


 


Look toward the future – stop thinking about what is cutting edge right now and to start thinking about the cutting edge of the cutting edge; because that’s where we’re going to need people to do work.
          – Dr. Norman Weinberg, at PASIC 2019

And I knew that the future was Mixed Reality.


 


simplicity screenshot.png

 Playing vibraphone while using a holographic audio mixer from Touching Light

 


Preparing for HoloLens 2


Sometimes it is the mere fact that you know what you don’t know that can provide the clearest path forward. Soon after the reveal of the HoloLens 2 in early 2019, the first seeds of what would eventually become Touching Light began to take root. At the time, while I had gained some minimal computer programming experience in high school (Java, and some HTML), since beginning to study music in college I had had little time or reason to engage with the ‘coding’ side of technology apart from some basic formatting for websites.


 


Knowing that the HoloLens 2 would likely run on something like C# or Visual Basic, I began thinking about other ways that I could engage with code-based music technology and would eventually teach myself how to build rudimentary circuits to trigger lighting and audio effects. Concurrent to this work, I also more fully invested myself into learning about audio recording and engineering, recording and editing my own performance videos from recitals and other concerts. Yet for all this experience, I still didn’t know how to program the HoloLens 2.


 


Learning Mixed Reality


When the first news of the global coronavirus pandemic entered the public awareness in the United States, it was met by a mixture of genuine concern, reasonable skepticism, and in some cases, outright dismissal. Living in West Virginia, I didn’t really feel the scope of the pandemic hit home until the University received email correspondence from the university president outlining the realities of campus closures and the transition to online delivery for the remainder of the semester, as the university endeavored to minimize the risk to the WVU community in uncertain times. Faced with what seemed at the time to be an indefinite lockdown, I found myself able to do what anyone would do with a sudden abundance of free time… learn how to code for Mixed Reality!


 


Over the course of the next several months, particularly during the summer of 2020, through a series of free tutorials, I learned the basics of 3-D modeling using a program called Blender, a modeling engine that is similar in many ways to the sort of interface I would eventually work with in Unity. Upon ordering a HoloLens 2 from Microsoft in early July, I quickly transitioned to Unity while familiarizing myself with the sorts of gestures and interactions that drive the HoloLens 2 holographic interface.


 


With all the components finally in hand, then began the work of writing, rehearsing, and performing Touching Light. Core to the performative practice of music, and particularly to that of the percussionist, the same sorts of interactions that I already employed as a performer would serve as the conceptual framework from which the three ‘dimensions of translucence’ would be derived. These dimensions (modeled after the three coordinate dimensions in physical space) would serve to ground my creative work in the sorts of real decisions that I already knew how to make because of my work with percussion.


 


soliloquy screenshot.png

Improvising on a marimba in response to a rotating carousel of landscapes 

 


Developing Music in Mixed Reality


I knew that I wanted Touching Light to be mobile. The promise of the HoloLens 2, and Mixed Reality in general, is that there are ‘no strings attached;’ if you wear this device, that is all you need to enter a Mixed Reality environment. I intentionally connected that idea of mobility to the sorts of interactions and environments that the user engages throughout the work. Even Soliloquy, the second movement of Touching Light which features a large carousel of static images, does not extend far beyond the anticipated ‘near-field’ (that which is within reach) that a percussionist will be used to engaging with. Everything in Touching Light, whether virtual or physical, follows the design ethos of ‘always being within reach.’


 


The unique opportunity to engage music-making and Mixed Reality is not something that I take lightly; what began as a pipe dream just over a year ago has had a significant impact on the ways that I engage with both music and technology. I was pleasantly surprised to discover that Mixed Reality is a profoundly creative medium, and as such, engages easily with the process of music-making. From the deeply satisfying manipulation of a standing wave through the miniscule gestures of a rotating hand, to the shocking immersion of a massive holographic carousel slowly rotating around you while you perform, there is something much more connective about the spatial interactions presented by MR than the limitations of peripherals like a mouse and keyboard to control those same musical and visual elements.


 


synecdoche screenshot.png

Exploring tuned Thai gongs while manipulating spatialized virtual instruments 


 

Making Music in Mixed Reality (How to Get Started, and Why You Should)


Already, so much of what we do as musicians is, within the context of society at large, a niche endeavor; for the percussionist, these degrees of separation can seem even more severe. But in the same ways that we as artists commit ourselves to the craft of music, and the practice of music-making, engaging with MR has only served to deepen those sorts of commitments for me.


 


For Musicians or (“Performers”)


For those individuals who are interested in the musical side of Mixed Reality, the first step is to get your hands on a platform. Touching Light is obviously designed with the Microsoft HoloLens 2 in mind, but similar functionality is available through any number of other VR headsets. Once you have a platform, you will need to decide what you will perform. If you are working with the Microsoft HoloLens 2, a great place to start is with Touching Light! You can download the complete Unity file package here. Follow the instructions from the Microsoft Mixed Reality Documentation, beginning at “1. Build the Unity Project.” Once you have deployed the application to your HoloLens 2, load up the application, and explore!


 


One of the most profound discoveries that I have made while working with this technology is just how musical it can be. There is something about engaging with technology within the Mixed Reality volume, about ‘spatial computing,’ that seems intuitive and artistic. This simple fact has even more deeply convinced me that music-making in Mixed Reality is not just an interesting possibility, but a deeply meaningful inevitability.


 


For Programmers (or “Composers”)


For those individuals who may be more interested in the nuts-and-bolts of developing musical applications for Mixed Reality, the first step is to familiarize yourself with a compiler. If you are interested in programming for the Microsoft HoloLens 2, the de facto solution at present is the Unity Development Engine, though support for other compilers is becoming increasingly available. You can download the Unity Hub for free from their website, and then following the instructions in the Microsoft Mixed Reality Documentation, beginning at “1. Introduction to the MRTK tutorials,” you can begin to develop your first Mixed Reality application.
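
To give a flavor of what a musical MRTK interaction can look like in practice, here is a minimal sketch of a Unity script that plays a sound when a hologram is touched with an articulated hand. It assumes MRTK 2.x (the toolkit the tutorials above are built on); the class name TouchChime and the assigned AudioSource are illustrative placeholders, not code taken from Touching Light.

using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Attach to a GameObject that also has a Collider and an MRTK
// NearInteractionTouchable component so hand tracking can "touch" it.
public class TouchChime : MonoBehaviour, IMixedRealityTouchHandler
{
    // Assign any short sample (a vibraphone note, for example) in the Inspector.
    [SerializeField] private AudioSource chime;

    public void OnTouchStarted(HandTrackingInputEventData eventData)
    {
        // Trigger the sound the moment the hand contacts the hologram.
        if (chime != null)
        {
            chime.Play();
        }
    }

    public void OnTouchUpdated(HandTrackingInputEventData eventData) { }

    public void OnTouchCompleted(HandTrackingInputEventData eventData) { }
}

From a design perspective, a sketch like this mirrors the ‘always being within reach’ ethos described above: the interaction is driven by direct, near-field touch rather than a controller or other peripheral.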


 


I would strongly advise that, once you get a handle on the basic functionality of the compiler and complete some of the beginning MRTK tutorials, you take some time to consider what sorts of functionality you would like your application to demonstrate, then connect with the Microsoft MR community (via Slack or the Microsoft MR Tech Community forums) and find others who may be able to answer your questions, and even help you with your project design.


 


Throughout the development process of Touching Light, I was surprised not only by how easy it was to onboard myself to Mixed Reality development by using the MRTK, but also by how friendly and helpful the then-current MR development community was. Whenever I had a question, or was struggling with some element of implementation, I would quickly be directed to the relevant documentation, YouTube video, or other resource that very often addressed the exact issue I was having, without ever needing to post snippets of code or consult more directly with someone on the project. As a bonus, I was also able to connect with a handful of individuals who had a particular interest in developing creative applications for the HoloLens 2.


 


Touching Light


I had the distinct opportunity to present Touching Light in a public recital on Saturday, May 1st, 2021. 

 


Only the beginning


Touching Light is only the beginning. It is my sincere hope that this project will serve to orient, assist, and inspire musicians, artists, and audiences alike as we continue to navigate an increasingly digital and virtual existence. Perhaps more than at any other time in history, compounded by the extraordinary circumstances surrounding global health and the demands of responding to them, we have been forced to think differently about technology. For those of us who found ourselves suddenly unable to engage in live musical performance, whether as artists or as audiences, it is my conviction that mediums like Mixed Reality will only become more essential to exploring ‘liveness’ within digital and virtual spaces.


 


The work was designed during the global coronavirus pandemic of 2020-21 and it is my hope that Touching Light reminds each of us that, despite everything, we are never truly alone; there is a world beyond this one if we are only willing to reach out and touch it.


 


faculty photo 2.jpg


A photo with members of the WVU Percussion Faculty after the recital
[from left: Pf. Mark Reilly, Dr. Mike Vercelli, Ian Riley, and Pf. George Willis]


 


Resources for Making Music in Mixed Reality


Microsoft HoloLens 2


Unity Hub


Microsoft MRTK & MR Tutorials


HoloDevelopers Slack Channel


Microsoft MR Tech Community Forums


Touching Light Source Code


ianrileypercussion.com 


Riley, Ian T. “Touching Light: A Framework for the Facilitation of Music-Making in Mixed Reality.” West Virginia University, West Virginia University Press, 2021.


Meet the 2021 Imagine Cup World Championship judges

Meet the 2021 Imagine Cup World Championship judges

This article is contributed. See the original author and article here.

The stage is set for the 19th annual Imagine Cup World Championship, taking place during Microsoft Build’s digital experience on May 25. Four finalist teams from across the world are bringing their innovations for impact to showcase globally. Focused on four social good categories – Earth, Education, Healthcare, and Lifestyle – their ideas encompass the Imagine Cup’s mission to empower every student to apply technology to solve issues in their local and global communities.  


 


In the 2021 competition, students reimagined a future through projects guided by accessibility, sustainability, inclusion, equality, and passion. Submitted solutions covered a variety of current issues, including a 3D sign-language animation, a virtual game to combat social isolation, an early detection platform for Parkinson’s Disease, an intelligent beekeeping system, and more.   


 


On May 25, our four finalists will present their innovations for the chance to take home USD75,000 and mentorship with Microsoft CEO, Satya Nadella. A panel of expert World Championship judges will assess each project. With combined industry and personal experience in diversity leadership, startups, founding businesses, and applying tech for social impact, our judges will apply their knowledge to evaluate the most inclusive and original solution with the potential to make a global difference.  


 


Imagine Cup judges dedicate their personal time and experience to help empower the next generation of developers. We’ve been fortunate to have a diverse panel of industry experts from around the world leading up to the World Championship, including Devendra Singh, CTO at PowerSchool; Kai Frazier, Founder at KaiXR; Neil Sebire, Chief Clinical Data Officer at HDR UK; Jason Goldberg, Chief Commerce Strategy Officer at Publicis; and more.  


 


For the first time in Imagine Cup history, we are pleased to introduce a panel of all women judges for the World Championship. During the competition, each team will pitch their project and demo their technology, followed by questions from judges. Who will take home the trophy? Join our hosts, Tiernan Madorno, Microsoft Business Program Manager, and Donovan Brown, Microsoft Principal Program Manager, and tune into the show on May 25 at 1:30pm PT to find out!  


 


Meet the World Championship judges 


 


Student_Developer_Team_0-1620401750658.jpeg


Jocelyn Jackson – National Society of Black Engineers National Chair, 2019-2021 


 


Student, researcher, leader, and change agent are just a few descriptors of Jocelyn Jackson. In her final term as the National Chair of the National Society of Black Engineers (NSBE), Jocelyn led NSBE through one of the hardest years it has faced. Through the COVID-19 pandemic as well as the racial injustice reckoning in America, Jocelyn stayed dedicated to using her leadership and voice to make a difference in the lives of other young Black men and women interested in engineering, and to make engineering a more diverse and accepting field for all. As National Chair, Jocelyn made massive strides toward NSBE’s current strategic goal, 10K by 2025 (graduating 10,000 Black engineers annually by 2025), by launching NSBE’s newest five-year strategic plan, ‘Game Change 2025.’ During her last three years at NSBE, Jocelyn managed and led the board of directors to ensure the best overall experience for NSBE stakeholders.  


 


Originally from Davenport, Iowa, Jackson received her bachelor’s and master’s degrees in mechanical engineering at Iowa State University, where her thesis research focused on the development of elastomeric coatings with reduced wear for ice-free applications. She is a second-year doctoral student in Engineering Education Research at the University of Michigan. Her current research works toward advancing equity in STEM and STEM entrepreneurship.  
 


Student_Developer_Team_1-1620401750663.jpeg


Enhao Li – Co-Founder and CEO of Female Founder School 
 


Enhao Li is the Co-Founder and CEO of Female Founder School. Enhao studied Economics at Harvard and in a former life was an investment banker for fast-growing technology companies, helping to take companies like Pandora public, but she was always itching to be a founder herself. It wasn’t until she finally took the leap and started her own company that she discovered just how unprepared she was; she did all of the wrong things and wasted time and money, only to finally learn that there was a way to do this. Since then, she has become obsessed with learning how to build successful companies from experienced founders and investors and sharing that knowledge with new founders. That is where Female Founder School came from – her own personal experiences and a mission to make it easier for anyone, especially women, to build successful companies of their own.  



Student_Developer_Team_2-1620401750670.jpeg


Toni Townes-Whitley – President, US Regulated Industries, Microsoft   
 
As president of US Regulated Industries at Microsoft, Toni Townes-Whitley leads the US sales strategy for driving digital transformation across customers and partners within the public sector and commercial regulated industries. With responsibility for a 4,900+ person sales organization and an approximately $15B P&L, she is one of the leading women at Microsoft and in the technology industry, with a track record of accelerating and sustaining profitable business and building high-performance teams.  


 


Her organization is responsible for executing on Microsoft’s industry strategy and go-to-market for both public sector and regulated industries in the United States, including Education, Financial Services, Government, and Healthcare. In addition to leading a sales organization, Townes-Whitley is helping to steer the company’s work to address systemic racial injustice – with efforts targeted both internally at representation and inclusion; as well as externally at leveraging technology to counter prevailing societal challenges. She has developed expertise and speaks publicly about “Civic Technology”, applying tech innovation for social impact.  


 


——————————– 


Don’t miss out on the chance to see which team will win it all at the Imagine Cup World Championship! Plus, as a student at Microsoft Build, you can enhance your own developer skills and prepare to create the next great project. Register at no cost for the Student Zone now.

Model Lifecycle Management for Azure Digital Twins

Model Lifecycle Management for Azure Digital Twins

This article is contributed. See the original author and article here.

Model Lifecycle Management for Azure Digital Twins


Author – Andy Cross (External), Director of Elastacloud Ltd, a UK based Cloud and Data consultancy
Azure MVP, Microsoft RD.


 


Ten years ago, my business partner Richard Conway and I founded Elastacloud to operate as a consultancy that truly understood the value of the Cloud around data, elasticity and scale; building next generation systems on top of Azure that are innovative and impactful. For the last year, I’ve been leading the build of a Digital Twin based IoT product we call Elastacloud Intelligent Spaces.


 


When working with Azure Digital Twins, customers often ask what the best practice is for managing DTDL Versions. At Elastacloud, we have been working with Azure Digital Twins for some time and I’d like to share the approach we developed to manage our DTDL model lifecycles from .NET 5.0.


 


What is DTDL?


If you are not familiar with Azure Digital Twins and DTDL, Azure Digital Twins is a PaaS service for modelling related data such as you’d often find in real world scenarios. It is a natural fit for IoT projects, since you can model how a sensor relates to a building, to a room, to a carbon intensity metric, to their enclosing electrical circuit, to an owner, to neighboring sensors and their respected metrics, owners, rooms and so on. It is a Graph Database, which focusses on the links that exist in the graph, giving it the edge over more commonly found relational databases, since it features the ability to rapidly and concisely traverse data by its links across a whole data set.
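
To illustrate the kind of link-based traversal this enables, here is a small sketch using the .NET SDK’s query API. The relationship name isLocatedIn and the twin id sensor-42 are made-up examples rather than part of any particular model, and client is assumed to be a DigitalTwinsClient like the one constructed later in this article.

using System;
using Azure.DigitalTwins.Core;

// Find the room twin that a given sensor twin is related to, by walking the
// graph along a (hypothetical) "isLocatedIn" relationship.
string query =
    "SELECT room FROM DIGITALTWINS sensor " +
    "JOIN room RELATED sensor.isLocatedIn " +
    "WHERE sensor.$dtId = 'sensor-42'";

await foreach (BasicDigitalTwin room in client.QueryAsync<BasicDigitalTwin>(query))
{
    Console.WriteLine($"sensor-42 is located in: {room.Id}");
}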


 


Azure Digital Twins adopts the idea that the nodes on the graph (known as Digital Twins) can be typed. This means that the store of Entities that holds the data are in defined sets of shapes that are defined in Digital Twin Definition Language. The definition language allows developers to constrain the data that an entity can store, in a list of contents. These are broadly synonymous with the notion of columns in a traditional relational database. Just like in other database systems, when a development team iterates on a data structure to add a property, edit or remove one, the development team has to consider how to keep the software and the data structure in sync.


 


What is the Version challenge?


Models in DTDL are stored in a JSON format, and therefore typically stored as a .json file. We store these in a git repository right alongside the code that interacts with the data shapes that they define.


 


The key question of the Version Challenge therefore is: “When I update my model definitions in my local dev environment, how do I automatically update the models that are available in Azure Digital Twin?”


 


There is one additional twist: when you want to use a model, for example to create a new digital twin, you have to know the version number of the model that you want to instantiate. This means your software also needs to be kept in sync with your models and with your deployment.


 


In order to keep track of all this, each Azure Digital Twin model has a model identifier. The structure of a Digital Twin Model Identifier (DTMI) is:


 

dtmi:[some:segmented:name];[version]

 


 


For example:


 

dtmi:com:elastacloud:intelligentspaces:room;168

 


 


Our solution then needs to solve these top-level issues, whilst being developer friendly, and fitting into best practice for deployments.


We might consider this ideal workflow:


A developer workflow that includes continuous deployment of DTDL models as described in the text.

Building Blocks


We want to be able to construct our approach to versioning without prejudicing our ability to use the fullness of ADT features. There are a few main options that present themselves to us:



  1. Hold the JSON representation of the DTDL on disk as a file

  2. Build the JSON representation from a software representation (for instance .NET class)


Both of these are valid cases. The JSON representation reflects the on-the-wire payload. The .NET class might give us the ability to later use this class to create instances of the DTDL defined Twin.


 


Considering this idea, we might consider something like the following:


 

{
  "@id": "dtmi:elastacloud:core:NamedTwin;1",
  "@type": "Interface",
  "contents": [
    {
      "@type": "Property",
      "displayName": {
        "en": "name",
        "es": "nombre"
      },
      "name": "name",
      "schema": "string",
      "writable": true
    }
  ],
  "description": {
    "en": "This is a Twin object that holds a name.",
    "es": "Este es un objeto Twin que contiene un nombre."
  },
  "displayName": {
    "en": "Named Twin Object",
    "es": "Objeto Twin con nombre"
  },
  "@context": "dtmi:dtdl:context;2"
}

 


 


We might then want to create a Plain Old CLR Object (POCO) representation:


 

public class NamedTwinModel
{
  public string name { get; set; }
}

 


 


While we are able to see that the Interface is in alignment with the DTDL definition of contents, it is not immediately apparent how we would manage displayName and globalisation concerns thereof within a POCO.


 


Note that from a purist’s perspective, a POCO should try to avoid attributes where possible, to boost readability. So a [DisplayName(“en”, “name”)] annotated approach is possible, but not ideal.


 


Furthermore, you’ll note that the DTDL wraps the contents which is the type definition, with a set of descriptors and globalization values. In order to achieve this, we might consider a wrapped generic POCO approach:


 

public class Globalisation {
   public string En { get; set; }
   public string Es { get; set; }
}
public class DtdlWrapper<TContents> {
    // The generic parameter carries the POCO that mirrors the DTDL "contents" section.
    public TContents Contents { get; set; }
    public Globalisation Description { get; set; }
}
...
var namedDtdl = new DtdlWrapper<NamedTwinModel>();
namedDtdl.Contents = new NamedTwinModel();
namedDtdl.Contents.name = "what should I put here?";

 


 


The problem we start to face when expressing the DTDL definitions themselves this way is that we are actually building a class hierarchy that is more akin to the Azure Digital Twin instances than to the DTDL definitions. As such, we would have to create instances and then use Reflection over them while ignoring their values. We could use default values or look the types up more directly, but the problem is the same: class definitions in .NET describe how you create instances, and they don’t translate directly to DTDL in an easy-to-understand way.


 


Thus, from our perspective, we want to make sure that our DTDL descriptions stay as native JSON, since there are aspects that are not naturally amenable to encapsulation in a Plain Old CLR Object (POCO). We will use our POCOs to represent instances of Azure Digital Twins, i.e. the data itself, and not the schema.


 


This means we store the DTDL in JSON format on disk. But this isn’t anywhere near the end of the story for versioning and .NET development.


 


We just learned that POCOs can represent instances of Digital Twins quite effectively. If we’re going to code with .NET, we will still need some kind of class to interact with in order to do CRUD operations on the Azure Digital Twin.


 


The building blocks are therefore:



  • Raw JSON held as a file

  • POCOs to describe instances of those DTDL defined classes



Versioning


Versioning models in DTDL is achieved in a DTMI using an integer value held in the identifier. From the DTDL v2 documentation:


In DTDL, interfaces are versioned by a single version number (positive integer) in the last segment of their identifier. The use of the version number is up to the model author. In some cases, when the model author is working closely with the code that implements and/or consumes the model, any number of changes from version to version may be acceptable. In other cases, when the model author is publishing an interface to be implemented by multiple devices or digital twins or consumed by multiple consumers, compatible changes may be appropriate.


 


Firstly, mapping POCOs to DTDL in the way we have discussed requires that we choose to actively validate against DTDL, passively validate, or not validate at all. Some options:



  • Active; we build a way to check whether a DTDL model exists in Azure Digital Twins on any CRUD activity, that the properties match in name and type

  • Passive; we do similarly to Active, but use JSON files as the validation target, and assume that the JSON files are in-line with the target database

  • None; we don’t validate, but instead let Azure Digital Twins raise an error if we get something wrong, and we react to that error.


In our approach, we want to be able to support either radical or compatible changes, but we will have to consider some additional factors brought in by .NET type constraints (a sketch of a passive check along these lines follows the list below):



  • if a DTDL interface changes types, the .NET POCO properties that exist must match its DTDL values

  • if a DTDL interface changes its named properties, the .NET POCO needs to be updated to reflect this

  • if a DTDL interface adds a new property, we need to decide whether it’s an error or not for the POCO to not have the property. This is a happy problem, as we’re roughly compatible even if we don’t add the property.

  • if the DTDL interface deletes a property, we need to decide whether create and update methods should simply omit that value at runtime.
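
As a concrete illustration of the passive option, here is a minimal sketch that compares a POCO against a DTDL JSON file held on disk, checking property names and a few primitive schema types. The class name, the schema-to-type map, and the file path convention are assumptions for illustration; this is not part of the Elastacloud tooling.

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;
using System.Text.Json;

public static class PassiveDtdlValidator
{
    // Maps the DTDL primitive schemas we use most often to their .NET counterparts.
    private static readonly Dictionary<string, Type> SchemaMap = new()
    {
        ["string"] = typeof(string),
        ["double"] = typeof(double),
        ["float"] = typeof(float),
        ["integer"] = typeof(int),
        ["long"] = typeof(long),
        ["boolean"] = typeof(bool)
    };

    public static void Validate<TPoco>(string dtdlJsonPath)
    {
        using var doc = JsonDocument.Parse(File.ReadAllText(dtdlJsonPath));
        var pocoProps = typeof(TPoco).GetProperties(BindingFlags.Public | BindingFlags.Instance);

        foreach (var content in doc.RootElement.GetProperty("contents").EnumerateArray())
        {
            // Only simple Properties are checked here; Relationships, Components and
            // complex schemas would need their own handling.
            if (content.GetProperty("@type").ValueKind != JsonValueKind.String ||
                content.GetProperty("@type").GetString() != "Property")
            {
                continue;
            }

            string name = content.GetProperty("name").GetString();
            string schema = content.GetProperty("schema").ValueKind == JsonValueKind.String
                ? content.GetProperty("schema").GetString()
                : null;

            var match = Array.Find(pocoProps,
                p => string.Equals(p.Name, name, StringComparison.OrdinalIgnoreCase));

            if (match is null)
            {
                // A missing POCO property is the "happy problem": warn rather than fail.
                Console.WriteLine($"Warning: POCO has no property for DTDL content '{name}'.");
                continue;
            }

            if (schema != null && SchemaMap.TryGetValue(schema, out var expected)
                && match.PropertyType != expected)
            {
                throw new InvalidOperationException(
                    $"Property '{name}' is {match.PropertyType.Name} in .NET but '{schema}' in DTDL.");
            }
        }
    }
}

In a test suite, a call such as PassiveDtdlValidator.Validate<NamedTwinModel>("Models/NamedTwin.json") could then run before any deployment stage.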


A workflow that shows the order of checking a Model Existence and the states that it may be in.

Applying Versioning

Once we have our DTDL prepared in JSON, we still need to get these models into Azure Digital Twins. We again have a few choices to make around how we want to handle versioning.


 


The absolute core of creating Azure Digital Twins DTDL models from a .NET perspective is to use the Azure.DigitalTwins.Core package available on NuGet. In short:


 

// You need to set up three variables first: tenantId, clientId, and adtInstanceUrl.
// This snippet uses the Azure.Identity and Azure.DigitalTwins.Core namespaces.
var credentials = new InteractiveBrowserCredential(tenantId, clientId);
DigitalTwinsClient client = new DigitalTwinsClient(new Uri(adtInstanceUrl), credentials);
await client.CreateModelsAsync(new string[] { "DTDL Model in JSON here..." });

 


 


That’s the core of creating those DTDL models. We could just load the JSON files directly from disk as strings and add them to the array passed to CreateModelsAsync; however, we have options to employ that might help us out in the future.


 


For example, we can get the existing models by calling client.GetModelsAsync. We can iterate over these models and check whether any of the new models we are about to create share an @id, including the version. If so, we can validate whether the contents are the same, and choose to throw an exception if they are not, if we are seeking to maintain a high level of compatibility.
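
A sketch of that id check might look like the following. It assumes the local model JSON has already been read from disk into strings, and the method name FindAlreadyDeployedAsync is an illustrative placeholder rather than part of the Elastacloud twinmigration tool.

using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;

public static class ModelValidation
{
    // Returns the ids of local models whose exact @id (name and version) is already deployed.
    public static async Task<List<string>> FindAlreadyDeployedAsync(
        DigitalTwinsClient client, IEnumerable<string> localModelJson)
    {
        // Collect the id of every model currently in the Azure Digital Twins instance.
        var deployedIds = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        await foreach (DigitalTwinsModelData deployed in client.GetModelsAsync())
        {
            deployedIds.Add(deployed.Id);
        }

        var clashes = new List<string>();
        foreach (string json in localModelJson)
        {
            using var doc = JsonDocument.Parse(json);
            string id = doc.RootElement.GetProperty("@id").GetString();

            // A clash on the full dtmi (including ";version") means we must either confirm
            // the contents are identical or bump the version before deploying.
            if (deployedIds.Contains(id))
            {
                clashes.Add(id);
            }
        }

        return clashes;
    }
}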


 


Should we find that a model exists for a previous version (i.e. our JSON file has a higher dtmi version), we can choose to decommission that model. This is a one-way operation, so we had better be careful to do it in a managed fashion. For instance, we might want to decommission a model only after it has been replaced for a period of time, so that we can support live updates to the system. If this is the case, we should be comfortable that all writers to the Azure Digital Twin have been upgraded.


 


When a model is decommissioned, new digital twins will no longer be able to be defined by this model. However, existing digital twins may continue to use this model. Once a model is decommissioned, it may not be recommissioned.


Should we choose to do that, once a new model version is created (say dtmi:elastacloud:core:NamedTwin;2), we might decommission the previous version:


 

await client.DecommissionModelAsync("dtmi:elastacloud:core:NamedTwin;1");

 


 


The key thought process around decommissioning relates to the choice you want to make around version compatibility with your code. The approach we take at Elastacloud is to make sure that the latest Git-held version of the DTDL model is available, while previous versions remain available for a period of time that we treat as an SLA, until we are sure that all consumers have been updated to the latest version; a sketch of such a pass follows the workflow below.

 


A strategy for decommissioning DTDL Models in Azure Digital Twins, shown as a workflow that checks an SLA
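
To make the SLA idea concrete, here is a sketch of a decommissioning pass that keeps the highest version of each model and decommissions superseded versions once a grace period has elapsed. The slaHasElapsed delegate is a placeholder for whatever SLA check your organisation uses; this is an illustration, not the twinmigration implementation.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;

public static class ModelDecommissioning
{
    // Decommissions every model that has been superseded by a higher version,
    // but only once the agreed SLA window has passed.
    public static async Task DecommissionSupersededAsync(
        DigitalTwinsClient client, Func<DigitalTwinsModelData, bool> slaHasElapsed)
    {
        var models = new List<DigitalTwinsModelData>();
        await foreach (var model in client.GetModelsAsync())
        {
            models.Add(model);
        }

        // Group by the dtmi without its ";version" suffix, e.g. "dtmi:elastacloud:core:NamedTwin".
        foreach (var versionGroup in models.GroupBy(m => m.Id.Split(';')[0]))
        {
            int latest = versionGroup.Max(m => int.Parse(m.Id.Split(';')[1]));

            foreach (var superseded in versionGroup.Where(m => int.Parse(m.Id.Split(';')[1]) < latest))
            {
                if (superseded.Decommissioned == true || !slaHasElapsed(superseded))
                {
                    continue; // Already decommissioned, or still inside the SLA window.
                }

                await client.DecommissionModelAsync(superseded.Id);
            }
        }
    }
}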

Other Considerations

Naming standards between .NET and JSON are different. We should name according to the conventions of the framework that hosts the code, and use serialization techniques to bridge the naming differences. For example, properties in .NET usually start with a capital letter, whereas in JSON they tend to start with lowercase.
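
As an example, a sketch of a POCO for the NamedTwin model above might use System.Text.Json attributes to keep a PascalCase .NET property aligned with the lowercase name defined in the DTDL contents; the class name NamedTwin here is illustrative.

using System.Text.Json.Serialization;

// A POCO representing an instance of dtmi:elastacloud:core:NamedTwin;1.
// The .NET property follows PascalCase, while the serialized JSON name
// matches the lowercase "name" defined in the DTDL contents.
public class NamedTwin
{
    [JsonPropertyName("name")]
    public string Name { get; set; }
}

An instance serialized this way lines up with the DTDL property name without compromising .NET naming conventions.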


 


DTDL includes a set of standard semantic types that can be applied to Telemetries and Properties. When a Telemetry or Property is annotated with one of these semantic types, the unit property must be an instance of the corresponding unit type, and the schema type must be a numeric type (double, float, integer, or long).


.NET Tooling Approach


So far we have a few key components that we have to build in order to hit our best practice goal.



  • A .NET application that deploys the models to the Azure Digital Twin instance. That understands versions of DTDL that are already deployed, and the versions held locally, and helps assert compatibility.

  • A .NET application that holds POCOs that can represent DTDL deployed to Azure Digital Twins and can help marshal data between .NET and Azure Digital Twins.


This helps us define two main categories of error conditions; deployment and runtime.


A tooling approach to deploying Azure Digital Twin DTDL model changes


CI/CD deployment


At Elastacloud we use our own `twinmigration` tool for managing this process. The tool is a dotnet global tool that we built and that provides features designed for CI/CD purposes.


 


Since a dotnet global tool is a convenient way of distributing software into pipelines, we add a task to our CI/CD pipeline that takes the latest version of JSON files from a git repo, and validates them against what is already deployed in an ADT instance.


Following the output of a validation stage, we might choose to also run a deploy stage. This performs the action of adding the models to an Azure Digital Twins instance.


 


Finally, we have a decommissioning step which causes “older” models to be made unavailable for creation, so that we can keep good data quality practices.


 


In Summary


For more information about what we’re doing with Azure Digital Twins, visit our website at Intelligent Spaces — Elastacloud, we’ll be updating it regularly with information on our approaches. We have some tools that are ready to go, such as NuGet Gallery | Elastacloud.TwinMigration that help you to do the things we’ve described here!


 


Thanks for reading. 

Gogo soars through industry contraction by switching to Azure AD

This article is contributed. See the original author and article here.

Hello! In today’s “Voice of the Customer” blog, Chris Szorc, Director of IT Engineering for Gogo, explains how the company cut costs and streamlined their identity and access management as the pandemic was grounding their airline partners, drying up revenue, and forcing thousands of employees to work remotely. By leveraging their existing Azure subscription, Chris and her IT team were able to migrate thousands of internal and external users to Microsoft Azure Active Directory for simplified, secure access across their enterprise.


 


Editor’s Note:


This story began in May 2020 when Gogo served both Commercial Aviation and Business Aviation. In December 2020, Gogo’s Commercial Aviation business was sold to Intelsat. As a result, the structure and business model has changed drastically for Gogo, which now has approximately 350 employees and is solely focused on serving Business Aviation. 


 


 


How to cut costs and simplify IAM during hard times


By Chris Szorc, Director of IT Engineering for Gogo


In 2020, Gogo was a provider of in-flight broadband internet services for commercial and business aircraft. We were based in Chicago, Illinois with 1,100 employees, and at the time we equipped more than 2,500 commercial and 6,600 business aircraft with onboard Wi-Fi services, including 2Ku, our latest in-flight satellite-based Wi-Fi technology.


 


As we all know, 2020 wasn’t a great year for the airline industry. Last May, the pandemic had drastically shrunk our revenue, forcing the company to cut costs wherever possible. A looming three-year renewal contract with Okta prompted my IT team to consider bringing all our identity and access management (IAM) under the Microsoft umbrella to cut costs and simplify access.


 


Favor security and simplicity


Pulling off a major migration to Microsoft Azure Active Directory (Azure AD)—when the IT team is shorthanded and working remotely—would be a challenge for anyone. For my team, the first consideration was security. We had to protect our PCI (payment card industry) status, as well as the custom apps that we create with our airline partners. We certify ourselves with ISO (International Organization for Standardization), and we pass our SOX (Sarbanes Oxley Act) audits every year. As it happened, Deloitte was reviewing us, so the industry certifications for Azure AD and Microsoft 365 helped maintain our security standing as well. We made sure to get the most from our Microsoft agreement—including all the security tools in the Microsoft Azure tool set.


 


We were already using on-premises Active Directory, but we wanted a hybrid cloud identity model for the seamless single sign-on (SSO) experience for our users and applications. We collaborate with a lot of airlines and contractors, so hybrid access fits our model. Like us, you might see migration as an opportunity to reduce the number of redundant apps in your user base. At Gogo, we went app by app, figuring out how people were using each of them, and we saw that Microsoft could cover data analytics among other business functions, as well as IAM.


 


We were able to further consolidate and simplify by adopting the full Microsoft 365 suite of productivity tools. Microsoft Teams, in particular, was a hit with users. People were working from home because of the pandemic, and discovered they preferred Teams over Skype. Once our people started asking for it, that gave us the green light to roll out Teams companywide as a unified platform for online meetings, document sharing, and more.


 


Make use of vendor support


Times were tough enough already; we couldn’t allow migrating our multifactor authentication from Okta to Azure AD to disrupt workflow. We knew we couldn’t overwhelm our help desk with calls and tickets, so we chose to make the migration in waves of 100 users at a time.


My advice—take advantage of all the technical support that’s available. After all, it’s not as if you’ll have a complete test environment to train yourself. You have your production identity, domain, and your services—multifactor authentication, conditional access, sign in—and if you don’t do it right, you’re severely impacting people.


 


No matter how qualified your IT team is, there’s a wealth of knowledge that a good vendor can provide. Microsoft FastTrack was included with our Azure AD subscription. We also used Netrix for guidance on bringing the migration in on time. FastTrack helped us know where to put people and how to organize—their entire mission is built around helping you complete a successful migration.


 


FastTrack also helped us untangle previous IAM implementations that were set up before my team was hired. They showed us where Okta Verify could be replaced with the latest best practices in multifactor authentication, enabling us to deliver simplified, up-to-date security with Azure AD. That’s the kind of issue you rarely anticipate during a migration, and it’s one where the right support proves invaluable.


 


Ensure maximum ROI


At Gogo, we’re already enjoying the advantages that come with unifying our IAM for simplicity and maximum return on investment (ROI). Since adopting Teams and other Microsoft 365 apps, we’ve been able to drop other services like Box and Okta—that saves the company money.


 


We’re doing federated sharing with Microsoft Exchange Online, sharing calendars with partner tenants, which has been great for planning meetings. We do entitlement management to set up catalog access packages with expiration policies, to stage workflow and access reviews for vendors and collaborators, rather than give them identities in our Gogo directory.


 


Our IT team seized on migration as an opportunity to implement Azure AD’s self-service password reset feature, which allows users to reset their password without involving the help desk. The decision to simplify your IAM solution will likely pay off in more ways than you can anticipate. We accomplished more than just a migration from Okta to Azure AD; Microsoft helped us streamline our IT services and provided us with direction for future improvements.


 


Learn more


I hope Gogo’s story of undertaking a daunting migration during tough times serves as inspiration for your organization. To learn more about our customers’ experiences, take a look at the other stories in the “Voice of the Customer” series.


 


 


Learn more about Microsoft identity:


Exim Releases Security Update

This article is contributed. See the original author and article here.

Exim has released a security update to address multiple vulnerabilities in Exim versions prior to 4.94.2. A remote attacker could exploit some of these vulnerabilities to take control of an affected system.

CISA encourages users and administrators to review the Exim 4.94.2 update page and apply the necessary update. CISA also encourages users and administrators to review Center for Internet Security Advisory 2021-064 for more information.