


 


Mixed reality enables immersive capabilities that merge the digital and the physical worlds. With Azure mixed reality services, developers can create experiences that understand people, places, and things in their environment. These services consist of Azure Spatial Anchors, Azure Remote Rendering, and Azure Object Anchors. Azure Spatial Anchors is generally available and enables multi-user, multi-device, spatially aware mixed reality experiences. At Ignite 2021, we announced the general availability of Azure Remote Rendering and the preview of Azure Object Anchors.


 


Azure Remote Rendering is now generally available


Mixed reality use cases require users to view and interact with 3D models. Traditionally, models are decimated and simplified so they can be rendered on target devices such as mixed reality headsets and mobile devices, at the cost of important detail that is necessary for design reviews and business decisions. Azure Remote Rendering enables developers to render high-quality, interactive 3D content and stream it to a HoloLens 2 device in real time. Remote Rendering leverages the power of the Azure cloud to visualize even the most complex models without decimation: developers can render models of up to 20 million polygons with Standard Remote Rendering and a billion or more polygons with Premium Remote Rendering.




How does the service work?


Azure Remote Rendering moves the rendering workload to high-end GPUs in the cloud. A cloud-hosted graphics engine renders each frame, encodes it as a video stream, and streams it to the target mixed reality device. In addition, Remote Rendering supports hybrid rendering, in which locally rendered content such as menus is automatically combined with remote objects for a seamless, immersive experience.
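To make the flow concrete, here is a minimal sketch of the session lifecycle from a client's perspective: create a cloud rendering session, wait for it to become ready, then hand the connection details to the device client, which receives the encoded video stream. The endpoint paths, request fields, and status values below are illustrative assumptions rather than the documented API contract; see the Remote Rendering documentation for the exact REST surface.

```python
# Minimal sketch of the Remote Rendering session lifecycle over REST.
# The endpoint shapes, field names, and auth flow below are assumptions
# for illustration only; consult the Azure Remote Rendering docs for
# the exact contract.
import time
import requests

REGION = "westus2"                      # one of the supported regions
ACCOUNT_ID = "<your-account-id>"        # from your Remote Rendering resource
SESSION_ID = "design-review-01"
TOKEN = "<sts-access-token>"            # obtained from the Mixed Reality STS

BASE = f"https://remoterendering.{REGION}.mixedreality.azure.com"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Ask the service to spin up a cloud rendering VM (a "session").
requests.put(
    f"{BASE}/accounts/{ACCOUNT_ID}/sessions/{SESSION_ID}",
    headers=HEADERS,
    json={"maxLeaseTimeMinutes": 30, "size": "standard"},
).raise_for_status()

# 2. Poll until the session is ready, then hand its hostname to the
#    device client, which connects and receives the rendered stream.
while True:
    session = requests.get(
        f"{BASE}/accounts/{ACCOUNT_ID}/sessions/{SESSION_ID}",
        headers=HEADERS,
    ).json()
    if session.get("status") == "Ready":
        print("connect device client to:", session.get("hostname"))
        break
    time.sleep(10)
```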


 


Get started today with this learning module. For more information, review our documentation.


 


Customer adoption


It has been great to see how customers and partners are leveraging Remote Rendering to enable 3D visualizations with high fidelity. Bentley Systems has reimagined the bridge inspection process using HoloLens and Remote Rendering. Every detail matters when performing structural inspections. Dan Vogen summarizes it well:  



“We need to display, and interact with, hundreds of millions of polygons in reality meshes. Azure Remote Rendering helps us do this and more – it’s like sci-fi interaction with infrastructure assets. The resolution, detail, and accuracy ‘teleport’ users to an asset. This cuts time spent, lowers costs, improves safety, and reduces traffic disruptions thanks to our HoloLens-based immersive inspection processes.”


Dan Vogen, Vice President, Transportation Asset Management, Bentley Systems


 



 


HoloLab is building an application to enable visualization of architectural layouts at scale and with precision using Azure Remote Rendering.


 


“We’re excited about the innovation and efficiency of Azure Remote Rendering. We work with enterprises in manufacturing, architecture, engineering, and construction. They use Remote Rendering for visualization and sharing of high-polygon 3D CAD data. The BIM construction data used to be so large that you could only display one floor at a time. Now, thanks to Remote Rendering, HoloLens shows an entire building at once.”


Kaoru Nakamura, Co-Founder & CEO, HoloLab Inc.


 


To provide our global partners and customers a choice of regions for their Remote Rendering solutions, we’ve expanded the availability of Remote Rendering to 10 regions: Australia East, East US, East US 2, Japan East, North Europe, South Central US, Southeast Asia, UK South, West Europe, and West US 2.


 


Azure Object Anchors is now in public preview


Azure Object Anchors is an innovation born from customer and partner feedback. Adoption of mixed reality technologies has accelerated, and customers want an easy way to align holograms to physical objects. Enter Azure Object Anchors: it enables developers to automatically align and anchor 3D content to objects in the physical world, eliminating the need for markers and manual holographic alignment.


 


During the private preview, we worked closely with Toyota, which is leveraging Object Anchors so that its technicians can automatically align holographic wiring content with car engine wiring. Technicians can simply walk up, scan, and begin work, which saves time and reduces errors.


 


“Azure Object Anchors enables our technicians to service vehicles more quickly and accurately thanks to markerless and dynamic 3D model alignment. It has removed our need for QR codes, and eliminated the risk of error from manual model alignment, thus making our maintenance procedures more efficient.”


Koichi Kayano, Project Manager, Technical Service Division, Toyota


 


We’re working with other partners as well as internal teams, including Dynamics 365 Guides, which has integrated Object Anchors in private preview and is hearing positive feedback from customers. Whether you’re building a custom mixed reality application or an industry-specific solution, you can use Object Anchors in scenarios such as task guidance, step-by-step training, and virtual inspections.


 


The early feedback from our customers has led to many improvements in our latest SDK, documentation, and samples. We’re excited to expand this broadly with Azure Object Anchors entering public preview.


 


How does this service work?


The Azure Object Anchors workflow encompasses two main steps: training and runtime.


 


Training


The Object Anchors service converts 3D assets into AI models that enable object-aware mixed reality experiences. For the physical object to which you’d like to align content, you need a 3D model in one of the supported formats (glTF, GLB, OBJ, FBX, PLY). The service ingests the model and runs it through the training pipeline; once training completes, it outputs the object model you need to perform detection and alignment in the runtime.
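As a sketch of what training looks like from a client's point of view, assuming a REST-style surface: submit a training job that points at the source 3D model along with real-world metadata such as the unit of measurement and gravity direction, then poll until the trained object model is ready to download. The host name, paths, and field names here are hypothetical placeholders, not the documented API.

```python
# Hypothetical sketch of submitting a 3D model to the Object Anchors
# training pipeline. Endpoint paths, job fields, and the polling shape
# are illustrative assumptions, not the documented API surface.
import time
import requests

REGION = "eastus2"
ACCOUNT_ID = "<your-object-anchors-account-id>"
TOKEN = "<sts-access-token>"            # Mixed Reality STS token, as above

BASE = f"https://objectanchors.{REGION}.mixedreality.azure.com"  # assumed host
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Submit a training job pointing at the source asset (e.g. an .obj
#    file already uploaded to blob storage) plus real-world metadata
#    such as the asset's unit of measurement and gravity direction.
job = requests.post(
    f"{BASE}/accounts/{ACCOUNT_ID}/jobs",
    headers=HEADERS,
    json={
        "inputAssetUri": "https://<storage>/engine.obj",
        "assetDimensionUnit": "meters",
        "gravity": [0.0, -1.0, 0.0],
    },
).json()

# 2. Poll the job; when it succeeds, download the trained object model
#    that the device runtime loads for detection and alignment.
while True:
    status = requests.get(
        f"{BASE}/accounts/{ACCOUNT_ID}/jobs/{job['id']}",
        headers=HEADERS,
    ).json()
    if status.get("state") == "Succeeded":
        print("trained model:", status.get("outputModelUri"))
        break
    time.sleep(15)
```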


 


Runtime


To create a runtime detection experience (a sketch of these steps follows the list):



  1. Start a session

  2. Load object model (output from training)

  3. Set search area

  4. Detect physical object and align content

  5. Lock alignment

  6. Render 3D content
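The runtime SDK is consumed on the device (for example from Unity), so the pseudocode-style sketch below uses hypothetical names (`start_session`, `load_model`, `create_query`, and so on) purely to make the six steps above concrete; it is not the real SDK surface.

```python
# Pseudocode sketch of the six runtime steps. The real runtime SDK is
# consumed from Unity/C# on the device; the names below are hypothetical
# stand-ins to make the flow concrete.

def detect_and_render(sdk, model_path, search_area, scene):
    # 1. Start a session on the device.
    session = sdk.start_session()

    # 2. Load the object model produced by the training pipeline.
    model = session.load_model(model_path)

    # 3. Constrain the search to a region around the user to speed up
    #    detection.
    query = session.create_query(model, search_area)

    # 4. Detect the physical object; the result carries the pose that
    #    aligns the 3D model with its real-world counterpart.
    detection = query.detect()

    # 5. Lock the alignment so the hologram stays fixed to the object.
    anchor = detection.lock_pose()

    # 6. Render the 3D content at the anchored pose.
    scene.render(model, anchor.pose)
```

In a real application, detection typically runs continuously, refining the pose estimate as the user moves around the object before the alignment is locked.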


It’s easy to get started with the Object Anchors sample applications, which leverage Unity and MRTK. You can also write custom code against the runtime SDK.


 


Learn more about Object Anchors here. To get started with existing 3D models, see this.


 


Azure mixed reality services have momentum. In only two years, our customers and partners have built immersive mixed reality solutions to enable technicians, educators, students, and professionals to achieve more. We can’t wait to see what you build next. Get started today with Azure mixed reality services.


 


Credits: Archana Iyer, Jonathan Lyons, Armin Rezaiean-Asel


 
