
We just released new features and capabilities for Microsoft Live Video Analytics (LVA). If you are considering LVA on a Windows IoT device, an Azure Percept DK (dev kit), or other edge devices powered by AI acceleration from NVIDIA and Intel, you will want to learn more. Organizations can now drive the next wave of business automation with AI-powered, real-time analytic insights from their own video streams.


 


In line with Microsoft’s vision of simplifying AI and IoT at the edge from silicon to service, the new features and capabilities we announced at Microsoft Ignite 2021 let you deploy LVA seamlessly on Windows IoT devices, so you can build intelligent video analytics systems that capitalize on your Windows expertise and investments. We have also ensured that LVA runs on the new family of Azure Percept devices and works across partner platforms from Intel and NVIDIA.


 


With our focus on ensuring a consistent experience for video analytics solution developers, irrespective of the OS and the underlying hardware acceleration platform, here are the new capabilities that help complete your end-to-end scenarios:


 



  • Deploy LVA with Azure IoT Edge for Linux on Windows (EFLOW): Leverage LVA to build and deploy video analytics workflows on Windows IoT devices with EFLOW.

  • LVA with Azure Percept: At Ignite 2021, we announced Azure Percept, an end-to-end platform for creating edge AI solutions in minutes with hardware accelerators built to integrate seamlessly with Azure AI and Azure IoT services. LVA can be leveraged on Percept to record and stream videos from edge to cloud to help you deliver business insights in real time.

  • Intel OpenVINO DL Streamer – Edge AI Extension with LVA: With the latest release of Intel’s OpenVINO DL Streamer – Edge AI Extension, you can use it alongside LVA to detect, classify, and track multiple object classes (e.g., person, vehicle, bike) at high efficiency on a variety of Intel hardware architectures.

  • NVIDIA DeepStream – AI Skills and AI Acceleration for LVA: With the latest DeepStream release (5.1), you can now deploy LVA across multiple cameras for object detection, classification, and tracking on NVIDIA GPUs.


Since the preview launch of the Live Video Analytics (LVA) platform in June 2020, we have evolved product capabilities and strengthened the platform to meet partner and customer needs, most recently with the version 2.0 refresh announced in February 2021. Additionally, we have a set of exciting capabilities that are not public yet but that we are getting ready to announce at Build 2021. Please reach out to us (amshelp@microsoft.com) to learn more.


 


Leverage Windows edge devices as LVA processors


 


As a customer in industries like manufacturing, retail, and public safety, you may have many Windows devices that serve as IoT sensors and processing devices. Alongside Windows IoT, there is also a growing trend toward Linux-based containerized microservices backed by a cloud-based ISV ecosystem, especially for real-time video analytics. Many customers we talk to want to leverage their existing assets, be it cameras, Windows IoT devices, or other IoT sensors, to derive real-time business intelligence by applying AI to video.


 


Using LVA on EFLOW, you get the best of both worlds: a Windows IoT device that leverages your existing Windows tooling, infrastructure investments, and IT knowledge, is managed and deployed through Azure, and gathers business insights via Linux-based Live Video Analytics. At Ignite 2021, we published a set of simple steps to help you bring LVA and EFLOW together and unleash the power of LVA’s media graph on Windows IoT Edge devices. The sketch below illustrates how a media graph can be driven once the module is deployed.
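To make the media graph workflow concrete, here is a minimal sketch of driving the LVA module from the cloud via IoT Hub direct methods, following the LVA 2.0 conventions. The device ID, topology file, camera URL, and connection-string environment variable are illustrative assumptions; your deployment will use its own names.

```python
# Minimal sketch (assumptions noted in comments): invoke LVA 2.0 direct methods
# on the "lvaEdge" module of an EFLOW-hosted IoT Edge device to register and
# activate a media graph. Requires the azure-iot-hub package.
import json
import os

from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

DEVICE_ID = "eflow-sample-device"  # hypothetical IoT Edge device ID
MODULE_ID = "lvaEdge"              # LVA on IoT Edge module name used in the quickstarts

registry_manager = IoTHubRegistryManager(os.environ["IOTHUB_CONNECTION_STRING"])

def invoke_lva_method(name: str, payload: dict) -> dict:
    """Call an LVA direct method on the edge module and return its response payload."""
    method = CloudToDeviceMethod(method_name=name, payload=payload,
                                 response_timeout_in_seconds=30)
    response = registry_manager.invoke_device_module_method(DEVICE_ID, MODULE_ID, method)
    return response.payload

# 1. Register a graph topology (JSON describing sources, processors, and sinks).
with open("topology.json") as f:          # hypothetical local topology file
    topology = json.load(f)
invoke_lva_method("GraphTopologySet", topology)

# 2. Create a graph instance bound to that topology with camera-specific parameters.
instance = {
    "@apiVersion": "2.0",
    "name": "Sample-Graph-Instance",
    "properties": {
        "topologyName": topology["name"],
        "parameters": [
            {"name": "rtspUrl", "value": "rtsp://camera-host:554/stream1"}  # hypothetical camera
        ],
    },
}
invoke_lva_method("GraphInstanceSet", instance)

# 3. Activate the instance to start processing live video on the edge device.
invoke_lva_method("GraphInstanceActivate", {"@apiVersion": "2.0", "name": "Sample-Graph-Instance"})
```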


[Figure: Live Video Analytics deployed with Azure IoT Edge for Linux on Windows (EFLOW)]


 


As an example, you could be a retail store owner with cameras and network video recorders powered by Windows IoT, where today the video is archived and manually reviewed. With LVA and EFLOW, you can easily deploy Linux-based Azure Live Video Analytics on Windows, leveraging your existing Windows expertise and investments, and go from a basic video recording system to an intelligent video analytics solution that can trigger actions driven by AI. You can also learn more about the features and deployment of EFLOW, which is currently in public preview.


 


Live Video Analytics with Azure Percept


 


At Ignite 2021, our leadership team announced Azure Percept, which extends AI to the edge with an end-to-end platform that integrates Intel Movidius Myriad X vision processing unit (VPU) hardware accelerators with Azure AI and Azure IoT services, and is designed to be simple to use and ready to go with minimal setup.


 


Percept helps customers overcome one of the key challenges of edge AI: navigating end-to-end solution creation. As a solution builder, you might already have a working AI model that you want to use as part of an end-to-end video analytics solution. We have partnered with the Azure Percept team to provide you with a reference solution. You can get started today by ordering a dev kit and using the code on GitHub.


 


As seen in the reference solution’s architecture below, Azure Percept leverages LVA to record video to the cloud so that, combined with analytics metadata from the AI model, you get a solution for counting objects in pre-defined zones. You can visualize the results using LVA’s video streaming and playback capabilities; a minimal sketch of the zone-counting logic follows the diagram.


[Figure: Azure Percept reference solution architecture with LVA]
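To illustrate the kind of zone-counting logic the reference solution performs, here is a minimal, self-contained sketch. The inference payload shape (an entity tag plus a normalized bounding box) is an assumption modeled on typical LVA inference events, not the reference solution’s exact schema.

```python
# Minimal sketch: count detections whose bounding-box centers fall inside a
# pre-defined zone. Coordinates are assumed to be normalized to [0, 1].
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_polygon(point: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: True if the point lies inside the polygon."""
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def count_in_zone(inferences: List[dict], zone: List[Point], label: str = "person") -> int:
    """Count detections of `label` whose box center lies inside `zone`."""
    count = 0
    for inference in inferences:
        entity = inference.get("entity", {})
        if entity.get("tag", {}).get("value") != label:
            continue
        box = entity.get("box", {})  # assumed keys: l, t, w, h (left, top, width, height)
        center = (box["l"] + box["w"] / 2, box["t"] + box["h"] / 2)
        if point_in_polygon(center, zone):
            count += 1
    return count

# Example: a rectangular zone covering the left half of the frame.
zone = [(0.0, 0.0), (0.5, 0.0), (0.5, 1.0), (0.0, 1.0)]
sample = [{"type": "entity",
           "entity": {"tag": {"value": "person", "confidence": 0.92},
                      "box": {"l": 0.1, "t": 0.2, "w": 0.1, "h": 0.3}}}]
print(count_in_zone(sample, zone))  # -> 1
```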


 


 


LVA with Intel’s OpenVINO DL Streamer – Edge AI Extension


 


Last year, we announced an integration of LVA with Intel’s OpenVINO Model Server – Edge AI Extension module via LVA’s HTTP extension processor. This enabled our customers to run AI inferences such as object detection and classification on a variety of Intel hardware architectures (CPU, iGPU, VPU) at the edge and use cloud services like Azure Media Services and Azure IoT. At Ignite 2021, with the announcement of the OpenVINO DL Streamer – Edge AI Extension module, we are enabling additional capabilities over a highly performant gRPC extension processor while keeping the core OpenVINO inference engine the same, so it scales across Intel architectures. With this integration you can now get object detection, classification, and tracking for high-frame-rate video across multiple classes; a sketch of the gRPC extension node you would add to a media graph topology is shown below. See this tutorial for more details.
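As a rough sketch of how the gRPC extension processor plugs into a media graph, here is the kind of node you would add to an LVA 2.0 topology to send frames to the DL Streamer – Edge AI Extension module. The node names, module hostname, port, frame size, and shared-memory size are illustrative assumptions; adjust them to match your deployment and the module’s documentation.

```python
# Illustrative LVA 2.0 gRPC extension node, expressed as a Python dict so it can
# be embedded in the topology JSON sent via GraphTopologySet. Values marked
# "hypothetical" or "illustrative" are assumptions, not required identifiers.
grpc_extension_node = {
    "@type": "#Microsoft.Media.MediaGraphGrpcExtension",
    "name": "grpcExtension",                              # hypothetical node name
    "endpoint": {
        "@type": "#Microsoft.Media.MediaGraphUnsecuredEndpoint",
        "url": "tcp://openvinoEdgeAIExtension:5001",      # hypothetical module hostname and port
    },
    "dataTransfer": {
        "mode": "sharedMemory",            # shared memory keeps the high-frame-rate path efficient
        "SharedMemorySizeMiB": "75",       # illustrative size; tune to your frame size and rate
    },
    "image": {
        "scale": {"mode": "pad", "width": "416", "height": "416"},   # illustrative input size
        "format": {"@type": "#Microsoft.Media.MediaGraphImageFormatRaw",
                   "pixelFormat": "bgr24"},
    },
    "inputs": [{"nodeName": "rtspSource"}],  # assumes an upstream RTSP source node named "rtspSource"
}
```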


 


With pre-validated configurations, pre-trained models, and scalable hardware, users can jump-start solutions that improve business efficiency across a variety of use cases such as retail, industrial, healthcare, and smart cities. For example, with the vehicle classification model you can determine the type of each vehicle and add your own business logic, e.g., validating that only certain vehicle types are parked in a designated area (see the sketch below). With the object tracker you can follow objects of interest and map them on a timeline.
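As an example of that kind of business logic, here is a minimal sketch that flags vehicles whose classified type is not allowed in a designated parking area. The payload shape and attribute names are assumptions modeled on typical detection-plus-classification results, not the exact DL Streamer output schema.

```python
# Minimal sketch: flag detected vehicles whose classified type is not allowed
# in a designated area. Payload and attribute names are illustrative assumptions.
ALLOWED_TYPES = {"car", "van"}  # hypothetical policy for this parking area

def check_parking_policy(inferences: list) -> list:
    """Return human-readable violations for disallowed vehicle types."""
    violations = []
    for inference in inferences:
        entity = inference.get("entity", {})
        if entity.get("tag", {}).get("value") != "vehicle":
            continue
        # Assumed: classification attributes ride along with each detection.
        attributes = {a["name"]: a["value"] for a in entity.get("attributes", [])}
        vehicle_type = attributes.get("type", "unknown")
        if vehicle_type not in ALLOWED_TYPES:
            violations.append(f"{vehicle_type} detected in a car/van-only area")
    return violations

sample = [{"type": "entity",
           "entity": {"tag": {"value": "vehicle", "confidence": 0.88},
                      "attributes": [{"name": "type", "value": "truck", "confidence": 0.81}]}}]
print(check_parking_policy(sample))  # -> ['truck detected in a car/van-only area']
```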


 


Get Started Today!



  • Deploy LVA with Intel DL Streamer – Edge AI Extension using this tutorial

  • Explore and deploy Intel DL Streamer – Edge AI Extension Module from Azure Marketplace

  • Watch the Intel Ignite 2021 session


 


[Figure: LVA media graph extended with a gRPC extension to the Intel DL Streamer – Edge AI Extension module]


 


LVA with NVIDIA’s DeepStream SDK – AI Skills and AI Acceleration


 


LVA and the NVIDIA DeepStream SDK can be used to build hardware-accelerated AI video analytics apps that combine the power of NVIDIA graphics processing units (GPUs) with Azure cloud services, such as Azure Media Services, Azure Storage, Azure IoT, and more.


 


NVIDIA recently released DeepStream SDK 5.1, bringing support for NVIDIA’s Ampere architecture GPUs for massive inference acceleration. With this release, you can use LVA to build video workflows that span edge and cloud, and combine them with DeepStream SDK 5.1 pipelines to extract insights from video.


 


 


[Figure: LVA media graph topology with an NVIDIA DeepStream pipeline]


 


Imagine you work for a county or city government that wants to understand traffic patterns at certain times, a retailer that wants to deliver curbside pickup for certain vehicle types, or a parking lot operator that wants to understand lot utilization and traffic flows and monitor them in real time. With LVA managing video workflows, NVIDIA DeepStream providing AI optimized for the underlying hardware architecture, and the power of the Azure platform, you can now develop such video analytics pipelines from cloud to edge.


 


You can explore samples on GitHub that showcase the composability of both platforms and have been tested for vehicle detection, classification, and tracking on high-frame-rate video. Feel free to add object classes such as bicycle and road sign to leverage the detection and tracking capabilities.


Get Started Today!


 


In closing, we’d like to thank everyone who is already participating in the Live Video Analytics on IoT Edge public preview. For those of you who are new to our technology, we encourage you to get started today with these helpful resources:



And finally, the LVA product team wants to hear about your experiences with LVA. Please feel free to contact us via TechCommunity to ask questions and provide feedback, including what future scenarios you would like to see us focus on.


 


**Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.


 

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.