This article is contributed. See the original author and article here.


In this post we’ll demonstrate how an NVIDIA® Jetson Nano™ device running AI on the IoT edge, combined with the power of the Azure platform, can form an end-to-end AI-on-the-edge solution. We are going to use a custom AI model developed on an NVIDIA® Jetson Nano™ device, but you can use any AI model that fits your needs. We will also see how to leverage the new Azure SDKs to create a complete Azure solution.

This post is divided into three sections. The Architecture overview section discusses the overall architecture at a high level. The Authentication Front-end section discusses the starting and ending points of the system flow. The Running AI on the Edge section details how the NVIDIA® Jetson Nano™, as an IoT Edge device, runs AI and leverages the Azure SDK to communicate with the Azure platform.


Architecture overview

There are two main components of the architecture:

  • The Authentication Front-end, which is responsible for creating a request that is added to an Azure Storage Queue.

  • The device-side code, Python code that constantly listens to the Azure Storage Queue for new requests. It picks up each request and runs AI according to it. Once the device-side code detects the target object, it captures an image of the detected object and posts the captured image to Azure Storage Blob.

The underlying core of the architecture is the use of the new Azure SDKs by both the Authentication Front-end and the AI running on the edge: the Authentication Front-end adds requests to the Azure Storage Queue for AI processing, and the device-side Python code updates Azure Storage Blob with the captured image.
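The request itself travels through the queue as a plain pipe-delimited string of the form requestId|className|thresholdPercentage, as the controller and device code later in this post show. Here is a minimal Python sketch of that contract; the function names are illustrative, not part of the solution's code:

```python
import uuid

def encode_request(request_id: uuid.UUID, class_name: str, threshold: int) -> str:
    # Matches the format the Authentication Front-end queues:
    # "<requestId>|<className>|<thresholdPercentage>"
    return f"{request_id}|{class_name}|{threshold}"

def decode_request(message: str):
    # Mirrors the split("|") performed by the device-side code.
    request_id, class_name, threshold = message.split("|")
    return request_id, class_name, int(threshold)
```

A round trip through these two functions reproduces exactly the message the device-side code consumes.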


Control flow




At a high-level, the following actions are taking place:

  1. The Authentication Front-end initiates the flow by completing the first authentication factor. Once the first factor is complete, the flow is passed to the NVIDIA® Jetson Nano™ device.

  2. The NVIDIA® Jetson Nano™ device runs the custom AI model using the code shown in the following sections. The result of this step is completion of the second authentication factor.

  3. The control is passed back to the Authentication Front-end, which validates the results that came from the NVIDIA® Jetson Nano™ device.


Authentication Front-end

The role of the Authentication Front-end is to initiate the two-factor flow and interact with Azure using the new Azure SDKs.

Code running on Authentication Front-end

The code running on the Authentication Front-end mainly comprises two controllers.

The following describes the code for each of those controllers.


The SendMessageController.cs’s main job is to complete the first factor of the authentication. The code simulates completion of the first factor by simply checking that the username and password are the same. In a real-world implementation, this should be replaced with a proper secure authentication mechanism; an example of how to implement one is described in the article Authentication and authorization in Azure App Service and Azure Functions. The second task of SendMessageController.cs is to queue the messages up for the second factor. This is done using the new Azure SDKs.

Here is the code snippet for SendMessageController.cs:



        public IActionResult Index()
        {
            string userName = string.Empty;
            string password = string.Empty;

            if (!string.IsNullOrEmpty(Request.Form["userName"]))
            {
                userName = Request.Form["userName"];
            }

            if (!string.IsNullOrEmpty(Request.Form["password"]))
            {
                password = Request.Form["password"];
            }

            // Simulation of first factor authentication presented here.
            // For real world example visit:
            if (!userName.Equals(password, StringComparison.InvariantCultureIgnoreCase))
            {
                return View(null);
            }

            var objectClassificationModel = new ObjectClassificationModel()
            {
                ClassName = userName,
                RequestId = Guid.NewGuid(),
                ThresholdPercentage = 70
            };

            _ = QueueMessageAsync(objectClassificationModel, storageConnectionString);

            return View(objectClassificationModel);
        }

        public static async Task QueueMessageAsync(ObjectClassificationModel objectClassificationModel, string storageConnectionString)
        {
            string requestContent = $"{objectClassificationModel.RequestId}|{objectClassificationModel.ClassName}|{objectClassificationModel.ThresholdPercentage}";

            // Instantiate a QueueClient which will be used to create and manipulate the queue
            QueueClient queueClient = new QueueClient(storageConnectionString, queueName);

            // Create the queue if it does not already exist
            var createdResponse = await queueClient.CreateIfNotExistsAsync();
            if (createdResponse != null)
            {
                Console.WriteLine($"Queue created: '{queueClient.Name}'");
            }

            await queueClient.SendMessageAsync(requestContent);
        }




In the code snippet above, the first factor is simulated by comparing the username and password. After that simulation, the code sends a message to an Azure Storage Queue using the new Azure SDK.


The ObjectClassificationController.cs comes into play after the custom AI code on the edge has completed. It validates that the request has been completed by the NVIDIA® Jetson Nano™ device and then shows the resulting captured image of the detected object.

Here is the code snippet:



        public IActionResult Index(string requestId, string className)
        {
            string imageUri = string.Empty;
            Guid requestGuid = default(Guid);
            if (Guid.TryParse(requestId, out requestGuid))
            {
                BlobContainerClient blobContainerClient = new BlobContainerClient(storageConnectionString, containerName);
                foreach (BlobItem blobItem in blobContainerClient.GetBlobs(BlobTraits.All))
                {
                    if (string.Equals(blobItem?.Name, $"{requestId}/{imageWithDetection}", StringComparison.InvariantCultureIgnoreCase))
                    {
                        imageUri = $"{blobContainerClient.Uri.AbsoluteUri}/{blobItem.Name}";
                    }
                }

                ObjectClassificationModel objectClassificationModel = new ObjectClassificationModel()
                {
                    ImageUri = new Uri(imageUri),
                    RequestId = requestGuid,
                    ClassName = className
                };

                return View(objectClassificationModel);
            }

            return View(null);
        }

        public async Task<IActionResult> HasImageUploaded(string imageContainerGuid)
        {
            BlobContainerClient blobContainerClient = new BlobContainerClient(storageConnectionString, "jetson-nano-object-classification-responses");
            await foreach (BlobItem blobItem in blobContainerClient.GetBlobsAsync(BlobTraits.All))
            {
                if (string.Equals(blobItem?.Name, $"{imageContainerGuid}/{imageWithDetection}", StringComparison.InvariantCultureIgnoreCase))
                {
                    return Json($"{blobContainerClient.Uri.AbsoluteUri}/{blobItem.Name}");
                }
            }
            return Json(string.Empty);
        }




The code snippet above shows two methods that use the new Azure SDK. The HasImageUploaded method queries Azure Blob Storage to determine whether the image has been uploaded. The Index method simply gets the image reference from Azure Blob Storage. For more information on reading Azure Blob Storage with the new Azure SDK, see Quickstart: Azure Blob Storage client library v12 for .NET.
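Both methods rely on the naming convention that the device uploads the capture as <requestId>/imageWithDetection.jpg inside the responses container. A small Python sketch of that path and URI construction — the helper name is hypothetical, while the file name matches the device code shown later:

```python
def blob_image_uri(container_uri: str, request_id: str,
                   image_name: str = "imageWithDetection.jpg") -> str:
    # The device uploads the capture under "<requestId>/<fileName>", so the
    # front-end can reconstruct the full blob URI without extra lookups.
    blob_name = f"{request_id}/{image_name}"
    return f"{container_uri.rstrip('/')}/{blob_name}"
```

Given the container URI and the request id from the queue message, this yields the same URI string the controllers build.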

The following steps are taken on the Authentication Front-end:

  1. User initiates login by supplying username and password.

  2. User is authenticated on the first factor using the combination of username and password.

  3. On successful completion of the first factor, the web interface creates a request and sends it to the Azure Storage Queue.




  4. The NVIDIA® Jetson Nano™ device, which is listening to the Azure Storage Queue, initiates and completes the second factor.

  5. Once the second factor is completed, the NVIDIA® Jetson Nano™ device posts the captured image for the second factor to Azure Storage Blob.


The web interface then shows the captured image, completing the flow.



Running AI on the Edge


Device pre-requisites

  1. NVIDIA® Jetson Nano™ device with camera attached to capture video image.

  2. Custom pre-trained model deployed on the device.

  3. Location path to the custom model file (.onnx file). This is passed as the --model parameter to the command mentioned in the Steps section. For this tutorial we have prepared a custom model and saved it as “~/gil_background_hulk/resnet18.onnx”.

  4. Location path to the classification text file (labels.txt). This is passed as the --labels parameter to the command mentioned in the Steps section.

  5. Class name of the target object that needs to be detected. This is passed as --classNameForTargetObject.

  6. Azure libraries for Python. Install the azure-iot-device package for IoTHubDeviceClient, along with the azure-storage-queue and azure-storage-blob packages that the device-side code imports.



pip3 install azure-iot-device azure-storage-queue azure-storage-blob




Code running AI on the Edge

If we look at the technical specifications for the NVIDIA® Jetson Nano™ device, we will notice that it is based on the ARM architecture and runs Ubuntu (in my case, release 18.04 LTS). With that knowledge it became clear that Python would be a good choice of language for the device side. The device-side code is shown below:




import jetson.inference
import jetson.utils

import argparse
import sys

import os
import asyncio
from azure.iot.device.aio import IoTHubDeviceClient
from azure.storage.queue.aio import QueueClient
from azure.storage.blob.aio import BlobServiceClient, BlobClient, ContainerClient

# A helper class to support async blob and queue actions.
class StorageHelperAsync:
    async def block_blob_upload_async(self, upload_path, savedFile):
        blob_service_client = BlobServiceClient.from_connection_string(
            os.getenv("AZURE_STORAGE_CONNECTION_STRING"))  # connection string elided in the original post
        container_name = "jetson-nano-object-classification-responses"

        async with blob_service_client:
            # Instantiate a new ContainerClient
            container_client = blob_service_client.get_container_client(container_name)

            # Instantiate a new BlobClient
            blob_client = container_client.get_blob_client(blob=upload_path)

            # Upload content to block blob
            with open(savedFile, "rb") as data:
                await blob_client.upload_blob(data)
                # [END upload_a_blob]

    # Code for listening to Storage queue
    async def queue_receive_message_async(self):
        # Uses the async QueueClient from azure.storage.queue.aio
        queue_client = QueueClient.from_connection_string(
            os.getenv("AZURE_STORAGE_CONNECTION_STRING"),  # connection string elided in the original post
            "jetson-nano-object-classification-requests")  # queue name assumed for illustration

        async with queue_client:
            response = queue_client.receive_messages(messages_per_page=1)
            async for message in response:
                queue_message = message
                await queue_client.delete_message(message)
                return queue_message

async def main():

    # Code for object detection
    # parse the command line
    parser = argparse.ArgumentParser(
        description="Classifying an object from a live camera feed and once successfully classified a message is sent to Azure IoT Hub")
    parser.add_argument(
        "input_URI", type=str, default="", nargs="?", help="URI of the input stream")
    parser.add_argument(
        "output_URI", type=str, default="", nargs="?", help="URI of the output stream")
    parser.add_argument(
        "--model", type=str,
        help="Pre-trained model to load (see below for options)")
    parser.add_argument(
        "--camera", type=str, default="0",
        help="Index of the MIPI CSI camera to use (e.g. CSI camera 0)\nor for V4L2 cameras, the /dev/video device to use.\nBy default, MIPI CSI camera 0 will be used.")
    parser.add_argument(
        "--width", type=int, default=1280,
        help="Desired width of camera stream (default is 1280 pixels)")
    parser.add_argument(
        "--height", type=int, default=720,
        help="Desired height of camera stream (default is 720 pixels)")
    parser.add_argument(
        "--classNameForTargetObject", type=str,
        help="Class name of the object that is required to be detected. Once object is detected and threshold limit has crossed, the message would be sent to Azure IoT Hub")
    parser.add_argument(
        "--detectionThreshold", type=int,
        help="The threshold value 'in percentage' for object detection")

    opt = parser.parse_known_args()[0]

    # load the recognition network
    net = jetson.inference.imageNet(opt.model, sys.argv)

    # create the camera and display
    font = jetson.utils.cudaFont()
    camera = jetson.utils.gstCamera(opt.width, opt.height, opt.camera)
    display = jetson.utils.glDisplay()
    input = jetson.utils.videoSource(opt.input_URI, argv=sys.argv)

    # Fetch the connection string from an environment variable
    conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")

    device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)
    await device_client.connect()

    counter = 1
    still_looking = True
    # process frames until user exits
    while still_looking:
        storage_helper = StorageHelperAsync()
        queue_message = await storage_helper.queue_receive_message_async()

        print("Waiting for request queue_messages")
        if queue_message:
            has_new_message = True
            queue_message_array = queue_message.content.split("|")
            request_content = queue_message.content
            correlation_id = queue_message_array[0]
            class_for_object_detection = queue_message_array[1]
            threshold_for_object_detection = int(queue_message_array[2])

            while has_new_message:
                # capture the image
                # img, width, height = camera.CaptureRGBA()
                img = input.Capture()

                # classify the image
                class_idx, confidence = net.Classify(img)

                # find the object description
                class_desc = net.GetClassDesc(class_idx)

                # overlay the result on the image
                font.OverlayText(
                    img, img.width, img.height,
                    "{:05.2f}% {:s}".format(confidence * 100, class_desc),
                    5, 5, font.White, font.Gray40)

                # render the image
                display.RenderOnce(img, img.width, img.height)

                # update the title bar
                display.SetTitle(
                    "{:s} | Network {:.0f} FPS | Looking for {:s}".format(
                        net.GetNetworkName(), net.GetNetworkFPS(),
                        class_for_object_detection))

                # check whether the requested object was detected above the threshold
                if (
                    class_desc == class_for_object_detection
                    and (confidence * 100) >= threshold_for_object_detection
                ):
                    message = request_content + "|" + str(confidence * 100)
                    print(
                        "Found {:s} at {:05.2f}% confidence".format(
                            class_desc, confidence * 100))
                    display.RenderOnce(img, img.width, img.height)
                    savedFile = "imageWithDetection.jpg"
                    jetson.utils.saveImageRGBA(savedFile, img, img.width, img.height)

                    # Build the blob path using the request id as the folder name
                    # and the local file name as the blob name; StorageHelperAsync
                    # creates the blob service and container clients for the upload.
                    folderMark = "/"
                    upload_path = folderMark.join([correlation_id, savedFile])

                    await storage_helper.block_blob_upload_async(upload_path, savedFile)

                    await device_client.send_message(message)
                    still_looking = True
                    has_new_message = False

    await device_client.disconnect()

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())




Here is the link for code to try out:

Here is the command line to execute the code:



export DATASET=~/IoT/datasets/gil_background_hulk

python3 --model=gil_background_hulk/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=$DATASET/labels.txt --camera=csi://0 --classNameForTargetObject=hulk --detectionThreshold=95
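Note that this command carries flags the device script does not define itself (--input_blob, --output_blob, --labels); they reach the network loader through sys.argv, while parse_known_args keeps them from tripping our own parser. A standalone illustration of that parsing strategy:

```python
import argparse

# Reproduce the parsing strategy: define only the flags our code needs,
# and let unknown flags (e.g. --labels) pass through untouched.
parser = argparse.ArgumentParser()
parser.add_argument("--classNameForTargetObject", type=str)
parser.add_argument("--detectionThreshold", type=int)

argv = ["--classNameForTargetObject=hulk", "--detectionThreshold=95",
        "--labels=labels.txt"]  # --labels is unknown to this parser
opt, unknown = parser.parse_known_args(argv)
print(opt.classNameForTargetObject, opt.detectionThreshold, unknown)
```

The unrecognized flags come back in a separate list instead of raising an error, which is what lets jetson.inference consume them from sys.argv.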




Code flow



The following actions take place in the Python code running on the device side:

  1. The device code constantly reads requests arriving on the Azure Storage Queue.

  2. Once a request is received, the code extracts which object to detect and what threshold to use for object detection. The example in the diagram shows the message as: 0000-0000-0000-0000|hulk|80. The code will extract “hulk” as the object that needs to be detected and “80” as the threshold value. This format is just an example used to provide input values to the device-side code.

  3. Using the custom AI model (example: ~/gil_background_hulk/resnet18.onnx) running on the Jetson Nano device, the object is searched for based on the request.

  4. As soon as the object is detected, the Python code running on the Jetson Nano device posts the captured image to Azure Blob Storage.

  5. In addition, the code running on the Jetson Nano device sends a message to Azure IoT Hub informing it of a correct match for the request.

Once the device-side code completes the flow, the image of the detected object has been posted to Azure Storage Blob and a message has been sent to Azure IoT Hub. The web interface then takes control and completes the rest of the steps.
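The match test in step 4 reduces to a simple predicate: the classified label must equal the requested class, and the confidence (scaled to a percentage) must meet the requested threshold. A sketch of that check, using an illustrative function name:

```python
def is_target_detected(class_desc: str, confidence: float,
                       target_class: str, threshold_pct: int) -> bool:
    # confidence arrives from imageNet.Classify() as a 0..1 fraction,
    # while the queued request carries the threshold in percent.
    return class_desc == target_class and (confidence * 100) >= threshold_pct
```

This is the same condition the device-side loop evaluates before capturing and uploading the image.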



In this post we have seen how simple it is to run AI on the edge using an NVIDIA® Jetson Nano™ device while leveraging the Azure platform. The Azure SDKs are designed to work well with Python on Linux-based IoT devices. We have also seen how the Azure SDK plays the role of stitching different components together into a complete end-to-end solution.




Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
