Azure Marketplace new offers – Volume 177

This article is contributed. See the original author and article here.

We continue to expand the Azure Marketplace ecosystem. For this volume, 115 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace


Accelario DataOps Platform for MySQL Databases: Accelario DataOps Platform for MySQL Databases accelerates and automates self-service provisioning and data refreshes from on-premises to the cloud and vice versa with minimal downtime. Speed up your go-to-market and application delivery without sacrificing performance.


Accelario DataOps Platform for PostgreSQL Databases: Accelerate your application development and cloud migration with Accelario DataOps Platform for PostgreSQL Databases. This self-service platform streamlines database copy and speeds up DevOps pipelines. Reduce wait time for any type of test data to minutes. 


ARGOS – Contextual Cloud Security Monitoring: Get a precise picture of your cloud security posture with ARGOS’ end-to-end security service. Tools like resource graphs and exploitability checks enable real-time detection and remediation so you can focus on securely deploying applications with speed and efficiency.


Bugzilla Issue Tracker: Bugzilla Issue Tracker on Ubuntu Server 20.04 is an open-source bug tracking solution that enables users to stay connected with their clients or employees while keeping track of outstanding bugs and issues throughout the software development life cycle.


Cloud Membership Management System: Caloudi’s AI-powered cloud membership management platform helps consolidate and manage all information related to your retail customer on a single platform. Personalize your client’s shopping experience with targeted messaging and promotions.


DesktopReady for MSP: DesktopReady for MSP is a Microsoft Azure Virtual Desktop automation platform that enables managed service providers to deliver Windows 10 desktops on Azure. Set up your modern workspace for improved agility and ongoing cost savings.


EcoVadis Sustainability Ratings: Integrate sustainability in your procurement policy, processes, and tools with EcoVadis Sustainability Ratings. This scalable SaaS solution helps screen and assess suppliers’ sustainability performance. This offer is for existing EcoVadis customers only.


MediaValet Digital Asset Management: This secure, cost-effective digital asset management platform seamlessly integrates with your existing tools and allows marketing teams to create, organize, and distribute high-value digital assets across teams, departments, and partners. 


Metabase on Debian: Powered by Niles partners, Metabase is a business intelligence and data visualization tool with SQL capabilities. It offers a simple graphical interface to power in-application analytics without writing any SQL.


Prescript EMR: Prescript EMR is an integrated healthcare platform for optimizing patient care in both urban and rural settings. Powered by a clinical management system it offers a single portal for administering appointments, electronic medical records (EMR), billing, and more.


Procore SharePoint Integration: SyncEzy’s offering allows you to access Procore photos and documents within Microsoft SharePoint and discuss site documents with your construction crew from a central cloud location. Access work files within Microsoft Teams and more.


QuerySurge: Continuously detect data issues in your delivery pipeline with this automated data testing solution from RTTS. QuerySurge optimizes your critical data by integrating Microsoft Power BI analytics in your DataOps pipeline and improves ROI.


Revenue Cycle Management: Inforich’s solution leverages AI, automation, and analytics to streamline the insurance and clinical aspects of healthcare by linking administrative, insurance, and other financial information with the patient’s treatment plan.


Scheduling as a Service: This employee management scheduling software creates custom scheduling templates using AI to optimize resources and workforce requirements. Improve your field services and control costs by assigning employees based on total labor expense.


Seascape for Notes: Seascape for Notes is an archiving solution for Lotus Notes (HCL Notes and Domino). It enables administrators to archive entire Notes applications, mail files, and other custom databases using a streamlined archiving process.


ShiftLeft CORE: ShiftLeft CORE uses rapid, repeatable static application security testing to help developers fix 91% of new vulnerabilities within the code they are working on in two discovery sprints. Release secure code at scale with this easy-to-use SaaS platform.


Symend: Symend’s relationship-based approach uses behavioral science and analytics to empower customers to resolve past due bills before they reach collections. Determine which strategies will empathetically help customers while lowering your operating costs.


Vitals KPI Management for Healthcare: Vitals KPIM uses artificial intelligence and analytics to improve healthcare services by creating success metrics that align patient satisfaction, business processes and team collaboration. This solution is only available in Chinese.


WhiteSource Open Source Security Management: WhiteSource Open Source Security Management offers an agile open source security and license compliance management solution that makes it easy to develop secure software without compromising speed or agility.



Go further with workshops, proofs of concept, and implementations


AKS Container Platform Build: 16-Week Implementation: In this engagement, BlakYaks will implement a secure and scalable Microsoft Azure Kubernetes Service (AKS) platform built with code for hosting container workloads at scale on Microsoft Azure. 


AKS Container Platform Design: 8-Week Implementation: Utilizing enterprise-grade designs, patterns, and operational frameworks, BlakYaks will provide a comprehensive design engagement for a secure and compliant Microsoft Azure Kubernetes Service platform.


Azure IoT Jumpstart Kit: 1-Day Implementation Workshop: ACP IT Solutions will help connect your industrial sensors, machines, and production processes with Azure IoT using cost-effective, ready-made retrofitting product bundles. This offer is only available in German.


Azure Managed Services: 12-Month Implementation: CoreBTS’ custom Microsoft Azure managed service will help cost-optimize your business processes by enabling your teams to focus on strategic tasks rather than day-to-day operations.


Azure Migration: 4-Week Implementation: In this collaborative engagement, MNP Digital will seamlessly migrate your servers to Microsoft Azure to optimize your cloud usage and ensure a sustainable foundation, structure, governance, and security for your digital transformation.


Azure Purview Foundations: 3-Week Implementation: Coretek will help you create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage with Azure Purview Foundations.


Azure Real-Time IoT Data Analytics: 20-Day Proof of Concept: ScienceSoft’s proof of concept is designed to help companies get real-time visibility into operational processes and enable intelligent automation using Azure IoT Hub, Azure Stream Analytics and Microsoft Power BI.


Azure Sentinel Onboarding: 2-Week Proof of Concept: Enhance your organization’s threat detection and response capabilities in this proof of concept. The experts from Stripe OLT Consulting will help your organization modernize its security operation by onboarding Microsoft Azure Sentinel into your own tenant.


Azure Sentinel: 2-Day Workshop: In this workshop you will partner with ECF Data to modernize your security operation and capture threat intelligence using Microsoft Azure Sentinel and move your organization’s defenses from a reactive state to a proactive one.


Azure Services‎: 1-Week Implementation: Manapro Consultants will demonstrate how Microsoft Azure services can transform your applications and lower operational costs as you lay the foundations for your cloud migration journey. This offer is available only in Spanish.


Azure Site Recovery and Backup: 3-Week Proof of Concept: Using your existing on-premises and/or cloud servers, Insight will guide you through the concepts of cloud backup and disaster recovery and configure a working prototype. Learn how Azure Site Recovery can simplify and reduce the cost of your disaster recovery solution.


Azure Site Reliability Engineering (Managed Service): 12-Month Implementation: BlakYaks will create and implement a custom managed service for all your Microsoft Azure hosted platforms to keep them up-to-date and aligned to your strategic requirements. Cost-optimize your business processes and site reliability engineering operations.


Azure Virtual Desktop: 2-Week Proof of Concept: Insight’s proof of concept will give you the foundational knowledge to configure a secure, scalable, virtual desktop infrastructure using Microsoft Azure Virtual Desktop. Empower your employees with a flexible work environment.


Azure Virtual Desktop: 5-Week Implementation: Is your organization struggling with transitioning to remote work? 3Cloud will deliver an Azure Virtual Desktop deployment tailored to meet your operational and security needs. You’ll learn about various deployment scenarios and how to enable remote work for your organization.


Azure Virtual Desktop & Windows 365 Managed Services: The experts from Cubesys will develop a virtualized desktop strategy that includes a roadmap and cost-benefit analysis for an enterprise-wide implementation of Microsoft Azure Virtual Desktop and Windows 365. Learn how you can access your desktop and apps from anywhere.


Backup as a Service: 12-Month Implementation: Using Microsoft Azure and Commvault, Databarracks’ implementation will proactively resolve any security issues and help your organization monitor, manage, and restore backups so your critical data is always protected.


Cloud-Native Consulting: 2-Week Implementation: Alerant engineers will develop a holistic understanding of your business needs before creating a roadmap to discover, plan, and develop cloud-native solutions to speed up your enterprise’s digital transformation.


Data Innovation Studio: 2-Week Workshop: In this innovation workshop, the experts from Data#3 will help your organization further its analytics and AI capabilities by delivering a tailored roadmap using a modern data platform reference architecture for Azure services.


Data Lake for Mortgage Servicing: 4-Week Implementation: Invati will simplify and reduce loan servicing costs and improve business insights by mapping your team’s mortgage data sources to a single point of access using Azure Data Lake and Azure Synapse Analytics.


Data Quality & MDM: 4-Week Proof of Concept: Using machine learning models built on Microsoft Azure components, the experts at Tredence will improve your data by removing duplicates and inconsistent records. Access reliable data for better business insights.


Data Security Protection: 4-Week Workshop: Freedom Systems will help you understand your business’s security requirements and leverage Microsoft Enterprise Mobility and Security platform to protect and secure your organization. 


DevOps Assessment: 1-Day Workshop: Xpirit will leverage their expertise in DevOps and help roll out tooling and methodology as they facilitate your organization’s transition to the cloud. Companies in highly regulated sectors such as defense and finance will benefit from this offer.

Digital Twin Smart Spaces: 3-Month Proof of Concept: T-Systems MMS will identify, optimize, or sublet unused space by tracking the digital version of available physical workspaces in real-time via its Smart Spaces platform using battery-less sensors and Azure IoT services.


Disaster Recovery as a Service with Azure Site Recovery: 12-Month Implementation: Databarracks’ 24/7/365 service is compatible with both Windows and Linux Operating Systems and uses cloud-native solutions like Azure Backup and Azure Site Recovery (ASR) to implement a simple, secure, and cost-effective disaster recovery solution.


Disaster Recovery as a Service with Zerto: 12-Month Implementation: Databarracks will replicate your on-premises or cloud servers using Zerto Virtual Manager into a Zerto Cloud Appliance hosted in Microsoft Azure. At the point of recovery, Zerto uses Azure queues and Azure virtual machine scale sets to accelerate recovery.


Education Diagnostic System: 5-Day Implementation: In this implementation SiES IT will help set up an appraisal platform to assess the value and quality of educational institutions using Microsoft Azure services. This offer is only available in Russian.


Identity Cleanse: 5-Day Workshop: The experts from ITC Secure will consolidate and reconcile all sources of user and account information to assess your environment and improve the security of your ecosystem using Microsoft Azure Active Directory.


Intelligent Hybrid Cloud Platform Hosting Solution: 3-Week Implementation: In this offer, Acer AEB will provide a consistent multi-cloud and on-premises management platform with the successful implementation of Azure Stack HCI (hyperconverged infrastructure) architecture and its integration with Azure Arc. This offer is only available in Chinese.


Linux to Azure Migration: 16-Day Workshop: SVA consultants will help your organization analyze its existing Linux infrastructure and develop a roadmap and business case to move your servers and applications to Microsoft Azure using Microsoft’s Cloud Adoption Framework (CAF). This offer is only available in German.


Microsoft Azure Migration & Deployment: 3-Month Implementation: In this offer, Insight’s specialists will guide you through the adoption and migration of Microsoft Azure and ensure your deployment process is tailored to your organization’s exact business goals and needs.


Migrate to Azure: 15-Day Deployment: Learn how Nebulan’s iterative approach using the Microsoft Cloud Adoption Framework can cost-effectively migrate your top 10 workloads, including Windows and SQL Server, to Microsoft Azure. This offer is only available in Spanish.


Migrate to Azure: 5-Week Implementation: In this offer, Xavor will implement a low-risk, data-driven Microsoft Azure cloud migration solution tailored to your organization’s unique needs. Ensure business continuity and improve performance with governance, automation, and control of multi-cloud environments.


ML Ops Framework Setup: 12-Week Implementation: Tredence’s automated industrialized machine learning operations platform will help you generate higher ROI on your data science investments and offer clear and robust analytical insights with minimal manual effort using Microsoft Azure DevOps.


Modern Data Warehouse: 4-Week Proof of Concept: In this proof of concept, devoteam will demonstrate how its solution built on Azure Data Lake, Azure Analytics, and Azure Synapse can transform and modernize your legacy data landscape. 


Modernize with Azure Kubernetes Service: 5-Day Workshop: The experts at SVA will lead a hands-on workshop to demonstrate how Azure Kubernetes Service (AKS) can provide an agile developer environment in Microsoft Azure while reducing costs and administrative overhead. This offer is only available in German.


Secure OnMesh: 8-Week Proof of Concept: Make security an intrinsic part of your digital fabric with Logicalis’ Secure OnMesh solution. In this engagement you will learn how Secure OnMesh leverages Microsoft Azure Sentinel to protect your entire digital ecosystem with AI-enabled threat hunting capabilities.


Truveta Data Migration: 3-Month Service: Tegria’s service helps healthcare organizations build scalable data pipelines from the cloud to the Truveta healthcare data platform using Microsoft Azure tools like Data Factory and Databricks. Truveta anonymizes and maps patient data and automates file creation.


Windows Server to Azure Migration: 16-Day Workshop: SVA will identify and prioritize all the assets in your on-premises Windows Server environment and help move them to Microsoft Azure using a Cloud Adoption Framework-aligned approach. This offer is only available in German.



Contact our partners

AccessGov
AI and Data Projects: 2-Hour Briefing
Azure Analytics: 5-Day Assessment
Azure Cloud Adoption: 2-Week Assessment
Azure Cloud: 4-Week Assessment
Azure Container Platform Security: 6-Week Assessment
Azure Migration: 3-Day Assessment
Azure Migration Readiness: 2-Week Assessment
Azure Session: 2-Hour Initial Assessment
Azure Virtual Desktop Services
C-Track Comprehensive Court Case Management Solution
Cisco Integrated System for Microsoft Azure Stack
Cloudera Data Platform 7.2.x Runtimes
CloudXR Introductory Offer – Windows Server 2019
Cryptographic Risk Assessment
Cyber Care – Managed SAP Connector for Azure Sentinel
Cybersecurity: 1-Week Assessment
Data Estate Assessment: 2-Week Assessment
DataNeuron: Automated Learning Platform
Data Science & AI: 1-Week Assessment
Digia Cloud Cost Management Reporting
DLO Starter
e4Integrate
eMission Cloud View: 10-Week Assessment
eZuite Cloud ERP
HPC Cluster – CPU Based Cluster on SUSE Enterprise Linux 15.3 HPC
iEduERP
iFIX Intelligent Service Automation
impress.ai
Intelligent Document Processor
iPILOT Teams Direct Routing
IQ3 Cloud – Azure Managed Cloud
Journey to Cloud: 1-Hour Briefing
KIVU Expense
KPN Outbound Email Security Solution
Logicworks Managed Services for Azure
Loopr Data Labelling
Mammography Intelligent Assessment
MIND’s SAP on Azure: 1-Day Briefing
Minimum Viable Cloud: 6-Week Assessment
MT Cloud Control Volumes
Networking Services for Cloud: 1-Hour Briefing
OneDrop
Oracle Database
Orbital Insight Defense and Intelligence
PwC Promo and Assortment Management Tool (RGS)
Qlik Forts
Quality Management Software System
SailPoint Sentinel Integration
Sarus Private Learning
Scale by indigo.ai
Sentiment Analysis (Call Centre) by BiTQ
SINVAD
Thunder Threat Protection System Virtual Appliance for DDoS Protection
Urbana IoT Platform
Versa SASE in vWAN
Wipro Smart Asset Twin

Introducing Azure AD custom security attributes

This article is contributed. See the original author and article here.

This public preview of Microsoft Azure Active Directory (Azure AD) custom security attributes and user attributes in ABAC (Attribute Based Access Control) conditions builds on the previous public preview of ABAC conditions for Azure Storage. Azure AD custom security attributes (custom attributes, hereafter) are key-value pairs that can be defined in Azure AD and assigned to Azure AD objects, such as users, service principals (Enterprise Applications), and Azure managed identities. Using custom attributes, you can add business-specific information, such as the user’s cost center or the business unit that owns an enterprise application, and allow specific users to manage those attributes. User attributes can be used in ABAC conditions in Azure Role Assignments to achieve even more fine-grained access control than resource attributes alone. Azure AD custom security attributes require Azure AD Premium licenses.


 


We created the custom attributes feature based on the feedback we received for managing attributes in Azure AD and ABAC conditions in Azure Role Assignments:



  • In some scenarios, you need to store sensitive information about users in Azure AD, and make sure only authorized users can read or manage this information. For example, store each employee’s job level and allow only specific users in human resources to read and manage the attribute.

  • You need to categorize and report on enterprise applications with attributes such as the business unit or sensitivity level. For example, track each enterprise application based on the business unit that owns the application.

  • You need to improve your security posture by migrating from API access keys and SAS tokens to a centralized and consistent access control (Azure RBAC + ABAC) for your Azure storage resources. API access keys and SAS tokens are not tied to an identity, meaning anyone who possesses them can access your resources. To enhance your security posture in a scalable manner, you need user attributes along with resource attributes to manage access to millions of Azure storage blobs with few role assignments.


Let’s take a quick look at how you can manage attributes, use them to filter Azure AD objects, and scale access control in Azure.


 


Step 1: Define attributes in Azure AD


The first step is to create an attribute set, which is a collection of related attributes. For example, you can create an attribute set called “marketing” to refer to the attributes related to the marketing department. The second step is to define the attributes inside the attribute set, along with each attribute’s characteristics: for example, whether only pre-defined values are allowed, and whether an attribute can be assigned a single value or multiple values. In this example, there are three allowed values for the project attribute (Cascade, Baker, and Skagit), and a user can be assigned only one of the three. The picture below illustrates the example.


 


[Image: the “project” attribute defined in the “marketing” attribute set, with allowed values Cascade, Baker, and Skagit]
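This can also be done programmatically. The sketch below is an approximation based on the Microsoft Graph directory attribute endpoints (at the time of this preview they live under /beta; names and payload shapes may evolve):

POST https://graph.microsoft.com/beta/directory/attributeSets
Content-Type: application/json

{
  "id": "marketing",
  "description": "Attributes for the marketing department",
  "maxAttributesPerSet": 25
}

POST https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions
Content-Type: application/json

{
  "attributeSet": "marketing",
  "name": "project",
  "type": "String",
  "status": "Available",
  "isCollection": false,
  "isSearchable": true,
  "usePreDefinedValuesOnly": true,
  "allowedValues": [
    { "id": "Cascade", "isActive": true },
    { "id": "Baker", "isActive": true },
    { "id": "Skagit", "isActive": true }
  ]
}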


 


Step 2: Assign attributes to users or enterprise applications


Once attributes are defined, they can be assigned to users, enterprise applications, and Azure managed identities.


 



 


Once you assign attributes, users or applications can be filtered using attributes. For example, you can query all enterprise applications with a sensitivity level equal to high.
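As a sketch (again via the /beta Microsoft Graph endpoints; payload shapes may evolve), assigning the marketing/project attribute to a user and then filtering on it can look like:

PATCH https://graph.microsoft.com/beta/users/{user-id}
Content-Type: application/json

{
  "customSecurityAttributes": {
    "marketing": {
      "@odata.type": "#Microsoft.DirectoryServices.CustomSecurityAttributeValue",
      "project": "Cascade"
    }
  }
}

GET https://graph.microsoft.com/beta/users?$count=true&$select=displayName,customSecurityAttributes&$filter=customSecurityAttributes/marketing/project eq 'Cascade'
ConsistencyLevel: eventual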


 



 


Step 3: Delegate attribute management


There are four Azure AD built-in roles available to manage attributes: Attribute Definition Administrator, Attribute Assignment Administrator, Attribute Definition Reader, and Attribute Assignment Reader.


 



 


By default, Global Administrators and Global Readers are not able to create, read, or update the attributes. Global Administrators or Privileged Role Administrators need to assign the attribute management roles to other users, or to themselves, to manage attributes. You can assign these four roles at the tenant or attribute set scope. Assigning the roles at tenant scope allows you to delegate the management of all attribute sets. Assigning the roles at the attribute set scope allows you to delegate the management of the specific attribute set. Let me explain with an example.


 



 



  1. Xia is a Privileged Role Administrator, so she assigns herself the Attribute Definition Administrator role at the tenant scope. This allows her to create attribute sets.

  2. In the engineering department, Alice is responsible for defining attributes and Chandra is responsible for assigning attributes. Xia creates the engineering attribute set, then assigns Alice the Attribute Definition Administrator role and Chandra the Attribute Assignment Administrator role, both scoped to the engineering attribute set, so that Alice and Chandra have the least privilege they need.

  3. In the marketing department, Bob is responsible for defining and assigning attributes. Xia creates the marketing attribute set and assigns the Attribute Definition Administrator and Attribute Assignment Administrator roles to Bob.


 


Step 4: Achieve fine-grained access control with fewer Azure role assignments


Let’s build on our fictional example from the previous blog post on ABAC conditions in Azure Role Assignments. Bob is an Azure subscription owner for the sales team at Contoso Corporation, a home improvement chain that sells items across lighting, appliances, and thousands of other categories. Daily sales reports across these categories are stored in an Azure storage container for that day (2021-03-24, for example), so the central finance team members can more easily access the reports. Charlie is the sales manager for the lighting category and needs to be able to read the sales reports for the lighting category in any storage container, but not other categories.


 


With resource attributes (for example, blob index tags) alone, Bob needs to create one role assignment for Charlie and add a condition to restrict read access to blobs with a blob index tag “category = lighting”. Bob needs to create as many role assignments as there are users like Charlie. With user attributes along with resource attributes, Bob can create one role assignment, covering all users in an Azure AD group, and add an ABAC condition that requires a user’s category attribute value to match the blob’s category tag value. Xia, the Azure AD admin, creates an attribute set “contosocentralfinance” and assigns Bob the Azure AD Attribute Definition Administrator and Attribute Assignment Administrator roles for the attribute set, giving Bob the least privilege he needs to do his job. The picture below illustrates the scenario.


 


[Image: one role assignment with a user-attribute condition replacing many per-user role assignments]


 


 


Bob writes the following condition in ABAC condition builder using user and resource attributes:
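A sketch of what that condition can look like (assuming a blob index tag named category and a custom security attribute named category in the contosocentralfinance attribute set; the exact expression produced by the condition builder may differ):

(
  (
    !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
  )
  OR
  (
    @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:category<$key_case_sensitive$>]
    StringEquals
    @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:contosocentralfinance_category]
  )
)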


 



 


To summarize, user attributes, resource attributes, and ABAC conditions allow you to manage access to millions of Azure storage blobs with as few as one role assignment!


 


Auditing and tools


Since attributes can contain sensitive information and allow or deny access, activity related to defining, assigning, and unassigning attributes is recorded in Azure AD Audit logs. You can use PowerShell or Microsoft Graph APIs in addition to the portal to manage and automate tasks related to attributes. You can use Azure CLI, PowerShell, or Azure Resource Manager templates and Azure REST APIs to manage ABAC conditions in Azure Role Assignments.


 


Resources


We have several examples with sample conditions to help you get started. The Contoso corporation example demonstrates how ABAC conditions can scale access control for scenarios related to Azure storage blobs. You can read the Azure AD docs, how-to’s, and troubleshooting guides to get started.


 


We look forward to hearing your feedback on Azure AD custom security attributes and ABAC conditions for Azure storage. Stay tuned to this blog to learn about how you can use custom security attributes in Azure AD Conditional Access. We welcome your input and ideas for future scenarios.


 


 


 


Learn more about Microsoft identity:


New Microsoft Teams Essentials is built for small businesses

This article is contributed. See the original author and article here.

Perhaps no one has been hit harder over the past 20 months than small businesses. To adapt and thrive in this new normal, small businesses need comprehensive solutions that are designed specifically for them and their unique needs.

The post New Microsoft Teams Essentials is built for small businesses appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Ignite 2021 – The do not miss list for app developers

This article is contributed. See the original author and article here.

Microsoft Ignite 2021 took place online November 2-5. This fall edition was full of dev news, and if you don’t want to miss anything related to App development and innovation, keep reading!


 


Session Highlights


 


What does it take to build the next innovative app?


 


During the Digital and App Innovation Into Focus session, Ashmi Chokshi and Developer and IT guests Amanda Silver, Donovan Brown and Rick Claus discussed processes and strategies to help deliver innovative capabilities faster.


 


Developers are driving innovation everywhere, and Ashmi started the conversation by sharing how she sees the opportunity to drive impact. Then, Donovan presented his definition of cloud native and the benefits of a microservices architecture, and engaged in a discussion with Rick around DevOps and Chaos Engineering. This session also discussed how to transform and modernize your existing .NET and Java applications. Amanda Silver concluded with a demo showing how tools like GitHub Actions, Codespaces and Playwright can help with development, testing and CI/CD, no matter what language and framework you are using.


 


The sketch below illustrates the cloud-native and DevOps segment showcasing the new public preview of Azure Container Apps, a fully managed serverless container service built for microservices that scales dynamically based on HTTP traffic, events or long-running background jobs.


 


[Sketchnote: the cloud-native and DevOps segment]


 


To dive deeper into the latest innovation on containers and serverless for creating microservices applications on Azure, don’t miss Jeff Hollan and Phil Gibson’s session, where they demoed Azure Container Apps and the Open Service Mesh (OSM) add-on for Azure Kubernetes Service, a lightweight and extensible cloud-native open-source service mesh built on the CNCF Envoy project. Brendan Burns, Microsoft CVP of Azure Compute, also shared his views on how Microsoft empowers developers to innovate with cloud-native and open source on Azure in this blog.


 


Another highlight of Ignite was the Build secure apps with collaborative DevSecOps practices session, followed by the Ask-the-Experts session, where Jessica Deen and Lavanya Kasarabada introduced a complete development solution that enables development teams to securely deliver cloud-native apps at DevOps speed with deep integrations between GitHub and Azure.


 


Announcements recap


 


In addition to Azure Container Apps and the Open Service Mesh add-on for AKS, we also announced new functionalities for Azure Communication Services, API Management, Logic Apps, Azure Web PubSub, Java on Azure container platforms and DevOps.


 



  • Azure Communication Services announced two upcoming improvements designed to enhance customer experiences across multiple platforms: Azure Communication Services interoperability with Microsoft Teams for anonymous meeting join, generally available in early December; and short code functionality for SMS, in preview later this month.


 



  • Regarding Azure Logic Apps, preview updates and general availability of Logic Apps Standard features were announced, covering SQL as storage provider, managed identity, automation tasks, the designer, consumption-to-standard export, and connectors.


 



 



 



 


The complete line up of Azure Application development sessions and blogs is listed below:


 


On-demand sessions:

Innovate anywhere from multicloud to edge with Scott Guthrie
Microsoft Into Focus: Digital & App Innovation with Amanda Silver, Donovan Brown, Ashmi Chokshi, Rick Claus, Ben Walters and Adam Yager
Innovate with cloud-native apps and open source on Azure with Phil Gibson and Jeff Hollan
Build secure apps with collaborative DevSecOps practices with Jessica Deen and Lavanya Kasarabada (and the Ask-the-Experts session)
Deep Dive on new container hosting options on Azure App Service and App Service Environment v3 with Stefan Schackow
Modernize enterprise Java applications and messaging with Java EE/Jakarta EE on Azure and Azure Service Bus with Edward Burns
Updates on Migrating to Azure App Service with Rahul Gupta, Kristina Halfdane, Gaurav Seth
Scaling Unreal Engine in Azure with Pixel Streaming and Integrating Azure Digital Twins with Steve Busby, Erik Jansen, Maurizio Sciglio, Aaron Sternberg, David Weir-McCall
Enterprise Integration: Success Stories & Best Practices with Derek Li
Build a basic cloud-native service using PostgreSQL and Node.js with Scott Coulton, Glaucia Lemos
Programming Essentials for Beginners with Cecil Phillip
Low Code, No Code, No Problem – A Beginner’s Guide to Power Platform with Chloe Condon



 


Blog posts:

Your hybrid, multicloud, and edge strategy just got better with Azure by Kathleen Mitford
Innovate with cloud-native apps and open source on Azure by Brendan Burns
Introducing Azure Container Apps: a serverless container service for running modern apps at scale by Daria Grigoriu
Announcing Public Preview of the Open Service Mesh (OSM) AKS add-on by Phil Gibson
Ignite 2021: New releases for Azure Communication Services designed to enhance customer experiences by Kristin Dunning
Power Apps – Pay-as-you-go Model, Standalone Mobile App Packages & Azure Integration by Zachary Cavanell
Build secure apps on hardened dev environments with secure DevOps workflows by Samit Jhaveri
Announcing the Public preview of Azure Chaos Studio by John Engel-Kemnetz
Putting tools in your hands to improve developer productivity by Alison Yu
What’s new in Azure App Service – Fall Ignite 2021 Edition!
Run Oracle WebLogic Server on Azure Kubernetes Service by Reza Rahman
Run IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift and Azure Kubernetes Service by Reza Rahman
Announcing Unreal Pixel Streaming in Azure by James Gwertzman



 


Additional learn resources: 



  • Each session has a curated Microsoft learn collection with learn modules and paths, e-books recommendations, related blog posts etc. Here is the collection for Digital and App Innovation IntoFocus session: https://aka.ms/intofocus-digital-apps

  • We also just released the new 2021 edition of the Developer’s Guide to Azure, free for you to download! 


 


We can’t wait to see what you create!

Stretching the IoT Edge performance limits

This article is contributed. See the original author and article here.

I had a customer streaming messages at a high rate (up to 2000 msg/s – 1KB each) from a protocol translator running on an x86 industrial PC to a cloud-based Mosquitto MQTT broker.


That edge device evolved quickly into a more capable and secure intelligent edge solution thanks to Azure IoT Edge and Azure IoT Hub, adding device provisioning (secured with an HW-based identity) and device management capabilities on top of a bi-directional communication, along with the deployment, execution, and monitoring of other edge workloads in addition to a containerized version of the original MQTT protocol translator.


The performance requirement did not change though: the protocol translator (running now as an Azure IoT Edge module) still had to ingest and deliver to the IoT Hub up to 2000 msg/s (1KB each), with a minimum latency.


 



 


Is it feasible? Can an IoT Edge solution stream 2000 msg/s or even higher rates? What’s the upper limit? How to minimize the latency?


This blog post will guide you through a detailed analysis of the pitfalls and bottlenecks when dealing with high-rate streams, to eventually show you how to optimize your IoT Edge solution and meet and exceed your performance requirements in terms of rate, throughput, and latency.


 


Topics covered: the message queue, the clean-up process, the built-in metrics, the analysis (rates, queue length, latency), message batching, a look at the Azure IoT Device SDKs, and the tools used.



 


The message queue


The Azure IoT Edge runtime includes a module named edgeHub, acting as a local proxy for the IoT Hub and as a message broker for the connected devices and modules.


The edgeHub supports extended offline operation: if the connection to the IoT Hub is lost, edgeHub saves messages and twin updates in a local message queue (aka “store&forward” queue). Once the connection is re-established, it synchronizes all the data with the IoT Hub.


The environment variable “UsePersistentStorage” controls whether the message queue is:



  • stored in-memory (UsePersistentStorage=false)

  • persisted on disk (UsePersistentStorage=true, which is the default)



 



 


When persisted on disk (default), the location of the queue will be:



  • the path you specified in the edgeHub HostConfig options in the deployment manifest as per here

  • …or in the Docker’s OVERLAY folder if you didn’t do any explicit bind, which is:
    /var/lib/docker/overlay2


The size of the queue is not capped, and it will grow as long as the device has storage capacity.


When dealing with a high message rate over a long period, the queue size could easily exceed the available storage capacity and cause the OS crash.


 


How to prevent the OS crash?


Binding the edgeHub folder to a dedicated partition, or even a dedicated disk if available, would protect the OS from uncontrolled growth of the edgeHub queue.
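As a sketch, in the deployment manifest you can point the edgeHub storageFolder environment variable at a container path and bind it to a host folder on the dedicated partition (the host path /srv/edgehub is an assumption; adjust it to your partition layout, and make sure the folder is writable by the edgeHub container user):

"edgeHub": {
  "env": {
    "storageFolder": { "value": "/iotedge/storage" }
  },
  "settings": {
    "image": "mcr.microsoft.com/azureiotedge-hub:1.2",
    "createOptions": "{\"HostConfig\":{\"Binds\":[\"/srv/edgehub:/iotedge/storage\"]}}"
  }
}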


 



 


If the “DATA” partition (or disk) runs out of space:



  • the OS won’t crash…

  • …but edgeHub container will crash anyways!


 


How to prevent the edgeHub crash?


Do size the partition for the worst-case or reduce the Time-To-Live (TTL).


I will let you judge what’s the worst case in your scenario. But the very worst case is total disconnection for the entire TTL. During the TTL (which is 7200 s = 2 hrs. by default), the queue will accumulate all the incoming messages at a given rate and size. And be aware that the edgeHub keeps one queue per endpoint and per priority.


An estimation of the queue size on disk would be:


 


queue size [GB] ≈ rate [msg/s] × message size [KB/msg] × TTL [s] / 10^6


 


And if you do the math, a “high” rate of 2000 [msg/s] with 1 [KB/msg] could easily consume almost 15 GB of storage within 2 hrs: 2000 × 1 × 7200 / 10^6 ≈ 14.4 GB.


 



 


But even a “low” 100 [msg/s] rate could easily consume up to 1 GB (100 × 1 × 7200 / 10^6 ≈ 0.72 GB), which would be an issue on constrained devices with embedded storage of a few GBs.


 



 


Then, to keep the disk consumption under control:



  • do assess the application requirements and what the “worst case” means in your scenario

  • do some simple math to estimate the max size consumed by the edgeHub queue and size the partition/disk accordingly…

  • …and fine-tune the TTL (see the manifest excerpt below)
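The TTL is set in the edgeHub desired properties of the deployment manifest; for example, to lower it from the default 7200 s to 1 hour (routes omitted for brevity):

"$edgeHub": {
  "properties.desired": {
    "schemaVersion": "1.2",
    "storeAndForwardConfiguration": {
      "timeToLiveSecs": 3600
    }
  }
}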


If you keep the queue disk consumption under control with proper estimation and sizing, you don’t need to bind it to a dedicated partition/disk. But it’s an extra precaution that comes with almost no effort.


 


Btw: if you considered setting UsePersistentStorage=false to store the queue in memory, you may realize now that the amount of RAM needed would make it an expensive option compared to disk, or non-viable at all. Moreover, such an in-memory store would NOT be resilient to unexpected crashes or reboots (as the “EnableNonPersistentStorageBackup” can back up and restore the in-memory queue only when you go through a graceful shutdown and reboot).


 



 


The clean-up process


What happens to expired messages?


Expired messages are removed every 30 minutes by default, but you can tune that interval with the MessageCleanupIntervalSecs environment variable.


 



 


If you use different priorities (LINK), do set “CheckEntireQueueOnCleanup”=true to force a deep clean-up and make sure that all expired messages are removed, regardless of the priority.


 



 


Why? The edgeHub keeps one queue per endpoint and per priority (but not per TTL).


If you have 2 routes with the same endpoint and priority but different TTL, those messages will be put in the same queue. In that case, it is possible that the messages with different TTLs are interleaved in that queue. When the cleanup processor runs, by default, it checks the messages on the head of each queue to see if they are expired. It does not check the entire queue, to keep the cleanup process as light as possible. If you want it to clean up all messages with an expired TTL, you can set the flag CheckEntireQueueOnCleanup to true.
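Both settings are plain environment variables on the edgeHub module; for example (the values here are illustrative):

"edgeHub": {
  "env": {
    "MessageCleanupIntervalSecs": { "value": "600" },
    "CheckEntireQueueOnCleanup": { "value": "true" }
  }
}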


 


The built-in metrics


Now that you have the disk consumption of your edgeHub queue under control, it’s a good practice to keep it monitored using the edgeAgent and edgeHub built-in metrics and the Azure Monitor integration.


 



 


The “edgeAgent_available_disk_space_bytes” reports the amount of space left on the disk.


…but there’s another metric you should pay attention to: edgehub_queue_length, which counts the number of non-expired messages still in the queue (i.e. not yet delivered to the destination endpoint):



 


That “edgehub_queue_length” is a revelation, and it explains how the latency relates to the rates. But to understand it, we must measure the message rate along the pipeline first.


 


The analysis


How to measure the rates and the queue length


I developed IotEdgePerf, a simple framework including:



  • a transmitter edge module (1), to be deployed on the edge device under test. It will generate a burst of messages and measure the actual output rate (at “A”)

  • an ASA query (2), to measure the IoT Hub ingress rate (at “B”) and the end-2-end latency (“A” to “B”)

  • a console app (3), to coordinate the entire test, to collect the results from the ASA job and show the stats


Further instructions in the IotEdgePerf GitHub repo.
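Just to give an idea of the measurement at “B”, a minimal ASA query counting the ingress rate over 1-second windows could look like this (a sketch: it assumes an input alias named iothub; IotEdgePerf ships its own, more complete query):

SELECT
    System.Timestamp() AS windowEnd,
    COUNT(*) AS msgPerSecond
INTO
    output
FROM
    iothub TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY
    TumblingWindow(second, 1)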


 



 


I deployed the IotEdgePerf transmitter module to an IoT Edge 1.2 (instructions here) running on a DS2-v2 VM and connected to a 1xS3 IoT Hub. I launched the test from the console app as follows:


 


dotnet run -- \
  --payload-length=1024 \
  --burst-length=50000 \
  --target-rate=2000


 


Here are the results:



 



  • actual transmitter output rate (at “A”): 1169 msg/s, against the desired 2000 msg/s

  • IoT Hub ingestion rate: 591 msg/s (at “B”)

  • latency: 42s (“C”)


As anticipated, the “edgehub_queue_length” explains why we have the latency. Let’s have a look at it using Log Analytics:
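A sketch of the kind of KQL query you can use (assuming the built-in metrics are collected to Log Analytics and land in the InsightsMetrics table, as done by the metrics-collector integration):

InsightsMetrics
| where Name == "edgehub_queue_length"
| project TimeGenerated, Val
| render timechart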


 



 


Let’s correlate the queue length with the transmission burst: as the queue is a FIFO (First-In-First-Out), the last message produced by the transmitter is the last message ingested by the IoT Hub. Looking at “edgehub_queue_length” data, the latency on the last message is 42 seconds.


 



 


How do the queue’s growth and degrowth slopes and its maximum value relate to the message rates?


 




  • first, during the burst transmission, the queue grows with a rate:

    queue growth rate = rateIN − rateOUT ≈ 1169 − 591 ≈ 578 [msg/s]

    which is in line with what you would expect from a queue filled at rateIN and drained at rateOUT

  • then, once the transmission is over, the queue decreases with a rate:

    queue degrowth rate = rateOUT ≈ 591 [msg/s]


 


The consistency among the different measurements (on the message rate, queue growth/degrowth and latency) proves that the methodology and tools are correct.


 


Minimum latency


Using some simple math, we can express the latency as:


latency = N × (1/rateOUT − 1/rateIN)


 


where N is the number of messages.


If we apply that equation to the numbers we measured, again we can get a perfect match:



latency = 50000 × (1/591 − 1/1169) ≈ 42 [s]


 


The latency will be minimum when rateOUT = rateIN, i.e. when the upstream rate equals the source output rate, and the queue does not accumulate messages. This is quite an obvious outcome, but now you have a methodology and tools to measure and relate to each other the rates, the latency, and the queue length (and the disk consumption as well).


 


Looking for bottlenecks


Let’s go back to the original goal of a sustained 2000 msg/s rate delivered upstream with minimum latency. We are now able to measure both the source output and the upstream rate, and to tell what performance gap we must fill to assure minimum latency:



  • the source output rate should increase from 1169 to 2000 msg/s (A)

  • the upstream rate should increase from 591 to 2000 msg/s (B)





But… how to fill that gap? What’s the bottleneck? Do I need a faster CPU or more cores? More RAM? Or is the networking the bottleneck? Or are we hitting some throttling limits on the IoT Hub?


 


Scaling UP the hardware


Let’s try more “powerful” hardware.

Even if IoT Edge will usually run on physical HW, let’s use IoT Edge on Azure VMs, which provide a convenient way of testing different sizes in a repeatable way and comparing the results consistently.

I measured the baseline performance of the DSx v2 VMs (general purpose with premium storage) sending 300K messages of 1 KB each using IotEdgePerf:

VM SIZE           SPECS (vCPU / RAM / SCORE)
Standard_DS1_v2   1 vcpu / 3.5 GB / ~20
Standard_DS2_v2   2 vcpu / 7 GB / ~40
Standard_DS3_v2   4 vcpu / 7 GB / ~75
Standard_DS4_v2   8 vcpu / 14 GB / ~140
Standard_DS5_v2   16 vcpu / 56 GB / ~300

Source rate: 900 ÷ 1300 [msg/s]; upstream rate: 500 ÷ 600 [msg/s] (across all sizes)

(test conditions: 1xS3 IoT Hub unit, this C# transmitter module, 300K msg, msg size 1KB)


Scaling UP from a DS1 to a DS5, the source rate increases by only around 50%… which is peanuts if we consider that a DS5 performs ~15x better (scores here) and costs ~16x more (prices here) than a DS1.


Even more interestingly, the upstream rate does not increase, suggesting there’s a weak correlation (or no correlation at all) with the HW specs.


 


Scaling OUT the source


Let’s distribute the source stream across multiple modules in a kind of scaling OUT, and look at the aggregated rate produced by all the modules.




The maximum aggregated source rate is ~1900 msg/s (obtained with N=3 modules), higher than the rate of a single module (~1260 msg/s): a gain of ~50%. However, the improvement is not worth the added complexity of distributing the source stream across multiple source modules.


Interestingly, the upstream rate increases from ~600 to ~1436 msg/s. Why?


The edgeHub can use either the AMQP or the MQTT protocol to communicate upstream with the cloud, independently from the protocols used by downstream devices. The AMQP protocol provides multiplexing capabilities that allow the edgeHub to combine multiple downstream logical connections (i.e. the many modules) into a single upstream connection, which is more efficient and leads to the rate improvement that we measured. AMQP is indeed the default upstream protocol and the one I used in this test. This also confirms that the upstream rate is mostly determined by the protocol stack overhead.


Many cores with many modules


Scaling UP from a 1 vCPU machine (DS1) to a 16 vCPU (DS5) machine didn’t help when using a single source module. But what if we have multiple source modules? Would many cores bring an advantage?



Yes. Multiple modules mean multiple Docker containers and eventually multiple processes, which will run more efficiently on a multi-core machine.


But is the 3.5x boost worth the 16x price increase of a DS5 vs DS1? No, if you consider that the upstream, again, didn’t increase.


 


Increasing the message size: from rate to throughput


Let’s go back to a single module and increase the message size from 1 KB to 32 KB on a DS1v2 (1 vcpu, 3.5GB). Here are the results:



(test conditions: 1xS3 IoT Hub unit, this C# transmitter module)


The rate decreases from 839 msg/s @ 1KB down to 227 msg/s @ 32KB, but the THROUGHPUT increases from ~0.8 MB/s up to ~7.2 MB/s. Such behavior suggests that sending bigger messages at a lower rate is more efficient!


How to leverage it?


Let’s assume that the big 16KB message is a batch of 16 x 1 KB messages: it means that the ~4.5MB/s throughput would be equivalent to a message rate of ~4500 msg/s of 1KB each.


Then, message rate and throughput are not the same thing. If the transport protocol (and networking) is the bottleneck, higher throughput can be achieved by sending bigger messages at a lower rate.


How do we implement message batching then?


 


Message Batching


We have two options:



  • Application-level batching: the batching is done in the source module, whereas the downstream service extracts the original individual messages. This requires custom logic at both ends (see the sketch after this list).



 



  • edgeHub built-in batching: the batching is managed by the edgeHub and the IoT Hub automatically and in a transparent way, without the need for any additional code. The ENV variable “MaxUpstreamBatchSize” sets the max number of messages the edgeHub will batch together into a single message of up to 256 KB.
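Here is a minimal C# sketch of the application-level batching mentioned above (assumptions: JSON payloads of ~1 KB each and a module output named output1; this is not the exact code used for the tests):

// Packs many small JSON messages into a single JSON-array message,
// to be unpacked by the downstream service.
using System.Collections.Generic;
using System.Text;
using Microsoft.Azure.Devices.Client;

public static class Batcher
{
    // Keep the resulting payload below the 256 KB IoT Hub message limit.
    public static Message Pack(IEnumerable<string> jsonMessages)
    {
        string batch = "[" + string.Join(",", jsonMessages) + "]";
        return new Message(Encoding.UTF8.GetBytes(batch))
        {
            ContentType = "application/json",
            ContentEncoding = "utf-8"
        };
    }
}

// usage, from within a module:
//   var client = await ModuleClient.CreateFromEnvironmentAsync();
//   await client.SendEventAsync("output1", Batcher.Pack(messages));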


edgeHub built-in batching deep dive


The default is MaxUpstreamBatchSize = 10, meaning that some batching is already happening under the covers, even if you didn’t realize it. The optimal value for MaxUpstreamBatchSize would be 256 KB / size(msg), as you would want to fit as many small messages as you can in the batch message.
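For example, with 1 KB messages the optimal value would be 256 KB / 1 KB = 256, which you can set as an edgeHub environment variable in the deployment manifest:

"edgeHub": {
  "env": {
    "MaxUpstreamBatchSize": { "value": "256" }
  }
}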


How does it work?


Set the edgeHub RuntimeLogLevel to DEBUG and look for lines containing “obtained next batch for endpoint iothub”:



Looking at the timestamps, you’ll see that messages are collected and sent upstream in a batch every 20ms, suggesting that:



  • the latency introduced by this built-in batching is negligible (< 20ms)

  • this mechanism is effective only when the input rate is > 1/(20 ms) = 50 msg/s


 


Comparison of the batching options


Are the application-level and built-in batching equivalent?




On the UPSTREAM side, you have batch messages (i.e. big and low-rate) in both cases. It means that:



  • you pay for the batch size divided by 4KB

  • and each batch message counts as a single d2c operation


The latter point is quite interesting: as the message batch counts as 1 device-to-cloud operation, batching also helps reduce the pressure on the d2c throttling limit of the IoT Hub, which is an attention point when dealing with high message rates.



On the SOURCE side, the two approaches are very different:



  • application-level batching is sending batch messages (i.e. big, and low-rate)

  • …while the built-in batching still sends the individual high-rate small messages on the source side, which is potentially less efficient (think of the IOPS on the disk, for instance, and the transport protocol used to publish the messages to the edgeHub broker).


Eventually, we can state that:



  • application-level batching is efficient end-2-end (i.e. source and upstream)…

  • …while the built-in batching is efficient on the upstream only


Let’s test that assumption.


Built-in batching performance


On a DS2v2 VM, the max output rate of the source module is ~1100 [msg/s] (with 1KB messages).


As expected, such source rate does not benefit from higher MaxUpstreamBatchSize values, while the upstream does, and it eventually equals the 1100 [msg/s] source rate (hence no latency).




 


Application-level batching performance


The application-level batching is effective on both the source and upstream throughput.




On a DS1 (1 vCPU, 3.5GB of RAM, which is the smallest size of the DSv2 family) you can achieve:



  • a sustained ~3600 [KB/s] end-to-end with no latency (i.e. on both source and upstream) with a msg size of 8KB…

  • …while with a msg size of 32KB, you can increase the source throughput up to 7300 [KB/s], but the upstream is capped at around 4200 KB/s. That will cause some latency.




Is latency always a problem? Not really. As we saw, the latency is proportional to the number of messages sent (N):


latency = N × (1/rateOUT − 1/rateIN)


 


When sending a short burst of messages, the latency may be negligible. On the other hand, on short bursts you may want to minimize the transmission duration.


As an example, let’s assume you want to send N=1000 messages, of 1 KB each. Depending on the batching, the transmission duration at the source side will be:



  • no batching: transmission duration = 1000 / 872 ~ 1.1s (with no latency)

  • 8KB batching: transmission duration = 1000 / 3648 ~ 0.3s (with no latency)

  • 32KB batching: transmission duration = 1000 / 7328 ~ 0.1s (+0.2s of latency)


Shortening the transmission duration from a 1.1s down to 0.1s could be critical in battery powered applications, or when the device must send some information upon an unexpected loss of power.


 


Azure IoT Device SDKs performance comparison


The performance results discussed in this blog post were obtained using this C# module, which leverages the .NET Azure IoT Device SDK.


How do other SDKs (Python, Node.js, Java, C) perform in terms of maximum message rate and throughput? The performance gap, if any, would be due to:



  • language performance (C is compiled ahead of time, whereas the others are interpreted or JIT-compiled)

  • specific SDK architecture and implementation


As an example of the latter, the JAVA SDK differs from other SDKs as the device/module client embeds a queue and a periodic thread that checks for queued messages. Both the thread period (“receivePeriodInMilliseconds”) and the number of messages de-queued per execution (“SetMaxMessagesSentPerThread”) can be tweaked to increase the output rate.


On the other hand, what is the common denominator to all the SDKs? It’s the transport protocol, which ultimately sets the upper bound of the achievable performance. In this blog post we focused on MQTT, and it would be worth exploring the performance upper bound by using a fast and lightweight MQTT client instead of the SDK. That’s doable, and it’s explained here.


A performance comparison among the different Device SDKs, as well as an MQTT client, will be the topic of a future blog post.


 


Tools


VM provisioning script


A bash script to spin up a VM with Ubuntu Server 18.04 and a fully provisioned Azure IoT Edge ready-to-go: https://github.com/arlotito/vm-iotedge-provision



IotEdgePerf


A framework and a CLI tool to measure the rate, the throughput and end-to-end latency of an Azure IoT Edge device:


https://github.com/arlotito/IotEdgePerf




Conclusion


This blog post provided a detailed analysis of the pitfalls and bottlenecks when dealing with high-rate streams, and showed you how to optimize your IoT Edge solution to meet and exceed your performance requirements in terms of rate, throughput, and latency.


On an Azure DS1v2 Virtual Machine, we were able to meet and exceed the original performance target of 2000 msg/s (1KB each) and minimum latency, and we achieved a sustained end-2-end throughput of 3600 KB/s with no latency, or up to 7300 msg/s (1KB each) with some latency.


With the methodology and tools discussed in this blog post, you can assess the performance baseline on your specific platform and eventually optimize it using the built-in or application-level message batching.


In a nutshell:



  • to avoid the OS and edgeHub crash:

    • do estimate the maximum queue length and do size the partition/disk accordingly

    • adjust the TTL

    • keep the queue monitored using the built-in metrics



  • measure the baseline performance (rate, throughput, latency) of your platform (using IotEdgePerf) and identify the bottlenecks (source module? Upstream?)

  • if the bottleneck is the upstream, leverage the built-in batching by tuning the MaxUpstreamBatchSize

  • if the bottleneck is the source module, use application-level batching

  • possibly try a different SDK or a low-level MQTT client for maximum performance


Acknowledgements


Special thanks to the Azure IoT Edge team (Venkat Yalla and Varun Puranik), the Industry Solutions team (Simone Banchieri, Stefano Causarano and Franco Salmoiraghi), the IoT CSU (Vitaliy Slepakov and Michiel van Schaik) and Olivier Bloch for the support and the many inspiring conversations.