Azure Marketplace new offers – December 13, 2022


We continue to expand the Azure Marketplace ecosystem. For this volume, 108 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace



Acalvio ShadowPlex: ShadowPlex Cyber Deception from Acalvio provides early detection of advanced cybersecurity threats with precision and speed. The AI-driven solution detects deception across on-premises and cloud workloads.



Alpine Linux 3.16: Ntegral has packaged Alpine Linux 3.16, a lightweight security-oriented distribution based on musl libc, BusyBox, and OpenRC, to enable you to enjoy the principles of simple, small, and secure.



BizRep SaaS: SETRIO’s BizRep delivers a mobile productivity and CRM solution for sales representatives in the medical and pharmaceutical industries. This Azure-based offering provides a mobile app, a web-based interface for managers, and easy integration with ERP systems.



BlackArch Essentials: Ntegral provides this prepackaged virtual machine that includes BlackArch Linux, an Arch-based distribution intended for advanced penetration testing, security auditing, and other information security tasks.



Circle Linux 8.6: This prepackaged virtual machine from Ntegral contains Circle Linux 8.6, a CentOS alternative designed for enterprise cloud environments with workloads including Node.js, web apps, and database systems.



Cloudanix Security – CSPM, CIEM, and CWPP: Cloudanix gives you a single dashboard to unify cloud infrastructure security by using its cloud security posture management (CSPM), CIEM (cloud infrastructure entitlement management), and CWPP (cloud workload protection platform).



CodeIgniter 4 Framework: Get this preconfigured virtual machine (VM) from tunnelbiz Studio for a simple, elegant toolkit for creating full-featured web applications using PHP and CodeIgniter 4. This VM includes Rocky Linux 9 with Nginx and PHP 7.



Data Rooms Next-generation Conversational Data Rooms: Dropvault Data Rooms provide secure and encrypted conversation rooms for collaboration and management of sensitive information, backed by custom storage on Microsoft Azure or on-premises.



DataOps for Snowflake: DataOps.live delivers a single platform to build, test, and deploy apps and products for all your Snowflake DataOps needs. DataOps provides end-to-end orchestration for a heterogeneous data environment.



Debian 11 Server: Cloud Infrastructure Services has configured this virtual machine with Debian 11, codenamed bullseye. Debian is a popular Linux distribution used as a server for development, cluster systems, storage servers, and more.



Demand Forecasting: Co.dx’s Growth Maximizer app uses no-code AutoML powered by Azure Machine Learning and Azure Databricks to deliver self-service demand forecasting of SKUs at scale while also providing transparency into forecasts.



Django Framework: This virtual machine from tunnelbiz Studio provides the Django web framework for Python on Rocky Linux. Django encourages rapid development and clean, pragmatic design while helping developers avoid common security mistakes.



Driving License Extractor: Nexus Frontier’s image contains Driving License Extractor to run on Azure Virtual Machine for a convenient, AI-powered solution that extracts data from UK driving licenses in real time.



Drupal: Data Science Dojo has configured this virtual machine with Drupal on Ubuntu 20.04 LTS. Drupal is an open-source CMS designed to help you build versatile, structured content and dynamic web experiences.



External User Manager: Solutions2Share’s External User Manager helps you better control data and communications in Microsoft Teams through approval workflows and monitoring for guest and external access to your environment.



IP Location API – City, Country, Currency, and Maps: Fastah IP Location is a REST API that lets you easily map IP addresses to useful metadata including country, city, approximate geo-location coordinates, currency, and time zone.



Juniper Paragon Automation as a Service: Marconi Wireless’s Paragon Automation portfolio of cloud-native software apps integrates with Azure Active Directory and automates your network infrastructure planning, monitoring, provisioning, and optimization.



KubeLift Solo: KubeLift Solo from Polverio delivers a development and test environment on a single-node Kubernetes cluster in an Azure-optimized virtual machine (VM). Ready out of the box, the VM lets you take advantage of Azure’s ability to scale.



Navy Linux 8.6: Ntegral has configured and certified this Azure virtual machine image containing Navy Linux 8.6. Navy Linux is an open-source alternative to CentOS that is compatible with Red Hat Enterprise Linux.



Parrot OS Essentials: Parrot OS Essentials is a virtual machine image (VMI) packaged by Ntegral. This command-line only VMI includes Parrot OS Linux, a Debian-like distribution designed for penetration testing and security auditing.



PostgreSQL 14 on Alpine Linux 3.16: Ntegral has configured this virtual machine image containing PostgreSQL 14 on Alpine Linux 3.16. PostgreSQL is a free, open-source relational database system built for enterprise-class workloads.



QuestDB: Data Science Dojo has packaged this virtual machine image containing QuestDB on Ubuntu 20.04 LTS. Built specifically for time-series data, QuestDB is an open-source, SQL database that supports parallelized vectorized execution.



Redis 7.0.5 on Alpine Linux 3.16: Ntegral has configured this virtual machine image containing Redis 7.0.5 on Alpine Linux 3.16. A NoSQL database, Redis provides a flexible, in-memory data structure store that facilitates the storage of large amounts of data.



SecureBox: EcoLink Technology’s SecureBox app for Microsoft Teams lets you quickly secure plaintext information by using AES encryption, decrypting the information only when opened.



SFTP – OpenSSH FTP on SUSE Linux Enterprise Server 12 Minimal: Art Group has configured this image containing OpenSSH FTP on SUSE Linux Enterprise Server 12. OpenSSH FTP is a secure FTP server that provides high levels of access control and encrypted file transmission.



SFTP – OpenSSH FTP on SUSE Linux Enterprise Server 15 Minimal: Art Group has configured this image containing OpenSSH FTP on SUSE Linux Enterprise Server 15. OpenSSH FTP is a secure FTP server that provides high levels of access control and encrypted file transmission.



SFTP – OpenSSH FTP Server on CentOS Linux 7.9 Minimal: Art Group has configured this image containing OpenSSH FTP on CentOS 7.9 Linux. OpenSSH FTP is a secure FTP server that provides high levels of access control and encrypted file transmission.



SFTP – OpenSSH FTP Server on CentOS Stream Linux 8 Minimal: Art Group has configured this image containing OpenSSH FTP on CentOS Stream 8 Linux. OpenSSH FTP is a secure FTP server that provides high levels of access control and encrypted file transmission.



SFTP – OpenSSH FTP Server on Debian 10 Minimal: Art Group has configured this image containing OpenSSH FTP on Debian 10 Linux. OpenSSH FTP is a secure FTP server that provides high levels of access control and encrypted file transmission.



SFTP – OpenSSH FTP Server on Red Hat Enterprise Linux 8.6 Minimal: Art Group has configured this image containing OpenSSH FTP on Red Hat Enterprise Linux 8.6. OpenSSH FTP is a secure FTP server that provides high levels of access control and encrypted file transmission.



SFTP – OpenSSH FTP Server on Ubuntu 18.04 LTS Minimal: Art Group has configured this image containing OpenSSH FTP on Ubuntu 18.04 LTS. OpenSSH FTP is a secure FTP server that provides high levels of access control and encrypted file transmission.



SFTP – OpenSSH FTP Server on Ubuntu 20.04 LTS Minimal: Art Group has configured this image containing OpenSSH FTP on Ubuntu 20.04 LTS. OpenSSH FTP is a secure FTP server that provides high levels of access control and encrypted file transmission.



SFTP – OpenSSH FTP Server on Ubuntu 22.04 LTS Minimal: Art Group has configured this image containing OpenSSH FTP on Ubuntu 22.04 LTS. OpenSSH FTP is a secure FTP server that provides high levels of access control and encrypted file transmission.



SFTPGo – SFTP, HTTP/S, FTP/S to Azure Blob Storage: Built by the SFTPGo project, this virtual machine includes SFTPGo configured to use SQLite. SFTPGo supports SFTP and FTP/S services, including web file browsing and file sharing, backed by Azure Blob Storage.



TPP Validation & Confirmation (PSD2): Mobilitas Sweden AB offers an Azure-based service to help account servicing payment service providers manage their PSD2 regulatory processes in an automated, safe, and cost-effective manner.



Ubuntu 22.04 LTS – CIS Benchmark Level 1: Center for Internet Security (CIS) has configured this image of Ubuntu 22.04 LTS to meet CIS’s Level 1 benchmark, a vendor-agnostic, consensus-based configuration designed to comply with numerous cybersecurity frameworks.



WireGuard Server on CentOS Stream Linux 8 Minimal: Configured by Art Group, this virtual machine image contains WireGuard on CentOS Stream 8 Linux. WireGuard is a compact, easy-to-configure, fast, and secure VPN server.



ZenCRM Business Line – Sales Edition: InterZen’s ZenCRM Sales Edition provides an Azure-based, sales-focused CRM solution built around the lead-to-cash process. ZenCRM Sales Edition lets you manage contents, generate business proposals from templates, and more.



Go further with workshops, proofs of concept, and implementations



.NET App Service: 8-Week Implementation: Groove Technology will identify your .NET development challenges, create a functional specification document for integration of Azure services, and develop the solution to meet your desired outcomes.



AI Workshop: 3-Day Proof of Concept: Coppei Partners will work with your functional and technical leaders to understand and define high-impact AI use cases specific to your business, then build a targeted proof of concept by using Azure Cognitive Services and Azure ML Studio.



App and Database Modernization: 3-Day Workshop: In this workshop, Plain Concepts will help you identify the resources required to migrate and modernize your different workloads to Microsoft Azure. This workshop aligns to the Microsoft Cloud Adoption Framework for Azure.



App Modernization Kick-Start and Proof of Concept: Innofactor Finland will review your current processes and identify the value of modernizing your business apps by using low-code development and Microsoft Azure services. This offer includes a cost estimate for Azure consumption.



Aztek App and DB: 4-Week Migration: Aztek, part of R-Net D B LTD., will enable you to enjoy faster processing, simpler management, and flexible configurations for your apps and database migrations to Microsoft Azure. You’ll also get post-migration training and optimization.



Azure Landing Zone: 1-Week Implementation: Click2Cloud will implement a Microsoft Azure landing zone aligned with the Microsoft Azure Cloud Adoption Framework. The solution will utilize the latest Microsoft best practices to align with your business requirements.



Azure MLOps Prepackaged Pipeline: 4-Week Implementation: Orange Business Services will deliver a solution to retrain and deploy your machine learning models while guaranteeing test coverage, continuous integration, and model monitoring by using Azure machine learning operations (MLOps).



Azure Red Hat OpenShift App Dev Sprint: 1-Week Implementation: Red Hat will help you accelerate your app migration and modernization to Microsoft Azure Red Hat OpenShift, a turnkey Kubernetes solution that provides a highly available container platform as a service to improve your DevOps experience.



Backup as a Service with Microsoft Azure and Cohesity: 5-Day Workshop: eGroup will focus on hybrid cloud governance and security, including a design for data protection built on Microsoft Azure Backup and Cohesity. You’ll also receive a technical analysis and a total cost of ownership report.



Cloud Database Management Services: 3-Week Implementation: NCS will implement its Cloud Database as a Service portfolio to support your organization through monitoring, maintenance, performance management, backups, and more for databases hosted on Microsoft Azure.



CloudRiches EDA on Azure: 1-Month Proof of Concept: Available only in Chinese, this proof of concept from CloudRiches will show you how migrating EDA tools to Microsoft Azure will benefit your IC businesses by providing the flexibility to scale resources up or down.



DevOps: U-BTech offers consulting and adoption advice for DevOps processes on Microsoft Azure, including expert advice to manage, optimize, and secure your deployments for improved go-to-market capabilities.



DevSecOps Accelerator: 2-Week Implementation: Architech will implement its DevSecOps Accelerator to help you go to market faster with end-to-end capabilities on Microsoft Azure and GitHub, including CI/CD pipelines, security as code, integration pipelines, and more.



Integrated Security Monitoring Service for Microsoft 365 E3 / E5: Available only in Japanese, this monitoring service from PSC Security uses Microsoft Sentinel and Microsoft 365 E3 or E5 for security information and event management. This offer is available as an initial proof of concept.



Landing Zone Accelerator: 2-Week Implementation: Architech Solutions will implement its Landing Zone Accelerator on Microsoft Azure to enable your business to scale infrastructure quickly with SOC2 compliance for demanding markets.



Nutanix Clusters (NC2) on Azure: 1-Month Proof of Concept: eGroup will simplify your adoption of Nutanix Clusters (NC2) on Microsoft Azure, speeding your migration to hybrid, multi-cloud environments on the public cloud and enhancing your business resilience.



QuickBooks to Azure Migration: 5-Day Implementation: EFC Data will enable your business to grow faster by migrating your QuickBooks implementation to a well-architected solution built on Microsoft Azure for increased reliability, security, and backup protection.



SOC as a Service: Implementation and Operation: Available in Polish from Chmura Krajowa, this security operations center (SOC) as a service is based on Microsoft Sentinel and includes around-the-clock monitoring and support.



Sustainability: 3-Week Proof of Concept: Available in Spanish, Rico’s Sustainability Dashboard highlights sustainability scores for your company. This proof of concept also includes a Sustainability Roadmap, breaking down how to achieve outcomes by utilizing Microsoft Azure.



Windows 365: 2-Day Proof of Concept: ProServeIT will guide you through configuring and integrating Windows 365 into your Microsoft Azure environment, including deployment and management by using Microsoft Endpoint Manager.



Contact our partners



AMG – Adoption 365



ARVI – Aviation Weather Service



Ataccama ONE Data Quality



Ataccama ONE Master Data Management (MDM)



Azure Cloud: 2-Week Assessment + Migration



Azure Full-Stack Delivery for Business Innovation: Migration



Azure IoT Migration: Assessment



Azure Migration & Modernization: Briefing



Bank Integration



Bodhee Dynamic Production Scheduler



Candor Protect – Artificial Intelligence (AI) Penetration Testing as a Service



ClearVisibility



Cloud Managed Backup of Microsoft 365



CRAG NaaS



Cyber Defense Center (CDC) Security Operations Center



Data Platform in 30 Days



DeviceOn ePaper



Digital Customer Lifecycle Manager



ERP for Manufacturing



FranklyAI



Global Buyer/Supplier List API



Global Buyer/Supplier Profile API



Hybrid Cloud Observability



Identity Management Solution



Infor SunSystems Connector for Microsoft Power BI



Integrate ADP with Active Directory: 1-Week Implementation



Landed Cost Calculator API



LSP Services



Managed XDR Service



Microsoft Cloud for Financial Services: 1-Week Briefing



NetDocuments Connector for DocuSign SaaS Solution



Networking for Azure: 1-Day Assessment



Payara Cloud



PKI as a Service



Portfolio Carbon Navigator



Postgres Pro Standard Database 15 (Container)



SCIM and HR Onboarding Bridge for Azure Active Directory



SIOS DataKeeper



SIOS LifeKeeper



SUSE Linux Enterprise Server 15 SP4 for SAP – Hardened BYOS



Taxi Activity Automation



Tetrate Service Bridge (TSB)



The Omni-Channel Consumer Skincare Insight Solution



Transportame



Trend Micro Deep Security for Microsoft Sentinel



Trend Micro TippingPoint for Microsoft Sentinel



Trend Micro Vision One for Microsoft Sentinel



Volo Azure Landing Zone



Well-Architected Cloud Environment: 2-Week Trial



Zelly Web Hosting


CISA Releases Three Industrial Control Systems Advisories


CISA has released three (3) Industrial Control Systems (ICS) advisories on December 13, 2022. These advisories provide timely information about current security issues, vulnerabilities, and exploits surrounding ICS.

CISA encourages users and administrators to review the newly released ICS advisories for technical details and mitigations:

•    ICSA-22-347-01 ICONICS and Mitsubishi Electric Products
•    ICSA-22-347-02 Schneider Electric APC Easy UPS Online
•    ICSA-22-347-03 Contec CONPROSSYS HMI System (CHS)

Drive data architecture modernization with Denodo Platform on Azure Marketplace


In this guest blog post, Mitesh Shah, Director – Cloud Product Management at Denodo, discusses how Denodo Platform offerings via Microsoft Azure Marketplace can help modernize your cloud data architecture and accelerate cloud adoption.


 


Businesses that are modernizing data warehouses or data lakes using Microsoft Azure Synapse Analytics want simple, governed, more flexible hybrid cloud data integration. This is possible via a virtual data access/data delivery architecture powered by data virtualization, enabling real-time analytics with minimal data movement and zero data replication. Data virtualization is a modern, logical form of data integration; other approaches include extract, transform, and load (ETL) processes and change data capture (CDC).



No matter the size of the business, having a single source of truth is invaluable. Businesses have numerous tools and techniques at their disposal, making it more difficult to establish the right combination when it comes to the overall cost, time to implement, and skills required to establish a modern, digitally driven architecture as part of their cloud transformation journey.


 


The data virtualization approach integrates any kind of structured and unstructured data that is outside the data lake (Azure Synapse), across other cloud systems (in a multi-cloud deployment), or in a different geography to support compliance requirements in real time. This also eliminates the need for multiple tools to manage a variety of cloud services separately and provides the auto-scaling support to manage reporting and data integration workloads across a variety of data sources in a hybrid or multi-cloud environment.


 


Use cases for the data virtualization approach


Users normally think of their cloud migrations in terms of moving their data to the cloud. Yes, that’s an important aspect of any migration, and several bulk load tools can facilitate that. The challenge is minimizing any kind of business disruption to the numerous applications (business intelligence tools, application programming interface connectors, web services) that leverage the data sources that are migrating to the cloud, such as data warehouses or databases. Data virtualization can simplify migrations by introducing a virtual data access layer, which can not only minimize business disruption but also provide flexibility to organizations, enabling them to manage migrations at their own pace.



Here are some other example scenarios:



  • Customer 360/single view of the data: Data virtualization can help by integrating the data from newer applications as part of merger and acquisitions activity. Organizations can combine all data across Salesforce, Microsoft Dynamics 365, and more.

  • Marketing and website analytics: Combine/consolidate all marketing data (from Facebook, Twitter, Salesforce, Google Ads, LinkedIn) in one place, for better business insights.

  • Business intelligence/reporting: Data virtualization makes analytics more effective by integrating the right data sets across multiple data repositories (Azure Synapse, Azure Data Lake Storage, Azure SQL, Azure Cosmos DB).

  • Sales and social media analytics: Data virtualization enables seamless Dynamics 365 CRM integration with a variety of other applications, such as ServiceNow, Google Analytics, Twitter data, and Azure Data Lake Storage, composed of CSV/XLS/Parquet files to drive intelligent campaigns and empower sales cycles with faster conversions and closings.

  • Data marketplace and data as a service (DaaS or API management): Adopt a logical data architecture, which can help democratize all enterprise data while providing centralized control in a distributed data landscape. The web-based data catalog tool provides a single point for secure, enterprise-wide data access and governance. This facilitates the corporate data marketplace, which then provides visibility into the enterprise data ecosystem and enables data to be shared without compromising data security.


Common challenges to keep in mind
Businesses face a variety of challenges that could have a direct impact on their growth and investment:



  • IT resource constraints/cost management: Not having a full-fledged IT team is quite common, and this is especially challenging when organizations are trying to enable self-serviceability. Managing a tight budget in a competitive environment is equally challenging.

  • Siloed/disparate data: This is a common challenge, and while there are multiple ways to address it, such as building ETL pipelines and/or replicating data, many can result in high costs, latency, and other secondary challenges.

  • Data security/compliance in the cloud: Without a centralized security layer, enabling such capabilities as Active Directory integration, data masking, data encryption, and row-and-column-based authorization to support General Data Protection Regulation (GDPR) and other regulations can be a huge burden.


A self-service option in Azure Marketplace: Denodo Platform
Powered by data virtualization, the Denodo Platform addresses the key challenges above. It enhances data management in the cloud and streamlines many elements of this potentially complex journey.


 


Denodo recently announced two new offers, Denodo Professional and Denodo Standard, in Microsoft Azure Marketplace. They are targeted to mid-market users, to help them accelerate their SaaS integration and cloud adoption at an affordable cost using a pay-as-you-go model. Enterprise users can use Denodo Enterprise and Denodo Enterprise Plus offers to access advanced capabilities such as a full-featured data catalog and artificial intelligence/machine learning-powered query optimization for faster, more efficient BI reporting in the cloud.



Denodo is excited to offer a 30-day free trial of Denodo Professional via Azure Marketplace. After that, you pay only $6.27/hour for your data integration and management needs. Sign up to get in touch with our team about the offering, or you can start the free trial subscription directly from the marketplace. You can also take advantage of annual pricing via a private offer. All Denodo Platform offerings in Azure Marketplace are co-sell ready, and they enable you to start your cloud data integration journey in a self-service manner.

Azure Integration Services for Mainframes and Midranges Modernization Partners Survey


As we prepare the roadmap for Host Integration Server, our mainframe and midrange integration platform, and its Azure Logic Apps connectors, the Azure Integration Services product group is interested in learning how we can assist you in your efforts to modernize your mainframe and midrange workloads to the Azure cloud. The following is the link to the survey: https://aka.ms/hostintegrationpartners.


 




 


 

Web application routing, Open service mesh and AKS


 


 


AKS Web Application Routing with Open Service Mesh


 


The AKS product team announced a public preview of Web Application Routing this year.  One of the benefits of using this add-on is the simplicity of adding an entry point for applications in your cluster with a managed ingress controller.  This add-on works nicely with Open Service Mesh (OSM). In this blog, we investigate how this works, how to set up mTLS from the ingress controller to OSM, and how the two integrate.  While we are using the AKS managed add-on for ingress, we take the open-source OSM approach for explaining this, but it’s important to remember that AKS also has an add-on for OSM.


 


Web Application Routing add-on on Azure Kubernetes Service (AKS) (Preview) – Azure Kubernetes Service | Microsoft Learn


 


The reference link above focuses on the step-by-step process to implement Web Application Routing along with a few other add-ons, such as OSM and the Azure Key Vault secrets provider.  The intention of this blog is not to repeat the same instructions but to dig into a few important aspects of OSM and illustrate connectivity from this managed ingress add-on to OSM.  Enterprises prefer to leverage managed services and add-ons, but at the same time there is a vested interest in understanding the foundational building blocks of the open-source technologies used and how they are glued together to implement certain functionality.  This blog attempts to provide some insight into how these two (OSM and Web Application Routing) work together, without drilling too deep into OSM, since it is well documented at openservicemesh.io.


 


The first step is creating a new cluster:


az aks create -g webapprg -n webappaks -l centralus --enable-addons web_application_routing --generate-ssh-keys


This creates a cluster with the ingress controller installed.  You can check this in the ingressProfile of the cluster.




 


 




 


 


The ingress controller is deployed in a namespace called app-routing-system.  The image is pulled from the Microsoft Container Registry (MCR), not from other public registries.  Since this creates an ingress controller, it also creates a public IP attached to an Azure Load Balancer that is used for the ingress controller.  You might want to change the ‘Inbound security rules’ in the NSG for the agent pool from the default (Internet) to your own IP address for protection.


This managed add-on creates an ingress controller with the ingress class ‘webapprouting.kubernetes.azure.com’, so any Ingress definition should reference this ingress class.
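You can confirm the ingress class registered by the add-on with a standard read-only query (the class name below is the one stated above):

kubectl get ingressclass
# expected to list webapprouting.kubernetes.azure.com among the ingress classes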




 




 


You can see that the Nginx deployment is running with an HPA configuration.  Keep in mind that this is a reverse proxy: it sits in the data path and uses resources such as CPU, memory, and lots of network I/O, so it makes perfect sense to set an HPA.  In other words, this is the place where all traffic enters the cluster and traverses through to the application pods; some refer to this as north-south traffic into the cluster.  It’s important to emphasize that, in my experience, there were several instances where customers used OSS Nginx, didn’t set the right configuration for this deployment, and ran into unpredictable failures while moving into production.  Obviously, this wouldn’t show up in functional testing!  So, use this managed add-on, where AKS manages it for you and maintains it with more appropriate configuration. You don’t need to, and shouldn’t, change anything in the app-routing-system namespace.  As stated above, we are taking an under-the-hood approach to understand the implementation, not to change anything here.
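If you want to see the managed deployment and its HPA without changing them, a read-only check is enough:

kubectl get deploy,hpa -n app-routing-system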


 


[Diagram: relationship between the ingress controller (A), the OSM control plane (B), and meshed application pods with Envoy sidecars]


 


 


In this diagram, the app container is a small circle and the sidecar (Envoy) is a larger circle.  The larger circle is used for the sidecar only to leave more space for the relevant text, so there is no significance to the size of the circle/ellipse!  The top left side of the diagram is a copy of a diagram from the openservicemesh.io site to explain the relationship between different components in OSM.  One thing to note here is that there is a single service certificate for all Kubernetes pods belonging to a particular service, whereas there is a proxy certificate for each pod.  You will understand this much better later in this blog.


 


At this time, we have deployed a cluster with the managed ingress controller (indicated by A in the diagram).  It’s time to deploy the service mesh.  Again, I’m reiterating that we are taking the open-source OSM installation approach to walk you through this illustration, but OSM is also another supported AKS add-on.


 


Let’s hydrate this cluster with OSM.  OSM installation requires the osm CLI binaries installed on your workstation (Windows, Linux, or Mac).  Link below.


Setup OSM | Open Service Mesh


 


Assuming that your context is still pointing to this newly deployed cluster, run the following command.


osm install --mesh-name osm --osm-namespace osm-system --set=osm.enablePermissiveTrafficPolicy=true




 


This completes the installation of OSM (ref: B in the diagram) with permissive traffic policy, which means there are no traffic restrictions between services in the cluster.



Here is a snapshot of namespaces.




 


 


Here is the list of objects in the osm-system namespace. It’s important to ensure that all deployed services are operational.  In some cases, if a cluster is deployed with nodes that have limited CPU/memory, this could cause deployment issues; otherwise, there shouldn’t be any other issues.




 


 


At this time, we’ve successfully deployed ingress controller (ref: A) and service mesh (ref: B). 


 


However, there are no namespaces in the service mesh yet.  In the diagram above, imagine the dotted red rectangle with nothing in that box. 


 


Let’s create new namespaces in the cluster and add them to OSM.

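For sample1, the commands are along these lines (the last command lists mesh membership and the sidecar-injection status discussed below):

kubectl create namespace sample1
osm namespace add sample1
osm namespace list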



 


One thing to notice from the osm namespace list output is the status of sidecar injection.  Sidecar injection uses a Kubernetes mutating admission webhook to inject the ‘envoy’ sidecar into the pod definition before it is written to etcd.  It also injects an init container into the pod definition, which we will review later.
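You can see the webhook itself, and, once a pod is running in a meshed namespace (as we do below), the injected containers. A quick check, assuming the standard open-source OSM naming (the init container is typically called osm-init):

kubectl get mutatingwebhookconfigurations | grep -i osm
kubectl get pod -n sample1 -o jsonpath='{.items[0].spec.containers[*].name}'
# expected to include both the app container and the envoy sidecar
kubectl get pod -n sample1 -o jsonpath='{.items[0].spec.initContainers[*].name}'
# expected to show the init container that rewrites iptables rules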


 


Also create sample2 and add this to OSM. Commands below.


k create ns sample2


osm namespace add sample2


 


 


Deploy the sample1 application (deploy-sample1.yaml) with 3 replicas.  This uses the ‘default’ service account and creates a service with a ClusterIP.  This is a simple hello-world deployment as found in the Azure documentation.  If you want to test, you can clone the code from git@github.com:srinman/webapproutingwithosm.git
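If you don’t want to clone the repo, here is a minimal sketch of what deploy-sample1.yaml could look like; the image reference is the standard AKS hello-world sample and is an assumption, while the service name matches the one referenced later in this blog:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld
  namespace: sample1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aks-helloworld
  template:
    metadata:
      labels:
        app: aks-helloworld
    spec:
      containers:
      - name: aks-helloworld
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-svc
  namespace: sample1
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld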


 




 


 


Let’s inspect the service account for Nginx (our Web Application Routing add-on in the app-routing-system namespace).  


 


As you can see, in the app-routing-system namespace, Nginx is using the nginx service account, and in the sample1 namespace there is only one service account, which is the ‘default’ service account.




 


k get deploy -n app-routing-system -o yaml | grep -i serviceaccountname


This confirms that Nginx is indeed using the nginx service account and not the default one in app-routing-system.


Let’s also inspect the secrets in the osm-system and app-routing-system namespaces. Note that there is not yet a Kubernetes TLS secret for talking to OSM.  




 


At this point, you have an ingress controller installed, OSM installed, sample1 and sample2 added to OSM, and the app deployed in the sample1 namespace, but there is no configuration defined yet for routing traffic from the ingress controller to the application.  In the diagram, you can imagine that there is no connection #2 from the ingress to the workload in the mesh.


 


 


User configuration in Ingress


 


We need to configure app-routing-system, our managed add-on, to listen for inbound traffic (also known as north-south traffic) and to know where to proxy the connection.  This is done with an ‘Ingress’ object in Kubernetes.  Please notice some special annotations in the Ingress definition; these annotations are needed for proxying the connection to an application that is part of OSM.


k apply -f  ingress-sample1.yaml
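For reference, a hedged sketch of what ingress-sample1.yaml contains, following the upstream OSM + ingress-nginx pattern: the annotation keys are the standard ingress-nginx ones and may differ slightly in your version of the add-on, the secret and identity names are the ones used throughout this walkthrough, and with the managed OSM add-on the secret would instead be kube-system/osm-ingress-client-cert (as noted at the end of this blog).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aks-helloworld
  namespace: sample1
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_name "default.sample1.cluster.local";
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "osm-system/nginx-client-cert-for-talking-to-osm"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - host: mysite.srinman.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-svc
            port:
              number: 80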




 


 


Once this is defined, you can view nginx.conf updated with this ingress definition.


k exec nginx-6c6486b7b9-kg9j4 -n app-routing-system -it -- sh


cat nginx.conf




 


We’ve verified the configuration for Web Application Routing to listen and proxy traffic to the aks-helloworld-svc service in namespace sample1.  In the diagram, configuration #A is complete for our traffic to the sample1 namespace. If the configuration were a simple Ingress definition without any special annotations, and if the target workload were not added to an OSM namespace, we would be able to route north-south traffic into our workload by this time, but that’s not the case with our definition.  We need to configure OSM to accept connections from our managed ingress controller.


 


 


User configuration in OSM


 


Let’s review the OSM mesh configuration. Notice that spec.certificate doesn’t have an ingressGateway section.




 


kubectl edit meshconfig osm-mesh-config -n osm-system


Add the ingressGateway section as defined below:




 


  certificate:
    ingressGateway:
      secret:
        name: nginx-client-cert-for-talking-to-osm
        namespace: osm-system
      subjectAltNames:
      - nginx.app-routing-system.cluster.local
      validityDuration: 24h
    certKeyBitSize: 2048
    serviceCertValidityDuration: 24h


 


Now you can notice a new secret in osm-system.  OSM issues and injects this certificate in the osm-system namespace.  Nginx is ready to use this certificate to initiate a connection to OSM.  Before we go further into this blog, let’s understand a few important concepts in OSM. 


The Open Service Mesh data plane uses the ‘Envoy’ proxy (https://www.envoyproxy.io/).  This Envoy proxy is programmed (in other words, configured) by the OSM control plane.  After adding the sample1 and sample2 namespaces and deploying sample1, you might have noticed two containers running in each pod: one is our hello-world app, and the other is injected by the OSM control plane with the mutating webhook. It also injects an init container, which changes iptables rules to redirect traffic.


 


Now that Envoy is injected, it needs to be equipped with certificates for communicating with its mothership (the OSM control plane) and with other meshed pods.  To address this, OSM injects two certificates. One is called the ‘proxy certificate’, used by Envoy to initiate the connection to the OSM control plane (refer to B in the diagram), and the other is called the ‘service certificate’, used for pod-to-pod traffic (for meshed pods, in other words pods in namespaces that are added to OSM).  The service certificate uses the following for the CN:  


 


<ServiceAccount>.<Namespace>.<trustdomain>


 


This service certificate is shared by pods that are part of the same service; hence the name service certificate.  This certificate is used by Envoy when initiating pod-to-pod traffic with mTLS.




 


As an astute reader, you might have noticed some specifics in our Ingress annotations.  The proxy_ssl_name annotation defines who the target is; here our target service identity is default.sample1.cluster.local.


default is the ‘default’ service account and sample1 is the namespace.  Remember, in OSM, it’s all based on identities.  


Get the pod name, replace '-change-here' with your pod name, and run the following command to check this.


 


osm proxy get config_dump aks-helloworld-change-here -n sample1 | jq -r '.configs[] | select(."@type"=="type.googleapis.com/envoy.admin.v3.SecretsConfigDump") | .dynamic_active_secrets[] | select(.name == "service-cert:sample1/default").secret.tls_certificate.certificate_chain.inline_bytes' | base64 -d | openssl x509 -noout -text


 


You can see CN = default.sample1.cluster.local in the cert.   


 


We are also informing Nginx to use the secret from the osm-system namespace called nginx-client-cert-for-talking-to-osm.  Nginx is configured to proxy-connect to default.sample1.cluster.local with the TLS secret nginx-client-cert-for-talking-to-osm.  If you inspect this TLS secret (use the instructions below if needed), you can see "CN = nginx.app-routing-system.cluster.local".


Extract the cert info: use k get secret, take the tls.crt data and base64-decode it, then run openssl x509 -in file_that_contains_base64_decoded_tls.crt_data -noout -text
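As a single hedged one-liner, assuming the secret name created above:

kubectl get secret nginx-client-cert-for-talking-to-osm -n osm-system -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -text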


 


At this time, we have wired up everything from client to Ingress controller listening for connections, and Nginx is set to talk to OSM. 


However, the Envoy proxy (the OSM data plane) is still not configured to accept TLS connections from Nginx.


Any curl to mysite.srinman.com will result in an error response.


HTTP/1.1 502 Bad Gateway


 


Please understand that we can route traffic all the way from the client to the Envoy proxy running alongside our application container, but since traffic is forced to enter Envoy by our init container setup, Envoy checks and blocks this traffic.  With our configuration osm.enablePermissiveTrafficPolicy=true, Envoy is programmed by OSM to allow traffic between namespaces in the mesh but not to let outside traffic enter.  In other words, all east-west traffic is allowed within the mesh, and these communications automatically establish mTLS between services.  Let’s configure OSM to accept this traffic.
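You can confirm the permissive-mode setting directly on the mesh config (field name per the open-source OSM MeshConfig API):

kubectl get meshconfig osm-mesh-config -n osm-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'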


 


This configuration is addressed by an IngressBackend.  The following definition tells OSM to configure the Envoy proxies used for the backend service ‘aks-helloworld-svc’ to accept TLS connections from the sources defined.
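A sketch of such an IngressBackend, using the service and principal names from this walkthrough (the resource name itself is arbitrary):

apiVersion: policy.openservicemesh.io/v1alpha1
kind: IngressBackend
metadata:
  name: aks-helloworld-ingress-backend
  namespace: sample1
spec:
  backends:
  - name: aks-helloworld-svc
    port:
      number: 80
      protocol: https
  sources:
  - kind: AuthenticatedPrincipal
    name: nginx.app-routing-system.cluster.local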


 


More information about IngressBackend: 


https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/#https-ingress-mtls-and-tls


There are instructions in the link above for adding the nginx namespace to OSM. More specifically, the following command is not necessary, since we’ve already configured Nginx through the Ingress definition to use the proxy SSL name and proxy SSL TLS cert for connecting to the application pod’s Envoy, i.e., to OSM (#2 in the diagram; the picture shows the connection from only one Nginx pod, but you can assume that this could be from any Nginx pod).  OSM doesn’t need to monitor this namespace for our walkthrough.  However, at the end of this blog, there is additional information on how OSM is configured and how the IngressBackend should be defined with the managed OSM and Web Application Routing add-ons.


 


osm namespace add "$nginx_ingress_namespace" --mesh-name "$osm_mesh_name" --disable-sidecar-injection


 




 


Earlier, we verified that Nginx uses a TLS cert with "CN = nginx.app-routing-system.cluster.local".  The IngressBackend requires that the source must be an ‘AuthenticatedPrincipal’ with the name nginx.app-routing-system.cluster.local.  All others are rejected. 


 


Once this is defined, you should be able to see a successful connection to the app!  Basically, the client connection is terminated at the ingress controller (Nginx) and proxied/resent (#2 in the diagram) from Nginx to the application pods in the namespace (sample1). The Envoy proxy intercepts this connection and sends it to the actual application, which is still listening on plain port 80, but our Web Application Routing along with Open Service Mesh took care of accomplishing encryption in transit between the ingress controller and the application pod, essentially removing the need for application teams to manage and own this very critical security functionality.  It’s important to remember that we were able to accomplish this mTLS with very few steps, all managed by AKS (provided you use the add-ons for OSM and Web Application Routing).  Once the traffic lands in the meshed data plane, Open Service Mesh provides lots of flexibility and configuration options to manage this (east-west) traffic within the cluster across OSM-ed namespaces. 


 




 


Let’s try to break this again to understand more!


 


In our IngressBackend, let’s make a small change to the name of the authenticated principal.  Change it to something other than nginx. Sample below.


  - kind: AuthenticatedPrincipal
    name: nginxdummy.app-routing-system.cluster.local


 


Apply this configuration. Attempt to connect to our service.


 


*   Trying 20.241.185.56:80...
* TCP_NODELAY set
* Connected to 20.241.185.56 (20.241.185.56) port 80 (#0)
> GET / HTTP/1.1
> Host: mysite.srinman.com
> User-Agent: curl/7.68.0
> Accept: */*
* Mark bundle as not supporting multiuse
< HTTP/1.1 403 Forbidden
< Date: Fri, 25 Nov 2022 13:09:16 GMT
< Content-Type: text/plain
< Content-Length: 19
< Connection: keep-alive
* Connection #0 to host 20.241.185.56 left intact
RBAC: access denied


 


This means that we’ve defined OSM to accept connections only from the identity nginxdummy in the app-routing-system namespace, which is not the identity Nginx actually presents in our example.  Envoy basically stops the connection inside the application pod before it reaches the application container itself.


 


Let’s try to make it work, not by reverting the change but by changing a different config in the IngressBackend:


skipClientCertValidation: true
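For reference, in the open-source OSM IngressBackend API this flag sits under the backend’s tls block; a hedged sketch, assuming the same backend as above:

spec:
  backends:
  - name: aks-helloworld-svc
    port:
      number: 80
      protocol: https
    tls:
      skipClientCertValidation: true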


 


It should work fine now, since we are configuring OSM to ignore client certificate validation/verification.  From a security viewpoint, if you think about this, you could send traffic from a different app or ingress controller to this application pod, leaving it basically unprotected. Let’s change this back to false and also fix the nginx name in the AuthenticatedPrincipal.  Apply the config and check whether you can access the service.


 


Thus far, we’ve deployed an application in one namespace and configured the ingress controller to send traffic into our mesh. What would the process be for another app in a different namespace using our managed ingress controller?


Let’s create another workload to understand how to define the Ingress and the importance of the service account.  Sample code is in deploy-sample2.yaml


In this deployment, you can see that we are using serviceAccountName: sample2-sa, not the default service account.  (Namespace and service account creation are not shown; you would create them first, for example as shown below.)
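For completeness, the prerequisite objects can be created with something like:

kubectl create namespace sample2        # created and added to OSM earlier
kubectl create serviceaccount sample2-sa -n sample2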




 


 


You can see how the Ingress definition is slightly different from the one above (for sample1): proxy_ssl_name is set to sample2-sa in the sample2 namespace. However, it uses the same TLS secret that sample1 used, which is the TLS cert with "CN = nginx.app-routing-system.cluster.local".
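A hedged sketch of the changed annotations (the exact annotation keys follow the ingress-nginx convention; the identity string follows the <ServiceAccount>.<Namespace>.<trustdomain> pattern described earlier):

    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_name "sample2-sa.sample2.cluster.local";
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "osm-system/nginx-client-cert-for-talking-to-osm"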




 


 


The IngressBackend definition looks like the one below. You can see that it’s the same ‘sources’ definition with different backends.




 


 


We have established TLS between Nginx and the application pod (#2 in the diagram). However, from the client to the ingress it is still plain HTTP (#1 in the diagram).  Enabling TLS for this is straightforward, and there are a few ways to do it, including an Azure-managed way, but we will explore a build-your-own approach.  Let’s create a certificate with CN=mysite.srinman.com.


 


openssl req -new -x509 -nodes -out aks-ingress-tls.crt -keyout aks-ingress-tls.key -subj "/CN=mysite.srinman.com" -addext "subjectAltName=DNS:mysite.srinman.com"


 


Use the command below to upload this cert as a Kubernetes secret in the sample1 namespace.


k create secret tls mysite-tls --key aks-ingress-tls.key --cert aks-ingress-tls.crt -n sample1


Sample code is in ingress-sample1-withtls.yaml
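A minimal sketch of the TLS addition to the Ingress spec (hypothetical layout; the rules section stays the same as in ingress-sample1.yaml):

spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  tls:
  - hosts:
    - mysite.srinman.com
    secretName: mysite-tls
  # rules: unchanged from ingress-sample1.yaml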




 


 


This should enforce HTTPS for all calls from the client.




 


 


 


Traffic flow


 


 


[Diagram: end-to-end traffic flow from the client through the managed ingress controller to the Envoy sidecar and the app container]


 


 


1. Traffic enters the ingress-managed load balancer.

2. TLS traffic is terminated at the ingress controller pods.

3. The ingress controller pods initiate a proxy connection to the backend service (specifically to one of the pods that is part of that service, and even more specifically to the pod’s Envoy proxy container; also remember that the injected init container takes care of setting up iptables rules to route requests to Envoy).

4. App pod: the Envoy container terminates the TLS traffic and initiates a connection to localhost on the app port (remember the app container shares the same pod, thus the same network namespace).

5. App pod: the app container listening on the port responds to the request.


 


As traffic enters the cluster, as seen above and in the diagram, it can be inspected in at least three different places: the Nginx logs, the Envoy logs, and the app logs. 


 


Check traffic in nginx logs 


Check traffic in envoy logs


Check traffic in app logs


 


Nginx log (you might want to check both pods if you are not able to locate the call in one; there should be two):
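For example, reusing the Nginx pod name seen earlier in this blog (your pod name will differ):

k logs nginx-6c6486b7b9-kg9j4 -n app-routing-system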


nn.nn.nnn.nnn – – [20/Nov/2022:17:29:11 +0000] “GET / HTTP/2.0” 502 150 “-” “curl/7.68.0” 33 0.006 [sample1-aks-helloworld-svc-80] [] 10.244.1.13:80, 10.244.1.13:80, 10.244.1.13:80 0, 0, 0 0.000, 0.000, 0.004 502, 502, 502 3f9a310a3ebb314342b590dde11


 


Envoy log


Just to keep it simple, reduce the replicas to 1 to probe the Envoy sidecar.  Replace the pod name with your pod in the command below.


k logs aks-helloworld-65ddbc869b-t8hwq -c envoy -n sample1


Copy the output to a JSON formatter to read it more easily.


You can see the traffic flowing through the proxy into the application pod.




 


 


App log


Let’s look at the app container itself.


k logs aks-helloworld-65ddbc869b-bt9w8 -c aks-helloworld -n sample1


[pid: 13|app: 0|req: 1/1] 127.0.0.1 () {48 vars in 616 bytes} [Sun Nov 20 16:53:54 2022] GET / => generated 629 bytes in 12 msecs (HTTP/1.1 200) 2 headers in 80 bytes (1 switches on core 0)


127.0.0.1 – – [20/Nov/2022:16:53:54 +0000] “GET / HTTP/1.1” 200 629 “-” “curl/7.68.0” “nn.nn.nnn.nnn”


 


 


You may notice that the request appears to come from localhost. This is because the Envoy container sends the traffic from the same host (actually the same pod; remember that a pod behaves like a host in the Kubernetes world: “A Pod models an application-specific ‘logical host’” – reference link).   


 


Lastly, when you opt in for the OSM add-on along with the Web Application Routing add-on, certain things are already taken care of; for example, the TLS secret osm-ingress-client-cert is generated and written to the kube-system namespace.  It also automatically adds the app-routing-system namespace to OSM with sidecar injection disabled. This means that in the IngressBackend definition a source of kind: Service can be added to verify source IPs in addition to the identity (AuthenticatedPrincipal) when allowing traffic.  This of course adds more protection. Check the file ingressbackend-for-osm-and-webapprouting.yaml in the repo.


 


I hope that these manual steps helped to provide a bit more insight into the role of Web Application Routing and how it works nicely with Open Service Mesh.  We also reviewed a few of the foundational components involved, such as Nginx, IngressBackend, Envoy, and OSM.


 


Please check srinman/webapproutingwithosm (github.com) for sample code.