Best practices to harden your AKS environment

This article is contributed. See the original author and article here.



AKS occupies a growing share of the Azure landscape, and there are a few best practices you can follow to harden the environment and make it as secure as possible. As a preamble, remember that all containers share the host kernel through system calls, so the level of isolation in the container world is not as strong as with virtual machines, let alone physical hosts. Mistakes can quickly lead to security issues.


1. Hardening the application itself

This might sound obvious, but one of the best ways to defend against malicious attacks is to use bulletproof code. You will never be 100% bulletproof, but a few steps can be taken to maximize robustness:


  • Try to use up-to-date libraries in your code (NuGet, npm, etc.), because as you know, most of your code is actually not yours.

  • Make sure that every input is validated and that memory allocations are well under control if you are not using frameworks with managed memory. Many vulnerabilities are memory-related (buffer overflow, use-after-free, etc.).

  • Rely on well-known security standards and do not invent your own stuff.

  • Use SAST tools to perform static code analysis using specialized software such as Snyk, Fortify, etc.

  • Try to integrate security-related tests into your integration tests.


2. Hardening container images

I’ve seen countless environments where the Docker image itself is not hardened properly. I wrote a full blog post about this, so feel free to read it. I took an ASP.NET code base as an example, but this is applicable to other languages. In a nutshell:

  • Do not expose ports below 1024, because this requires extra capabilities

  • Specify another user than root

  • Change ownership of the container’s file system
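The three points above can be sketched in a Dockerfile. This is a minimal illustration, not a prescription: the base image tag, port, user name, and application name are assumptions for the example.

```dockerfile
# Build stage omitted for brevity; hardening is shown on the runtime stage.
FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine

# Listen on a non-privileged port (>1024) so no extra capability is needed
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080

# Create a dedicated non-root user and group
RUN addgroup -S app && adduser -S app -G app

WORKDIR /app
# Copy the published output and give ownership to the non-root user
COPY --chown=app:app ./publish .

# Run as the non-root user from here on
USER app
ENTRYPOINT ["dotnet", "MyApp.dll"]
```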


3. Scanning container images

Most of the time, we use base images to build our own images, and most of the time these base images have vulnerabilities. Use specialized software such as Snyk, Falco, Microsoft Defender for Containers, etc. to identify them. Once identified, you should:

  • Try to stick to the most up-to-date images as they often include security patches

  • Try to use a different base image. Usually light images such as Alpine-based ones are a good start because they embed less tools and libraries, so are less likely to have vulnerabilities.

  • Make a risk assessment against the remaining vulnerabilities and see if that’s really applicable to your use case. A vulnerability does not automatically mean that you are at risk. You might have some other mitigations in place that would prevent an exploit.

To push the shift-left principle to the maximum, you can use Snyk’s docker scan operation right from the developer’s machine to identify vulnerabilities early. Although Snyk is a paid product, you can scan a few images for free.
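As a sketch, scanning a locally built image from a developer machine could look like this (the image name and registry are illustrative assumptions; note that newer Docker releases have replaced `docker scan` with `docker scout`):

```shell
# Scan a locally built image with the Snyk-powered docker scan
docker scan myregistry.azurecr.io/myapp:1.0.0

# Or use the Snyk CLI directly, failing the build on high-severity issues
snyk container test myregistry.azurecr.io/myapp:1.0.0 --severity-threshold=high
```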

4. Hardening K8s deployments

In the same post as before, I also explain how to harden the K8s deployment itself. In a nutshell:


  • Make sure to drop all capabilities and only add the needed ones if any

  • Do not use privileged containers nor allow privilege escalation (make values explicit)

  • Try to stick to a read only file system whenever possible

  • Specify user/group other than root
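The bullets above can be sketched as a pod-level security context in a deployment manifest. The names, image, and user/group IDs are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myregistry.azurecr.io/myapp:1.0.0
          securityContext:
            privileged: false                # explicit, even though it is the default
            allowPrivilegeEscalation: false  # make the value explicit
            readOnlyRootFilesystem: true     # stick to a read-only file system
            runAsNonRoot: true
            runAsUser: 10001                 # any non-root UID
            runAsGroup: 10001
            capabilities:
              drop: ["ALL"]                  # add back only what is strictly needed
```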


5. Request – Limits declaration

Although this might not be seen as a security issue per se, not specifying memory requests and limits can lead to the arbitrary eviction of other pods. Malicious users can take advantage of this to spread chaos within your cluster, so you must always declare memory requests and limits. You can optionally declare CPU requests/limits, but this is not as important as memory.
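A minimal sketch of such a declaration follows; the values are illustrative assumptions that depend entirely on your workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myregistry.azurecr.io/myapp:1.0.0
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"       # CPU requests/limits are optional but a good habit
        limits:
          memory: "256Mi"   # prevents this pod from evicting its neighbors
```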



6. Namespace-level logical isolation

K8s is a world where logical isolation takes precedence over physical isolation. So, whatever you do, you should make sure to adhere to the least privilege principle through proper RBAC configuration and proper network policies to control network traffic within the cluster, and potentially going outside (egress). Remember that by default, K8s is totally open, so every pod can talk to any other pod, whatever namespace it is located in. If you can’t live with internal logical isolation only, you can also segregate workloads into different node pools and leverage Azure networking features such as NSGs to control network traffic at another level. I wrote an entire blog post on this: AKS, the elephant in the hub & spoke room, deep dive



  6.1 RBAC

Role-based access control can be configured for both humans and systems, thanks to Azure AD and K8s RBAC. There are multiple flavors available for AKS. Whichever one you use, you should make sure to:

  • Define groups and grant them permissions using K8s roles

  • Define service accounts and let your applications leverage them

  • Prefer namespace-scoped permissions rather than cluster-scope ones
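As an illustration of the last two points, here is a namespace-scoped role granting read-only access to pods, bound to an Azure AD group. The namespace and the group object ID are placeholder assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: myapp            # namespace-scoped, not cluster-scoped
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: myapp
subjects:
  - kind: Group
    name: "00000000-0000-0000-0000-000000000000"  # Azure AD group object ID (placeholder)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```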

  6.2 Namespace-scoped & global network policies

Traffic can be controlled using plain K8s network policies or tools such as Calico. Network policies can be used to control pod-level ingress/egress traffic.
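A common starting point is a default-deny policy per namespace, to which you then add explicit allowances. A sketch (the namespace name is an illustrative assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: myapp
spec:
  podSelector: {}    # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress         # nothing in, nothing out, until explicitly allowed
```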


7. Layer 7 protection

Because defense-in-depth relies on multiple ways to validate whether an ongoing operation is legal or not, you should also use layer-7 protection, such as a Service Mesh or Dapr, which has some overlapping features with service meshes. The main difference between Dapr and a true Service Mesh is that applications using Dapr must be Dapr-aware, while they don’t need to know anything about a service mesh. The purpose of layer-7 protection is to enable mTLS and fine-grained authorizations, in order to specify who can talk to whom (on top of network policies). Most solutions today allow for fine-grained authorizations targeting operation-level scopes when dealing with APIs. Dapr and Service Meshes come with many more juicy features that make you understand what a true Cloud native environment is.


8. Azure Policy

Azure Policy is the cornerstone of tangible governance in Azure in general, and AKS is no exception. With Azure Policy, you’ll have a continuous assessment of your cluster’s configuration as well as a way to control what can be deployed to the cluster. Azure Policy leverages Gatekeeper to deny non-compliant deployments. You can start smoothly in non-production by setting everything to Audit mode and switch to Deny in production. Azure Policy also allows you to allowlist known registries to make sure images cannot be pulled from just anywhere.
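Enabling the Azure Policy (Gatekeeper-based) add-on on an existing cluster can be sketched with the Azure CLI; the cluster and resource group names are placeholder assumptions:

```shell
# Enable the Azure Policy add-on on an existing AKS cluster
az aks enable-addons \
  --addons azure-policy \
  --name myAksCluster \
  --resource-group myResourceGroup
```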


9. Cloud Defender for Containers

Microsoft recently merged Defender for Registries and Defender for Kubernetes into Defender for Containers. There is a little bit of overlap with Azure Policy, but Defender also deploys DaemonSets that check for real-time threats. All incidents are categorized using the popular MITRE ATT&CK framework. One of the selling points is that Defender can handle any cluster, whether hosted on Azure or not, so it is a multi-cloud solution. On top of assessing configuration and threats, Defender also ships with a built-in image scanning process leveraging Qualys behind the scenes. Images are scanned upon push operations as well as continuously, to detect newer vulnerabilities that appeared after the push. Of course, there are other third-party tools available such as Prisma Cloud, which you might be tempted to use, especially if you already run the Palo Alto NVAs.


10. Private API server

This one is easy. Make sure to isolate the API server from the internet. You can easily do that using Azure Private Link. If you can’t do it for some reason, at least try to restrict access to authorized IP address ranges.
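Both options can be sketched with the Azure CLI; the cluster name, resource group, and IP range are placeholder assumptions:

```shell
# Option 1: create a private cluster (API server reachable through Private Link only)
az aks create \
  --name myAksCluster \
  --resource-group myResourceGroup \
  --enable-private-cluster

# Option 2: keep a public endpoint but restrict it to known IP ranges
az aks update \
  --name myAksCluster \
  --resource-group myResourceGroup \
  --api-server-authorized-ip-ranges 203.0.113.0/24
```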


11. Cluster boundaries

Of course, an AKS cluster is by design inside an Azure Virtual Network. The cluster can expose some workloads outside through the use of an ingress controller, and anything can potentially go outside of the cluster, through an egress controller and/or an appliance controlling the network traffic.


  11.1 Ingress

Ingress traffic can come from internet-facing callers or internal callers. A best practice is to isolate the AKS ingress controller (NGINX, Traefik, AGIC, etc.) from the internet by linking it to an internal load balancer. Traffic that must be exposed to the internet should go through an Application Gateway, Front Door (using Private Link Service), or any other well-known non-Azure solution such as Barracuda, F5, etc. You should also distinguish pure UI traffic from API traffic. API traffic should be filtered using an API gateway such as Azure APIM, Kong, Ambassador, etc. For “basic” scenarios, you might also offload JWT token validation to a service mesh, but it will not have comparable features. You should definitely consider real API gateways for internet-facing APIs.
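Linking the ingress controller to an internal load balancer boils down to an annotation on its service. A sketch for AKS follows; the namespace, name, and selector are illustrative assumptions based on a typical NGINX ingress deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Tells AKS to provision an internal (private) Azure load balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```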


  11.2 Egress

Pod-level egress traffic can be controlled by network policies or Calico, but also by most Service Meshes. Istio even has a dedicated egress gateway, which can act as a proxy. On top of handling egress from within the cluster itself, it is a best practice to have a next-gen firewall waiting outside, such as Azure Firewall or third-party Network Virtual Appliances (NVAs).


12. Keep consistency across clusters and across data centers

You start with one cluster, then two, then a hundred. To keep some sort of consistency across cluster configurations, you can leverage Azure Policy. If your clusters are running on-premises or in another cloud, you can also use Azure Arc. Microsoft recently launched Azure Kubernetes Fleet Manager, which I haven’t tried yet but is surely something to keep an eye on.



The above tips are by no means exhaustive, but if you start with them, you should be in a better position when it comes to handling container security. There are a myriad of tools available on the market to better handle container security. Azure has some built-in capabilities, and it is up to you to decide whether you prefer best of breed or best of suite. Note that more and more Azure-native tools span beyond Azure itself, so your single pane of glass could be Azure.

CISA Has Added One Known Exploited Vulnerability to Catalog


CISA has added one new vulnerability to its Known Exploited Vulnerabilities Catalog, based on evidence of active exploitation. These types of vulnerabilities are a frequent attack vector for malicious cyber actors and pose significant risk to the federal enterprise. Note: To view the newly added vulnerabilities in the catalog, click on the arrow in the “Date Added to Catalog” column, which will sort by descending dates.

Binding Operational Directive (BOD) 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities established the Known Exploited Vulnerabilities Catalog as a living list of known CVEs that carry significant risk to the federal enterprise. BOD 22-01 requires FCEB agencies to remediate identified vulnerabilities by the due date to protect FCEB networks against active threats. See the BOD 22-01 Fact Sheet for more information.

Although BOD 22-01 only applies to FCEB agencies, CISA strongly urges all organizations to reduce their exposure to cyberattacks by prioritizing timely remediation of Catalog vulnerabilities as part of their vulnerability management practice. CISA will continue to add vulnerabilities to the Catalog that meet the specified criteria.

The Dynamics 365 Business Central Universal Code initiative is live


Last April I shared that Microsoft is working on an initiative that encourages partners to invest in a cloud-first strategy. Today, we are excited to announce that the Business Central Universal Code initiative went into effect with the launch of Dynamics 365 Business Central 2022 release wave 2.

The Universal Code initiative is designed to encourage the use of a modern architecture in customer implementations of Business Central. It gives all on-premises customers the choice to select a cloud (SaaS) implementation when desired while also finding the right apps on the Microsoft AppSource marketplace. The initiative reduces the friction around potentially complex, lengthy, and expensive upgrades and frees up partner capacity over time. Partners can use the additional capacity for activities beyond (re)implementing customizations, providing more value to their customers.

Microsoft partners share the impact of Universal Code

On AppSource, you can easily discover the success of our modern Universal Code initiative. As of October 2022, more than 2,800 Business Central apps are available to respond to the unique requirements of customers. Our partner channel is sharing the positive impact a modern architecture has on their business:

“It was scary to change our industry solution from a customized code to Universal Code as we didn’t want to compromise its rich functionality, but our team succeeded faster than expected because of their great expertise and out-of-the-box thinking. The impact has been enormous!  Today, we are able to serve 14 localizations through fully automated means and we are able to generate weekly releases. In the past this took us a month of manual work. Universal Code in combination with our tooling is providing us the agility to stay in front!”

Richard Postborg, CTO, TRIMIT Group A/S

“For us here at LS Retail, Universal Code is all about sustainability for the customer. With Universal Code and the move to the extensibility framework, customers can upgrade their environments with a fraction of the effort it required before. This is good for everyone involved. The customer can stay current with a minimal effort. The partners can add value in other areas, such as providing business insights. This is a win-win for everyone involved.”

Dadi Karason, CTO, LS Retail

The future of Business Central on-premises is Universal Code

The modern architectural choice of Universal Code is key to the success of our customers, partners, and Microsoft. We encourage customers to have the Universal Code conversation with their implementing partner.

As of October 2022, new Dynamics 365 Business Central customers deploying on-premises and customers transitioning to Dynamics 365 Business Central on-premises deployments will have to deploy a “cloud-optimized extensions” architecture (Universal Code) or license payable modules that unlock classic customization behavior.

Learn more

Find supporting materials with details about the Universal Code initiative at

Partners can also learn more about next steps by watching the Universal Code session at the Dynamics 365 Business Central virtual launch event. Register to watch on-demand at

The post The Dynamics 365 Business Central Universal Code initiative is live appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Catalog Adoption: Discover more with Data estate insights in Microsoft Purview


Adoption and usage of data governance tools are critical, and a lack of user engagement can be a serious blocker for the whole organization in its data governance journey. Fortunately, when it comes to solution adoption, Microsoft Purview comes with the built-in ability to analyze it.


This functionality is very useful to answer the following questions:

  • Are users actively using Microsoft Purview?

  • How is usage changing over time?

  • What types of activity are performed, e.g., data curation or data search?

  • Which assets are the most viewed ones in an organization?

  • What are we missing in the catalog?


How to track the adoption?

Adoption tracking is part of Data estate insights functionality in Microsoft Purview. To be able to use it, the user needs to have appropriate permissions assigned. There is a dedicated Insights Reader role that can be assigned to any Data Map user, by the Data Curator of the root collection. More information about required permissions can be found in Permissions for Data Estate Insights in Microsoft Purview – Microsoft Purview | Microsoft Docs.


Let’s start with some basics

Going into the Insights area and choosing Catalog adoption, we can find information about monthly active users.


[Figure: Catalog adoption - active users]


In our case, we can see that we currently have 254 distinct users and that the number dropped 7% in the last month. Microsoft Purview counts an active user as one who took at least one intentional action across all feature categories in the Data Catalog within a 28-day period. It’s also possible to determine how active our users are in total, as Microsoft Purview aggregates the number of searches performed by users.


[Figure: Catalog adoption - total search]



Data estate insights functionality in Microsoft Purview shows information based on user permissions, which means the data seen in Insights is limited to the collections the user has permission to access. In this case, the user viewing the insights has access to all collections, so the information visible in catalog adoption reflects the overall number of users in the organization.


Even more information about catalog users

More adoption data means more insights into how the catalog is used.


[Figure: Catalog adoption - active users by feature category]


This option shows the breakdown of active users by feature category. The feature categories are:


  • All  (which covers all kinds of users)

  • Search and browse (which indicates users who are reading data from the catalog by searching them or directly browsing the catalog assets)

  • Asset curation (activities related to data curation like assigning data owner, description, applying classification, etc.)

Information on the chart can be shown in Daily/Weekly/Monthly time range.


Increase catalog adoption by giving users more precise information…

Among the information you get as part of adoption reports is which assets are the most viewed in the organization. If you are wondering why this is important, have a look at the following summary:


[Figure: Catalog adoption - asset curation]


The most viewed asset (231 views), “TicketReportTable”, is fully curated (more about curation in the 2nd part of the article), which means the asset has an assigned owner, a description, and at least one classification. On the other hand, the 2nd most viewed asset (136 views in the last 30 days), “YearlySalesBySegment”, is not curated at all. This can lead to situations where users access the catalog and get poor-quality information. As a result, users may step back from using the data catalog and adoption will drop. Based on such insights, you can work intensively on asset curation and provide users only with high-quality information about the data in your organization.


Adoption insights available in Microsoft Purview also give the ability to identify the most searched keywords.


[Figure: Catalog adoption - most searched keywords]


It is interesting that one of the most searched assets is only partially curated. Based on this information it is possible to help data stewards and owners set priorities and identify the most important areas in an organization. On the other hand, it’s also possible to get information about keywords that were searched by users but yielded no results.

[Figure: Catalog adoption - top searches with no results]


In this example, it looks like users are looking for information related to “sales” and couldn’t find it. This is an important tip for a data governance team and shows the next possible areas to investigate.



You should now have a better understanding of how to track the progress of Microsoft Purview adoption. You should also have learned how to improve it by converting the provided insights into actions, like a better data curation process or adding new assets to your catalog that users are searching for.