MVP’s Favorite Content: Windows, Generative AI, M365, Azure



In this blog series dedicated to Microsoft’s technical articles, we’ll highlight our MVPs’ favorite articles along with their personal insights.


 


Maison da Silva, Windows and Devices MVP, Brazil




create partition primary | Microsoft Learn


Resize-Partition (Storage) | Microsoft Learn


“Helps technical users who had problems installing the 2024-01-09 update KB5034441, which fails with ‘0x80070643 – ERROR_INSTALL_FAILURE’. Workaround: It might be necessary to increase the size of the WinRE partition in order to avoid this issue and complete the installation.”


*Relevant Blog: Redimensionamento Manual da Partição para Instalação da Atualização WinRE: Um Guia Passo a Passo (Manual Partition Resizing to Install the WinRE Update: A Step-by-Step Guide) – Maison da Silva
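For reference, here is a condensed sketch of the documented workaround, run from an elevated command prompt. It is only a sketch: the disk and partition numbers are illustrative and must be adjusted to match the layout reported by reagentc /info and diskpart on your machine.

reagentc /info
reagentc /disable

diskpart
rem -- the following commands are entered at the DISKPART> prompt --
list disk
sel disk 0
sel partition 3
rem shrink the OS partition to free 250 MB
shrink desired=250 minimum=250
sel partition 4
rem delete and recreate the WinRE partition at the larger size
delete partition override
create partition primary
format quick fs=ntfs label="Windows RE tools"
rem GPT disks only (for MBR disks use: set id=27)
set id="de94bba4-06d1-4d40-a16a-bfd50179d6ac"
gpt attributes=0x8000000000000001
exit

reagentc /enable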


 


Tso Kar Ho, Microsoft Azure MVP, Hong Kong SAR




18 Lessons, Get Started Building with Generative AI


“The content in this repository is highly valuable for beginners, as it not only introduces the concepts of generative AI but also provides hands-on examples and code snippets to help users understand and implement the techniques. The lessons cover a wide range of topics, including language models, semantic search, transformers, and more, giving learners a holistic understanding of generative AI.


Moreover, the repository is actively maintained and has received significant community engagement, with over 23.6k stars and 15k forks. This level of community activity demonstrates the value and popularity of the content. Additionally, the presence of 50 contributors indicates a collaborative environment where users can benefit from the expertise and insights of others in the field.


 


I have personally found this content to be highly informative and well-structured, making it an ideal resource for individuals looking to explore generative AI. I believe that this repository will greatly benefit those who are new to the field and provide them with a solid foundation to build upon.


 


As an MVP, I have written a blog post discussing the significance of generative AI and its potential applications in healthcare. In the blog post, I have highlighted the “Generative AI for Beginners” repository as an excellent starting point for individuals interested in learning more about this field. I have shared practical examples and insights from the repository to showcase the practicality and versatility of generative AI in the healthcare domain.


 


Additionally, I have organized a virtual event for the local developer community, where I conducted a workshop using the lessons from the “Generative AI for Beginners” repository. The event aimed to introduce beginners to the world of generative AI and provide them with hands-on experience in building their own models. The event received positive feedback, with participants expressing their appreciation for the comprehensive and beginner-friendly content provided by Microsoft.


 


Tetsuro Takao, Developer Technologies MVP, Japan




Microsoft 365 guidance for security & compliance – Service Descriptions | Microsoft Learn


“This content compiles all of the deployment methods for implementing Microsoft 365 security and compliance, and it can be used as a reference when writing work documents or blog articles. It is also very useful for planning and proposals, as it serves as a guideline for design and includes licensing information, which helps when planning the order of implementation and deployment.”


(In Japanese: Microsoft 365のセキュリティ、コンプライアンスの実装についてすべての設置方法がまとまっており、仕事やブログ記事を書く際にリファレンス的に参照することが可能なコンテンツ。企画、提案の際にも設計の指針になり、ライセンス情報も同コンテンツに記載があるため導入構築順などを計画する際にも役立つ非常に有用なコンテンツ。)


*Relevant Activity: .NETラボ – connpass


 


Sou Ishizaki, Microsoft Azure MVP, Japan




Connecting to Azure Services on the Microsoft Global Network – Microsoft Community Hub


“This is a valuable article that delves deeper into the Microsoft Global Network and provides an answer to the question, ‘Does this connection architecture connect to the internet or not?’”


(In Japanese: Microsoft Global Network について一歩踏み込んで「この接続アーキテクチャはインターネットに出るのか否か」に答えを与えてくれる、ありがたい記事です。)

Power Platform Partners – We want to hear from you!


Power Platform and low code are a fast-growing practice area for many partners. We want to hear from you about how you are building and expanding this practice and the types of services you offer, so that we can share broad trends and insights back with the partner community.


 


We are currently conducting a survey to identify and define the value partners realize through their relationship with Microsoft and Power Platform. This research is being done by IDC’s Channels and Alliances Research Group on behalf of Microsoft. Please use this link to take the survey: https://aka.ms/PowerPartnerProfitabilitySurvey. It takes approximately 10-15 minutes, and we recommend that it be completed by your Power Platform practice lead. The questions cover Microsoft and Power Platform revenue growth; profit across resale, services, and/or software; investments in your Power Platform practice; and Microsoft activities and programs that drive success.


 


We’re interested in learning about your practice development and the profitability of your Power Platform business. The information you provide will be aggregated and used in an IDC eBook, and will help Microsoft improve its partner strategy and programs related to Power Platform.


 


The deadline to submit is February 29. Thank you!

Azure API Center: The First Look



Let’s say you work for a company that has a lot of APIs. You might have a few questions:


 



  • How do you manage the lifecycle of those APIs?

  • What governance would you apply to these APIs?

  • What is the list of environments needed to manage these APIs?

  • What are the deployment strategies for those APIs?

  • How would you integrate those APIs with other services?


 


As your company’s number of APIs increases, so does the complexity of managing them. Azure API Center (APIC) is a central repository for API lifecycle management, and it offers more efficient ways to manage your APIs. Throughout this post, I will take a first look at what Azure API Center is and what it offers.


 



You can find the sample code in this GitHub repository.



 


Prerequisites


 


There are a few prerequisites to use Azure APIC effectively:

  • An Azure subscription

  • Azure CLI with the API Center extension

  • Visual Studio Code with the Azure API Center extension (plus the REST Client and Kiota extensions used later in this post)


API Center instance provisioning


 


There are three ways to provision an APIC instance:

  • Azure Portal

  • Azure CLI

  • Bicep (or other IaC templates)


I’m not going to discuss how to provision an APIC instance in this article, but here’s a reference so you can do it yourself through Bicep – Azure API Center Sample


 


Register APIs to APIC


 


The purpose of using APIC is to manage your company’s APIs in a centralised manner. From design to deployment, APIC tracks the entire history. To register your APIs to APIC, you can use either Azure CLI or the Azure Portal.


 


Let’s say there’s a weather forecast API you have designed and developed. You have an OpenAPI document for the API, but it hasn’t been implemented yet. Let’s register the API to APIC.


 


 

az apic api register \
    -g "my-resource-group" \
    -s "my-api-center" \
    --api-location ./weather-forecast.json

 


 


Registering API through Azure CLI


 


If you want to register another API through the Azure Portal, you can do it by following the official documentation.


 


Registering API through Azure Portal


 


Import APIs from API Management to APIC


 


If you already have working APIs in Azure API Management (APIM), you can import them to APIC through Azure CLI. But it requires a few more steps.


 




  1. First of all, you need to activate Managed Identity on the APIC instance. It can be either a system identity or a user identity, but I’m going to use the system identity for now.

    az apic service update \
        -g "my-resource-group" \
        -s "my-api-center" \
        --identity '{ "type": "SystemAssigned" }'
    



  2. Then, get the principal ID of the APIC instance.

    APIC_PRINCIPAL_ID=$(az apic service show \
        -g "my-resource-group" \
        -s "my-api-center" \
        --query "identity.principalId" -o tsv)
    



  3. Now, grant the APIC instance the API Management Service Reader role on the APIM instance.

    APIM_RESOURCE_ID=$(az apim show \
        -g "my-resource-group" \
        -n "my-api-management" \
        --query "id" -o tsv)

    az role assignment create \
        --role "API Management Service Reader Role" \
        --assignee-object-id $APIC_PRINCIPAL_ID \
        --assignee-principal-type ServicePrincipal \
        --scope $APIM_RESOURCE_ID
    



  4. And finally, import APIs from APIM to APIC.

    az apic service import-from-apim \
        -g "my-resource-group" \
        -s "my-api-center" \
        --source-resource-ids "$APIM_RESOURCE_ID/apis/*"
    


    Importing API from APIM




 


Now you have registered and imported APIs to APIC. But registering those APIs does little by itself. What’s next, then? Let’s play around with those APIs in Visual Studio Code.


 


View APIs on Visual Studio Code – Swagger UI


 


So, what can you do with the APIs registered and imported to APIC? You can view the list of APIs in Visual Studio Code. First, you need to install the Azure API Center extension in Visual Studio Code.


 


Once you install the extension, you can see the list of APIs on the extension. Choose one of the APIs and right-click on it. Then, you can see the context menu. Click on the Open API Documentation menu item.


 



 


You will see the Swagger UI page, showing your API document. With this Swagger UI, you can test your API endpoints.


 


Swagger UI


 


Test APIs on Visual Studio Code – REST Client


 


Although you can test your API endpoints on the Swagger UI, you can also test them in a different way. For this, you need to install the REST Client extension on Visual Studio Code.


 


After you install the extension, choose one of the APIs and right-click on it. Then, you can see the context menu. Click on the Generate HTTP File menu item.


 



 


Within the HTTP file, you can actually test your API endpoints with different payloads.
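As a rough illustration, a generated HTTP file looks something like the sketch below; the endpoint URL and the payload shown here are hypothetical, not the actual output of the extension.

@endpoint = https://localhost:5051

### Get the weather forecast
GET {{endpoint}}/weatherforecast HTTP/1.1

### Create a forecast entry (hypothetical payload)
POST {{endpoint}}/weatherforecast HTTP/1.1
Content-Type: application/json

{
  "date": "2024-03-01",
  "temperatureC": 18,
  "summary": "Mild"
}

Each request is delimited by ###, and the REST Client extension lets you send it directly from the editor.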


 


HTTP file


 


Generate client SDK on Visual Studio Code – Kiota


 


You could write the client SDK by yourself, but it’s time-consuming and fragile because the API can change at any time. But what if somebody, or a tool, creates the client SDK on your behalf?


 


One of the greatest features the APIC extension offers is generating client SDKs. You can generate client SDKs for your APIs in different languages. Although the API itself has no implementation yet, you can still work with the client SDK because, through the SDK, you know what you need to send and what you will receive in return. For this, you need to install the Kiota extension on Visual Studio Code.


 


After you install the extension, choose one of the APIs and right-click on it. Then, you can see the context menu. Click on the Generate API Client menu item.


 


Generate API client


 


Because I have a Blazor web application, I’m going to generate a C# client SDK for the API. The Kiota extension finds all the API endpoints from APIC. You can choose all of them or just a few. Click the play button, and it generates the client SDK for you.


 


Kiota Explorer


 


Add the necessary information, like the class name and namespace of the client SDK, and the output folder. Finally, it asks which language to generate the client SDK in. There are currently nine languages available. I’m going to choose C#.


 


Kiota - choose language


 


The Kiota extension then generates the client SDK into the designated directory.
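If you prefer the command line, the standalone Kiota CLI can do the same job. A minimal sketch, reusing the OpenAPI document from earlier in this post; the class name, namespace, and output folder below are illustrative:

kiota generate \
    --openapi ./weather-forecast.json \
    --language CSharp \
    --class-name WeatherForecastClient \
    --namespace-name WebApp.Clients \
    --output ./src/WebApp/Clients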


 


API client SDK generated


 


Consume the generated client SDK within an application


 


Now the Kiota extension has generated the client SDK from APIC into my Blazor application. Because the SDK uses the Kiota libraries, I need to add the following Kiota NuGet packages to my Blazor web application.


 


 

dotnet add ./src/WebApp/ package Microsoft.Kiota.Http.HttpClientLibrary
dotnet add ./src/WebApp/ package Microsoft.Kiota.Serialization.Form
dotnet add ./src/WebApp/ package Microsoft.Kiota.Serialization.Json
dotnet add ./src/WebApp/ package Microsoft.Kiota.Serialization.Text
dotnet add ./src/WebApp/ package Microsoft.Kiota.Serialization.Multipart

 


 


Add the dependencies to the Program.cs file and update the Home.razor file to consume the client SDK. Then you will be able to see the result.


 


Pet Store - available pets


 


Your web application as the API consumer works perfectly with the client SDK generated from APIC.


 




 


So far, I’ve walked through how Azure API Center can handle your organisation’s APIs as a central repository, and played around with the APIC extension in VS Code. This post has shown how to provision an APIC instance, register and import APIs in various ways, test those APIs in VS Code, and generate client SDKs directly from VS Code.


 


As I mentioned at the beginning, taking care of many APIs in one place becomes crucial as your organisation grows. You might think that you don’t need APIC if your organisation’s API structure is relatively simple. However, even if your organisation is small, APIC will give you a better overview of your APIs and how they interconnect with each other.


 


More about Azure API Center?


 


If you want to learn more about APIC, the following links might be helpful.


 



 


This article was originally published on Dev Kimchi.

Enable Change Tracking service for Arc Onboarded machines (Windows and Linux)



Azure Arc is a multi-cloud and on-premises management platform that simplifies governance and management: it delivers a consistent way to manage your entire environment by projecting your existing non-Azure and/or on-premises resources into Azure Resource Manager.


Azure Arc has benefited many customers by providing a consistent multi-cloud and on-premises management platform: patch management using Azure Update Manager, security using Defender for Cloud, standardized role-based access control (RBAC), Change Tracking, and more, for resource types hosted outside of Azure such as servers, Kubernetes, and SQL Server. Today, we will discuss and enable the Change Tracking service for Arc-onboarded devices. To learn more about Azure Arc benefits and the onboarding process, refer to the link here.


Let’s look at what the Change Tracking service does before we activate it.


The Change Tracking and Inventory service tracks changes to files, the registry, Windows software, Linux software (software inventory), and services and daemons. It also supports recursion, which allows you to specify wildcards to simplify tracking across directories.


Note: Earlier, this feature was enabled using the Log Analytics (MMA) agent and an Azure Automation account. This has now been simplified with Azure Policy.


Let’s understand how to enable the Change Tracking and Inventory feature for an Arc-onboarded device.


Note: Please make sure that the Arc machines are registered and their status shows as Connected before you turn on the feature, as seen below.


 


[Screenshot: Arc machines shown with status Connected]


 


 


Go to Azure Policy > Definitions and filter the category by Change Tracking and Inventory. You need to assign all the built-in policies present in the Enable Change Tracking and Inventory for Arc-enabled virtual machines initiative, for Arc-enabled Windows and Linux devices respectively.


 




 



  1. Assign the Configure Windows Arc-enabled machines to install AMA for ChangeTracking and Inventory built-in policy (scope it to the subscription of the Arc-onboarded device). Make sure you have unchecked the parameter, verify the effect is DeployIfNotExists, and create a remediation task. This ensures that existing resources can be updated via a remediation task after the policy is assigned. Similarly, assign the Configure Linux Arc-enabled machines to install AMA for ChangeTracking and Inventory built-in policy for Arc-onboarded Linux devices. Once configured using Azure Policy, the Arc machine will have the AMA agent deployed. (A CLI sketch of this assignment appears after these steps.)


 



  2. Assign the Configure Change Tracking Extension for Windows Arc machines built-in policy (scope it to the subscription of the Arc-onboarded device). Follow the same steps as mentioned in point 1. Similarly, assign the Configure Change Tracking Extension for Linux Arc machines built-in policy for Arc-onboarded Linux devices. Once configured using Azure Policy, the Arc machine will have the Change Tracking extension deployed.


 




 



  3. Create a data collection rule.
    a. Download the CtDcrCreation.json file. Go to the Azure portal and, in the search box, enter Deploy a custom template. On the Custom deployment page > Select a template, select Build your own template in the editor. In Edit template, select Load file to upload the CtDcrCreation.json file (or simply paste the JSON into the template), and select Save. (A CLI sketch of this deployment appears after these steps.)




 




 


b. In the Custom deployment > Basics tab, provide the Subscription and Resource group where you want to deploy the data collection rule. The Data Collection Rule Name is optional. Provide the Workspace Resource ID of your Log Analytics workspace (you will find the workspace ID on the overview page of the Log Analytics workspace).




 


c. Select Review + create > Create to initiate the deployment of CtDcrCreation. After the deployment is complete, select CtDcr-Deployment to see the DCR name. Go to the newly created data collection rule (DCR) named Microsoft-Ct-DCR, click on JSON view, and copy the Resource ID.


 




 


 




 




 


d. Go to Azure Policy and assign the [Preview]: Configure Windows Arc-enabled machines to be associated with a Data Collection Rule for ChangeTracking and Inventory built-in policy (scope it to the subscription of the Arc-onboarded device). Make sure you have enabled the parameter, paste the Resource ID captured above, and create a remediation task. Similarly, assign the Configure Linux Arc-enabled machines to be associated with a Data Collection Rule for ChangeTracking and Inventory built-in policy for Arc-onboarded Linux devices. Once configured using Azure Policy, the Arc machine will be associated with the data collection rule.
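For those who prefer scripting the assignments from steps 1, 2, and d, here is a minimal Azure CLI sketch for the Windows AMA policy from step 1. It is a sketch only: the assignment name, scope, and location are illustrative, and a system-assigned identity is required because the policy effect is DeployIfNotExists.

# Resolve the built-in definition by its display name
DEF_ID=$(az policy definition list \
    --query "[?displayName=='Configure Windows Arc-enabled machines to install AMA for ChangeTracking and Inventory'].id" -o tsv)

# Assign it at subscription scope with a system-assigned identity
az policy assignment create \
    --name "ct-ama-windows" \
    --scope "/subscriptions/<subscription-id>" \
    --policy "$DEF_ID" \
    --mi-system-assigned \
    --location "eastus"

# Remediate existing Arc machines
az policy remediation create \
    --name "ct-ama-windows-remediation" \
    --policy-assignment "ct-ama-windows"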
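Similarly, the custom-template deployment from step (a) can be done with Azure CLI. A minimal sketch, assuming the template exposes parameters for the DCR name and the workspace resource ID; the parameter names below are assumptions, so check them against the template itself:

az deployment group create \
    --resource-group "my-resource-group" \
    --template-file ./CtDcrCreation.json \
    --parameters dataCollectionRuleName="Microsoft-Ct-DCR" \
                 workspaceResourceId="/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"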


 




After all the policies are configured and deployed, go to the Arc device; you will see that Change Tracking and Inventory is enabled.


 




 




 

Protecting Tier 0 the Modern Way



What should your Tier 0 protection look like?


DagmarHeidecker_0-1708345486896.png


 


Almost every attack on Active Directory you hear about today – no matter if ransomware is involved or not – (ab)uses credential theft techniques as the key factor for successful compromise. Microsoft’s State of Cybercrime report confirms this statement: “The top finding among ransomware incident response engagements was insufficient privilege access and lateral movement controls.”


 


Despite the fantastic capabilities of modern detection and protection tools (like the Microsoft Defender family of products), we should not forget that prevention is always better than cure (which means that accounts should be protected against credential theft proactively). Microsoft’s approach to achieving this goal is the Enterprise Access Model. It adds the aspect of hybrid and multi-cloud identities to the Active Directory Administrative Tier Model. Although first published almost 10 years ago, the AD Administrative Tier Model is still not obsolete. Not having it in place and enforced is extremely risky with today’s threat level in mind.


 


Most attackers follow playbooks and whatever their final goal may be, Active Directory Domain domination (Tier 0 compromise) is a stopover in almost every attack. Hence, securing Tier 0 is the first critical step towards your Active Directory hardening journey and this article was written to help with it.


 


AD Administrative Tier Model Refresher


The AD Administrative Tier Model prevents escalation of privilege by restricting what Administrators can control and where they can log on. In the context of protecting Tier 0, the latter ensures that Tier 0 credentials cannot be exposed to a system belonging to another Tier (Tier 1 or Tier 2).


 


Tier 0 includes accounts (Admins-, service- and computer-accounts, groups) that have direct or indirect administrative control over all AD-related identities and identity management systems. While direct administrative control is easy to identify (e.g. members of Domain Admins group), indirect control can be hard to spot: e.g. think of a virtualized Domain Controller and what the admin of the virtualization host can do to it, like dumping the memory or copying the Domain Controller’s hard disk with all the password hashes. Consequently, virtualization environments hosting Tier 0 computers are Tier 0 systems as well. This also applies to the virtualization Admin accounts.


 


The three Commandments of AD Administrative Tier Model


Rule #1: Credentials from a higher-privileged tier (e.g. Tier 0 Admin or Service account) must not be exposed to lower-tier systems (e.g. Tier 1 or Tier 2 systems).


Rule #2: Lower-tier credentials can use services provided by higher tiers, but not the other way around. E.g. Tier 1 and even Tier 2 systems still must be able to apply Group Policies.


Rule #3: Any system or user account that can manage a higher tier is also a member of that tier, whether originally intended or not.




 


Implementing the AD Administrative Tier Model


Most guides describe how to achieve these goals by implementing a complex cascade of Group Policies (the local computer configuration must be changed to prevent higher-tier administrators from exposing their credentials to a lower-tier computer). This comes with the downside that Group Policies can be bypassed by local administrators and that the tier-level restriction works only on Active Directory-joined Windows computers. The bad news is that there is still no click-once deployment for tiered administration, but there is a more robust way to get things done: implementing Authentication Policies. Authentication Policies provide a way to contain high-privilege credentials to systems that are only pertinent to selected users, computers, or services. With these capabilities, you can limit Tier 0 account usage to Tier 0 hosts. That’s exactly what we need to achieve to protect Tier 0 identities from credential-theft-based attacks.


 


To be very clear on this: with Kerberos Authentication Policies you can define a claim that determines from where a user is allowed to request a Kerberos Ticket Granting Ticket (TGT).


 


Optional: Deep Dive in Authentication Policies


Authentication Policies are based on a Kerberos extension called FAST (Flexible Authentication Secure Tunneling) or Kerberos Armoring. FAST provides a protected channel between the Kerberos client and the KDC for the whole pre-authentication conversation by encrypting the pre-authentication messages with a so-called armor key and by ensuring the integrity of the messages.


Kerberos Armoring is disabled by default and must be enabled using Group Policies. Once enabled, it provides the following functionality:



  • Protection against offline dictionary attacks. Kerberos armoring protects the user’s pre-authentication data (which is vulnerable to offline dictionary attacks when it is generated from a password).

  • Authenticated Kerberos errors. Kerberos armoring protects user Kerberos authentications from KDC Kerberos error spoofing, which can downgrade to NTLM or weaker cryptography.

  • Disables any authentication protocol except Kerberos for the configured user.

  • Compounded authentication in Dynamic Access Control (DAC). This allows authorization based on the combination of both user claims and device claims.


The last bullet point provides the basis for the feature we plan to use for protecting Tier 0: Authentication Policies.


 


Restricting user logon from specific hosts requires the Domain Controller (specifically the Key Distribution Center (KDC)) to validate the host’s identity. When using Kerberos authentication with Kerberos armoring, the KDC is provided with the TGT of the host from which the user is authenticating. That’s what we call an armored TGT, the content of which is used to complete an access check to determine if the host is allowed.


 




Kerberos armoring logon flow (simplified):



  1. The computer has already received an armored TGT during computer authentication to the domain.

  2. The user logs on to the computer:

    1. An unarmored AS-REQ for a TGT is sent to the KDC.

    2. The KDC queries for the user account in Active Directory and determines if it is configured with an Authentication Policy that restricts initial authentication that requires armored requests.

    3. The KDC fails the request and asks for Pre-Authentication.

    4. Windows detects that the domain supports Kerberos armoring and sends an armored AS-REQ to retry the sign-in request.

    5. The KDC performs an access check by using the configured access control conditions and the client operating system’s identity information in the TGT that was used to armor the request. If the access check fails, the domain controller rejects the request.



  3. If the access check succeeds, the KDC replies with an armored reply (AS-REP) and the authentication process continues. The user now has an armored TGT.


Looks very much like a normal Kerberos logon? Not exactly: The main difference is the fact that the user’s TGT includes the source computer’s identity information. Requesting Service Tickets looks similar to what we described above, except that the user’s armored TGT is used for protection and restriction.


 


Implementing a Tier 0 OU Structure and Authentication Policy


The following steps are required to limit Tier 0 account usage (Admins and Service accounts) to Tier 0 hosts:



  1. Enable Kerberos Armoring (aka FAST) for DCs and all computers (or at least Tier 0 computers).

  2. Before creating an OU structure similar to the one pictured below, you MUST ensure that Tier 0 accounts are the only ones having sensitive permissions on the root level of the domain. Keep in mind that all ACLs configured on the root-level of fabrikam.com will be inherited by the OU called “Admin” in our example.


[Diagram: example Tier 0 OU structure]


 


3. Create the following security groups:


– Tier 0 Users


– Tier 0 Computers


4. Constantly update the Authentication policy to ensure that any new T0 Admin or T0 service account is covered.


5. Ensure that any newly created T0 computer account is added to the T0 Computers security group.


6. Configure an Authentication Policy with the following parameters and enforce the Kerberos Authentication policy:

(User) Accounts: T0 Admin accounts

Conditions (Computer accounts/groups): (Member of each({ENTERPRISE DOMAIN CONTROLLERS}) Or Member of any({Tier 0 computers (FABRIKAM\Tier 0 computers)}))

User Sign On: Kerberos only


The screenshot below shows the relevant section of the Authentication Policy:


[Screenshot: the relevant section of the Authentication Policy]


 


Find more details about how to create Authentication Policies at https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/manage/how-to-configure-protected-accounts#create-a-user-account-audit-for-authentication-policy-with-adac.
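For illustration, here is a minimal PowerShell sketch of such a policy. The group and account names are examples, and the SDDL condition only covers the “Member of any” part of the table above, so treat it as a starting point rather than the exact policy from the screenshot.

# Requires the ActiveDirectory module (RSAT)
Import-Module ActiveDirectory

# Resolve the Tier 0 computers group and build the access-check condition
$t0Computers = Get-ADGroup "Tier 0 computers"
$sddl = "O:SYG:SYD:(XA;OICI;CR;;;WD;(Member_of_any {SID($($t0Computers.SID))}))"

# Create and enforce the Authentication Policy
New-ADAuthenticationPolicy -Name "Tier 0 Restriction" `
    -Enforce `
    -UserAllowedToAuthenticateFrom $sddl

# Apply it to a Tier 0 admin account (example account name)
Set-ADUser -Identity "t0admin" -AuthenticationPolicy "Tier 0 Restriction"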


 


Tier 0 Admin Logon Flow: Privileged Access Workstations (PAWs) are a MUST


As explained at the beginning of the article, attackers can sneak through an open (and even MFA-protected) RDP connection when the admin’s client computer is compromised. To protect against this type of attack, Microsoft has been recommending the use of PAWs for many years.


 


In case you ask yourself what the advantage of restricting the source of a logon attempt through Kerberos policies is: most certainly you do not want your T0 admins to RDP from their (potentially compromised) workplace computers to the DC. Instead, you want them to use a Tier 0 administrative jump host or, even better, a Privileged Access Workstation. With a compromised workplace computer as the source for T0 access, it would be easy for an attacker to either use a keylogger to steal the T0 admin’s password or simply sneak through the RDP channel once it is open (using a simple password or MFA doesn’t make a big difference for this type of attack). Even if an attacker were able to steal the credentials of a Tier 0 user, the attacker could only use those credentials from a computer defined in the claim. On any other computer, Active Directory will not issue a TGT, even if the user provides the correct credentials. This also gives you an easy way to monitor the declined requests and react properly.


 


There are too many ways of implementing the Tier 0 Admin logon flow to describe all of them in a blog. The “classic” (some call it “old-fashioned”) approach is a domain-joined PAW which is used for T0 administrative access to Tier 0 systems.


 


[Diagram: domain-joined PAW for Tier 0 administration]


The solution above is straightforward but does not provide any modern cloud-based security features.


“Protecting Tier 0 the modern way” not only refers to using Authentication Policies, but also leverages modern protection mechanisms provided by Microsoft Entra ID, like Multi-Factor Authentication, Conditional Access, or Identity Protection (to cover just the most important ones).


 


Our preferred way of protecting the Tier 0 logon flow is via an Intune-managed PAW and Azure Virtual Desktop, because this approach is easy to implement and perfectly pairs modern protection mechanisms with on-premises Active Directory:


[Diagram: Intune-managed PAW with Azure Virtual Desktop]


 


Logon to the AVD is restricted to come from a compliant PAW device only, Authentication Policies do the rest.


 


Automation through PowerShell


Still sounds painful? While steps 1 to 3 (enable Kerberos FAST, create the OU structure, create the Tier 0 groups) of Implementing a Tier 0 OU Structure and Authentication Policy are one-time tasks, steps 4 and 6 (keeping group membership and the Authentication Policy up to date) have turned out to be challenging in complex, dynamic environments. That’s why Andreas Lucas (aka Kili69) has developed a PowerShell-based automation tool which …



  • creates the OU structure described above (if it does not already exist)

  • creates the security groups described above (if they do not already exist)

  • creates the Authentication Policy described above (if it does not already exist)

  • applies the Tier 0 Authentication Policy to any Tier 0 user object

  • removes any object from the Tier 0 Computers group which is not located in the Tier 0 OU

  • removes any user object from the default Active Directory Tier 0 groups if the Authentication Policy is not applied (except the built-in Administrator, gMSAs, and service accounts)


 


Additional Comments and Recommendations


Prerequisites for implementing Kerberos Authentication Policies


Kerberos Authentication Policies were introduced in Windows Server 2012 R2, hence a Domain functional level of Windows Server 2012 R2 or higher is required for implementation.


 


Authentication Policy – special Settings


Require rolling NTLM secret for NTLM authentication


Configuration of this feature was moved to the properties of the domain in Active Directory Administrative Center. When enabled, for users with the “Smart card is required for interactive logon” checkbox set, a new random password will be generated according to the password policy. See https://learn.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/whats-new-in-credential-protection#rolling-public-key-only-users-ntlm-secrets for more details.


 


Allow NTLM network authentication when user is restricted to selected devices


We do NOT recommend enabling this feature because with NTLM authentication allowed the capabilities of restricting access through Authentication Policies are reduced. In addition to that, we recommend adding privileged users to the Protected Users security group. This special group was designed to harden privileged accounts and introduces a set of protection mechanisms, one of which is making NTLM authentication impossible for the members of this group. See https://learn.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/authentication-policies-and-authentication-policy-silos#about-authentication-policies for more details.


 


Have Breakglass Accounts in place


Break Glass accounts are emergency access accounts used to access critical systems or resources when other authentication mechanisms fail or are unavailable. In Active Directory, Break Glass accounts are used to provide emergency access to Active Directory in case normal T0 Admin accounts do not work anymore, e.g. because of a misconfigured Authentication Policy.


 


Clean Source Principle


The clean source principle requires all security dependencies to be as trustworthy as the object being secured. Implementation of the clean source principle is beyond the scope of this article, but explained in detail at Success criteria for privileged access strategy | Microsoft Learn.


 


Review ACLs on the Root-level of your Domain(s)


The security implications of an account having excessive privileges (e.g. being able to modify permissions at the root level of the domain) are massive. For that reason, before creating the new OU (named “Admin” in the description above), you must ensure that there are no excessive ACLs (Access Control Lists) configured on the root level of the domain. In addition to that, consider breaking inheritance on the OU called “Admin” in our example.