This article is contributed. See the original author and article here.
Azure Defender for IoT is a unified security solution for identifying IoT/OT devices, vulnerabilities, and threats. It enables organizations to secure entire IoT/OT environments, whether there is a need to protect existing IoT/OT devices or build security into new IoT innovations.
Azure Defender for IoT offers agentless network monitoring that can be deployed on physical hardware or in virtualized environments, plus a lightweight micro agent that supports standard IoT operating systems. OT (Operational Technology) refers to the systems that monitor industrial equipment rather than traditional IT network resources.
Azure Sentinel integrates with Defender for IoT to provide Security Orchestration, Automation, and Response (SOAR) capabilities, enabling automated response and prevention using built-in OT-optimized playbooks.
This blog post presents two topics to support enterprises and enable a quick start with IoT/OT:
Onboard an agentless Defender for IoT sensor for PoC/evaluation purposes.
Integrate Defender for IoT with Azure Sentinel for unified security management across the IoT/OT landscape.
Prerequisites and Requirements
This section describes the requirements to set up the environment.
A network switch that supports traffic monitoring via SPAN port.
Create or use an existing Azure IoT Hub service. IoT Hub is required to manage IoT devices and security.
An existing Azure Sentinel deployment for unified security management experience for Defender for IoT alerts.
Install the Defender for IoT Sensor
The installation takes a while and requires several reboots.
Before you can start the installation, you need to download the installation software. The ISO can be found in Azure Portal > Azure Defender for IoT > Set up a sensor > Purchase an appliance and install software > Download.
For my lab environment, I decided to use a VMware ESXi server. I created a guest VM with 4 CPU cores, 8 GB of RAM, a 128 GB hard drive, and 2 virtual network cards for the sensor. One virtual card will later be used for the management interface, and the second one for the SPAN port. I prepared the environment for my lab as follows:
For installing the sensor, I attached the downloaded ISO to the sensor guest VM to kick off the installation.
For the initial configuration, select a language.
Select the SENSOR-RELEASE-<version> Office installation option.
Configure the architecture and the network properties.
Use eth0 for the management network (interface) and eth1 for the input interface (SPAN port) and click “y” to accept the configuration.
After a few minutes, the CyberX and support credentials appear. Copy the passwords for later use.
Support: The administrative user for user management.
CyberX: The equivalent of root for accessing the appliance.
Select Enter to continue.
Once the installation is finished, you can access the management console via the IP address configured during the installation.
Once the sensor is installed, it's time to prepare it as a cloud-connected sensor. In this mode, the sensor sends its alerts to an Event Hub to share them with Azure services such as Azure Sentinel.
For the next step, you need an activation file. The activation file contains the instructions for the management mode of the sensor.
To get the activation file, perform the following steps.
From the Azure Portal, navigate to Defender for IoT > Start discovering your network / Onboard sensor.
Define a name for the sensor, choose the subscription, select On the cloud, select an IoT Hub or create one, enter a display name, and click Register.
The activation file is now generated. Download it and save it for the next step, activating the sensor in cloud-connected mode.
Activate the agentless Sensor
The following steps are required to activate the sensor and to perform the initial setup.
Log on to the management console from your browser using the CyberX credentials (including the password) that were generated during the installation.
After signing in, upload the activation file saved in the previous step on the Activation page, approve the Terms and Conditions, and click Activate.
After activation, I would recommend some best practices to follow:
Create a new admin account for management, and use the CyberX and support accounts only when needed.
Change the sensor’s name and, if required, the network settings in the network configuration settings.
Validate the Sensor
After logging in to the management console, the sensor can be validated.
I see the SPAN input is functional, and data is streamed from the mirror port.
The sensor also discovered the assets and built a network map based on the discovery.
Integrate with Azure Sentinel
As the sensor operates in cloud-connected mode, the integration with Azure Sentinel is a one-click experience.
To enable the data connector in Azure Sentinel, open the Azure Portal, navigate to Azure Sentinel > Data connectors, search for the Azure Defender for IoT connector, then click Open connector page.
Then click Connect to stream IoT Hub alerts into Azure Sentinel.
In the Next Steps selection, you can enable the Create incidents based on Azure Security Center for IoT alerts analytics rule to create incidents that Azure Sentinel can manage.
Additionally, use the Azure Defender for IoT Alerts workbook to gain insights into your IoT data workloads from Azure IoT Hub managed deployments, monitor alerts across all your IoT Hub deployments, detect devices at risk, and act upon potential threats.
With the data connector enabled, you can manage the Defender for IoT incidents in Azure Sentinel. Check the SecurityAlert table for all the alert data from Defender for IoT:
SecurityAlert | where ProductName == "Azure Security Center for IoT"
| sort by TimeGenerated
Or from the Azure Sentinel Incident dashboard.
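A variation of the query above that summarizes the same alerts by severity (assuming the standard SecurityAlert schema, where severity lives in the AlertSeverity column) can be handy for a quick overview:

```kusto
SecurityAlert
| where ProductName == "Azure Security Center for IoT"
| summarize AlertCount = count() by AlertSeverity
```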
Summary
In this blog post, I covered the deployment of an agentless Defender for IoT sensor and the integration with Azure Sentinel to manage security incidents.
Stay tuned for other IoT-related content in this channel.
Hello blog readers,
One of the recurring questions during my customer engagements on Azure Monitor is: how do I set the alert state to either Acknowledged or Closed with no manual intervention?
This question is broader and deeper than it appears. In fact, ITSM processes are often tied to the pure and simple alert state. State is just an alert property that can hold only one of the following three values at a given time: New, Acknowledged, or Closed. Should you want to read more about Azure Monitor alerts (including their states), you can find more information in the official Microsoft documentation at Overview of alerting and notification monitoring in Azure – Azure Monitor | Microsoft Docs.
Hence, when it comes to the state, we also need to consider other actors. In a simple scenario, where we have only notifications and no ITSM processes, we can automate alert state management using Azure Automation to fire a runbook that sets the alert state on a schedule. In contrast, for mature customers or highly integrated IT environments, where alerts are part of the incident management process(es), we must consider that alert states have to be managed in line with the ITSM integration. The diagram below describes the alert lifecycle when the ITSM integration is in place:
Azure Monitor <–> ITSM integration flow
So, provided that you have evaluated the best scenario according to the company's business needs, the idea shared here is very simple and works very well, especially with metric-based alerts, which follow a stateful approach.
With log-search-based alerts, the situation becomes a bit more complex since these alerts are stateless.
Looking at the alerts from Azure Monitor – Alerts blade,
Azure Monitor Alert Dashboard
you may have noticed that among all the columns we have one called Monitor condition, whose value is sometimes set to Fired or Resolved, and one called Signal type.
Let us start with Signal type. It stands for the repository where the data is stored (and hence the type of data we are going to use for the alert: Metrics or Logs). This is important because the type of data is what drives the value in the Monitor condition column, which shows the status of the object/aspect we created the alert for.
But why does it sometimes show as Resolved and sometimes not? The answer lies in the value reported by the Signal type column. When Signal type is Metrics or Health, we are using data whose delivery is guaranteed: that type of data will always be produced, collected, and stored in Azure, so we can check whether an issue has been resolved or not and set the Monitor condition property value accordingly. This certainty makes the alerts stateful. For more info, check the Understand how metric alerts work in Azure Monitor documentation at https://docs.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-metric-overview
In contrast, when it is Log, there is no assurance that we either collected or received the data. Think about an on-prem environment with several dependencies along the way to Azure Log Analytics. Think about what happens when we lose Internet connectivity, or the monitoring agent just stops, or the server is powered off. How can we make sure the issue is resolved if we have no data confirming it? This uncertainty makes log-based alerts stateless. Should you need more info, refer to the Log alerts in Azure Monitor documentation, specifically the State and resolving alerts paragraph.
With all that said, we now have a better idea of how to set the alert state in both scenarios (Metrics/Health and Logs).
Since we proved that with Metrics or Health as the signal type we always have the correct and up-to-date condition, we can simply look at the MonitorCondition property value and set the alert state to Closed. In that case, the simple automation runbook I am suggesting below can help:
<#
.SYNOPSIS
This sample automation runbook is designed to set the metric or health based alerts to Closed.
.DESCRIPTION
This sample automation runbook is designed to set the metric or health based alerts to Closed. It looks for all the alerts in the provided time range and for each,
it will check the value of the MonitorCondition property. Should it be equal to Resolved, we set the state property to Closed.
This runbook requires the Az.AlertsManagement PowerShell module which can be found at https://docs.microsoft.com/en-us/powershell/module/az.alertsmanagement/?view=azps-5.6.0
NOTE: The TimeRange parameter only accepts the values listed in the ValidateSet. This is in line with the underlying API requirement
documented at https://docs.microsoft.com/en-us/rest/api/monitor/alertsmanagement/alerts/getall#timerange
.PARAMETER TimeRange
Optional. The time range over which to query the alerts. Defaults to '1d'.
.EXAMPLE
.\Close-ResolvedAlerts.ps1 -TimeRange 1d
.NOTES
AUTHOR: Bruno Gabrielli
VERSION: 1.0
LASTEDIT: Dec 08th, 2020
#>
#Parameters
param(
[ValidateSet('1h', '1d', '7d', '30d')]
[string] $TimeRange = '1d'
)
#Initializing the connection to the Automation account
[String]$connectionName = "AzureRunAsConnection"
try
{
#Get the connection "AzureRunAsConnection "
$servicePrincipalConnection=Get-AutomationConnection -Name $connectionName
#"Logging in to Azure..."
$nullOut = (Add-AzAccount `
-ServicePrincipal `
-TenantId $servicePrincipalConnection.TenantId `
-ApplicationId $servicePrincipalConnection.ApplicationId `
-CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint `
-WarningAction:Ignore)
#"Setting context to a specific subscription"
$nullOut = (Set-AzContext -SubscriptionId $servicePrincipalConnection.SubscriptionId -WarningAction:Ignore)
$inactiveAlerts = (Get-AzAlert -MonitorCondition Resolved -State New -TimeRange $TimeRange)
if($inactiveAlerts)
{
foreach($alert in $inactiveAlerts)
{
Write-Output "Setting state to 'Closed' for alert '$($alert.Name)' which had the monitor condition set to '$($alert.MonitorCondition)' and the state set to '$($alert.State)'"
Update-AzAlertState -AlertId $alert.Id -State Closed
}
}
else
{
Write-Output "No inactive (Resolved) alerts in the specified '$($TimeRange)' period."
}
}
catch
{
if (!$servicePrincipalConnection)
{
$ErrorMessage = "Connection $connectionName not found."
throw $ErrorMessage
}
else
{
Write-Error -Message $_.Exception
throw $_.Exception
}
}
In contrast to Metrics or Health based alerts, Log based alerts need to be managed differently. Here we must first look at the MonitorService property value, making sure it is equal to "Log Analytics". Then we need to make an assumption based on the LastModified property value: given the nature of log-based alerts, if an alert has not been modified within the TimeRange we provided, we can close it; if the corresponding issue has not been resolved in the meantime, a new alert will fire soon. Below you can find another sample runbook for that purpose:
<#
.SYNOPSIS
This sample automation runbook is designed to set the Log Analytics based alerts to Closed.
.DESCRIPTION
This sample automation runbook is designed to set the Log Analytics based alerts to Closed. It looks for all the alerts in the provided time range and for each,
it will check the value of the MonitorService property. Should it be equal to Log Analytics and the alert not have been modified within TimeRange, we set the state property to Closed.
This runbook requires the Az.AlertsManagement PowerShell module which can be found at https://docs.microsoft.com/en-us/powershell/module/az.alertsmanagement/?view=azps-5.6.0
NOTE: The TimeRange parameter only accepts the values listed in the ValidateSet. This is in line with the underlying API requirement
documented at https://docs.microsoft.com/en-us/rest/api/monitor/alertsmanagement/alerts/getall#timerange
.PARAMETER TimeRange
Optional. The time range over which to query the alerts. Defaults to '1d'.
.EXAMPLE
.\Close-ResolvedAlerts.ps1 -TimeRange 1d
.NOTES
AUTHOR: Bruno Gabrielli
VERSION: 1.0
LASTEDIT: Jan 21st, 2021
#>
#Parameters
param(
[ValidateSet('1h', '1d', '7d', '30d')]
[string] $TimeRange = '1d'
)
#Initializing the connection to the Automation account
[String]$connectionName = "AzureRunAsConnection"
try
{
#Get the connection "AzureRunAsConnection "
$servicePrincipalConnection=Get-AutomationConnection -Name $connectionName
#"Logging in to Azure..."
$nullOut = (Add-AzAccount `
-ServicePrincipal `
-TenantId $servicePrincipalConnection.TenantId `
-ApplicationId $servicePrincipalConnection.ApplicationId `
-CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint `
-WarningAction:Ignore)
#"Setting context to a specific subscription"
$nullOut = (Set-AzContext -SubscriptionId $servicePrincipalConnection.SubscriptionId -WarningAction:Ignore)
$inactiveAlerts = (Get-AzAlert -MonitorService 'Log Analytics' -State New -TimeRange $TimeRange)
if($inactiveAlerts)
{
foreach($alert in $inactiveAlerts)
{
# $TimeRange is a string like '1d'; convert it to a TimeSpan before the date math
$cutoffSpan = switch ($TimeRange) {
'1h' { New-TimeSpan -Hours 1 }
'1d' { New-TimeSpan -Days 1 }
'7d' { New-TimeSpan -Days 7 }
'30d' { New-TimeSpan -Days 30 }
}
if($alert.LastModified -le ((Get-Date).Subtract($cutoffSpan)))
{
Write-Output "Setting state to 'Closed' for alert '$($alert.Name)' which had the monitor service equal to $($alert.MonitorService), monitor condition set to '$($alert.MonitorCondition)' and the state set to '$($alert.State)'"
Update-AzAlertState -AlertId $alert.Id -State Closed
}
}
}
else
{
Write-Output "No inactive (Resolved) alerts in the specified '$($TimeRange)' period."
}
}
catch
{
if (!$servicePrincipalConnection)
{
$ErrorMessage = "Connection $connectionName not found."
throw $ErrorMessage
}
else
{
Write-Error -Message $_.Exception
throw $_.Exception
}
}
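The stale-alert cutoff used in the second runbook ("has this alert been modified within the last TimeRange?") can be expressed language-agnostically. A minimal Python sketch of the same logic, assuming the same '1h'/'1d'/'7d'/'30d' values (the dates below are just illustrative):

```python
from datetime import datetime, timedelta

# Map the allowed TimeRange strings to real time spans,
# mirroring the ValidateSet in the runbook parameter.
SPANS = {
    "1h": timedelta(hours=1),
    "1d": timedelta(days=1),
    "7d": timedelta(days=7),
    "30d": timedelta(days=30),
}

def is_stale(last_modified: datetime, time_range: str, now: datetime) -> bool:
    """True when the alert was last modified before the cutoff,
    i.e. it has been quiet for the whole TimeRange and is safe to close."""
    return last_modified <= now - SPANS[time_range]

now = datetime(2021, 1, 21, 12, 0, 0)
print(is_stale(datetime(2021, 1, 20, 11, 0, 0), "1d", now))  # modified >1d ago -> close it
print(is_stale(datetime(2021, 1, 21, 9, 0, 0), "1d", now))   # modified recently -> keep it
```

The key point is that the TimeRange string must be converted to an actual time span before the date comparison; comparing against the raw string would fail.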
Both sample runbooks require the Az.AlertsManagement PowerShell module to be imported into your Automation account.
With all the ingredients and knowledge in place, you just have to import the two scripts as new runbooks:
Azure Automation Runbooks
and schedule them to run at your preferred interval, which can be different from the value you used as the TimeRange parameter:
Azure Automation Schedules
Thanks for reading this one till the end,
Bruno.
Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
This is the next installment of our blog series highlighting Microsoft Learn Student Ambassadors who achieved the Gold milestone and have recently graduated from university. Each blog in the series features a different student and highlights their accomplishments, their experience with the Student Ambassadors community, and what they're up to now.
Today we meet Ayush Chauhan, who is from India and graduated in December from JECRC University, located in the city of Jaipur in Rajasthan, India. All the students interviewed so far have been very forthcoming in sharing their history and their experience, but Ayush kicked off the interview with a declarative "I have lots of things to say", which was terrific!
Responses have been edited for clarity and length.
When you became a Student Ambassador in 2017, did you have specific goals you wanted to reach, such as attaining a particular skill or quality? What were they? Did you achieve them?
I applied on the Student Ambassador website. I was just obsessed with Microsoft from the Windows Lumia age, and I didn't know what the future was holding for me. My goal was to be able to write code or build software that will help people, or that will impact the developer and different communities. When I was creating a video to submit with the application, it was the first time I got my hands on Node.js and the Bot Framework. I learned it using Microsoft Docs, and I have never stopped learning since then. So yes, I was able to achieve my goal.
How has being in the Student Ambassador community impacted you in general besides helping you develop additional tech skills?
I landed my first internship in the second year just because of the bot I developed for my Student Ambassador application.
Being in this program, I got to learn from experts, and it has impacted my life because I was winning competitions, around 10 hackathons. It gave me huge confidence, too, that I can build anything I can think of and go on any stage to represent it. Microsoft has impacted me a lot in three years. It has accelerated my learning and ability to build anything I can imagine.
What were the accomplishments that you’re the proudest of? And please give us details.
I won the 2018 India Capgemini Tech Challenge. I was in my third year at university, and over 3,500 working professionals participated in the Azure category. We had to build a chatbot, so I built a SaaS to help book writers format or digitalize their writings without needing to wait for a person to write. It was the first time I realized that I can do anything, and age and appearance don't matter. The only thing that matters is hard work and practice.
I built a dataset of 100 women's colleges to help with diversity and inclusion in our events. It created an opportunity to invite 5000+ STEM students to participate in global events and feel included in the tech community.
I was proud of the projects that I built in hackathons, whether I won them or not. They involved everything from helping elderly people with IoT home automation, to a chatbot for newborn children's parents that can resolve their queries, and much more.
You graduated a few months ago. So what have you been up to since graduation?
After graduation I joined the School of Accelerated Learning, a startup [editor's note: India's first-ever hybrid coding bootcamp for millennials looking to build tech-focused careers]. I have been working to build quality, relevant education for the tech world. So it is exactly what I believe in, and exactly what this program has empowered me to do. We don't believe in theoretical curriculums or traditional classrooms. We believe in getting everyone ready for the future regardless of their diverse backgrounds. We teach them how to build industry-driven products by themselves. We explain concepts to them and try to build their mindset. We do activities that help them grow their innovation, and we build a tech-enabled environment that nurtures their growth mindset.
And I was working in the open-source community. Every time you go on GitHub and see a repository with a “deploy to Azure button” – I made that button, I redesigned it.
If you could redo things, is there anything you would have done differently while you were a Student Ambassador? Or would you have done things the same?
I don’t think I would try to redo something because what’s happened, happened. My failures made me what I am today.
If you were to describe the Student Ambassadors community to a student who is interested in joining, what would you say to them?
I'll say "Hey, do you want to make some cool like-minded friends from all over the world? Do you want to gain knowledge and experience the future of productivity with Microsoft? Do you want to have the benefits of Visual Studio Enterprise subscriptions? Also, do you want to learn from industry experts and get a Microsoft certification? Well, this program has covered all of these benefits in a single package, so you won't stop learning for lack of resources or exposure."
What advice would you give to new Student Ambassadors?
Always have the audacity to curiously ask questions. There is always a solution to even a hard error. For that you need to avoid a know-it-all mindset. Don't just react to knowledge you've heard or seen. Go a step ahead, try to learn it all, implement it all. Whatever you want to build, whatever you see, whatever you want to know or add to your skill set, you should just go and learn it all. Learning is something that doesn't expire with age.
What is your motto in life, your guiding principle?
I always go with the flow. I never say no to any opportunity, even if I know I'll fail. I wake up every day knowing there's something for me to learn. Nothing worth having comes easy. There's so much love and energy to get up and run again, even after you fall, if you love what you do. Also, I watch out for burnout. It's surprising how something you love so much can hurt you a lot. I take breaks to recover, so I play games and listen to music in that time.
API Management is a proxy to the backend APIs, so it's good practice to implement a security mechanism that provides an extra layer of protection against unauthorized access to APIs.
Configuring an OAuth 2.0 server in APIM merely enables the developer portal's test console, as APIM's client, to acquire a token from Azure Active Directory. In the real world, customers will have a different client app that will need to be configured in AAD to get a valid OAuth token that APIM can validate.
Prerequisites
To follow the steps in this article, you must have:
Azure subscription
Azure API Management
An Azure AD tenant
API Management supports several mechanisms for securing access to APIs, including the following examples:
Subscription keys: End users who need to consume the APIs must include a valid subscription key in HTTP requests when they make calls to those APIs.
Client certificates: In API Management you can configure clients to send certificates while making API calls, validate the incoming certificate, and check certificate properties against desired values using policy expressions.
Restrict caller IPs: Allows or denies calls from specific IP addresses and/or address ranges; this is applied in the <ip-filter> policy.
OAuth 2.0: Users/services acquire an access token from an authorization server via different grant methods and send the token in the Authorization header. The token can then be validated in the inbound policy.
Azure AD OAUTH2.0 authorization in APIM
OAuth 2.0 is the open standard for access delegation, which gives clients secure delegated access to resources on behalf of the resource owner.
Note: In the real world, you will have a different client app that will need to be configured in AAD to get a valid OAuth token that APIM can validate.
The diagram below depicts different client applications, like a web application/SPA, a mobile app, and a server process that may need to obtain a token in non-interactive mode. You must create a separate app registration for each client application and use it to obtain the token.
In this diagram we can see the OAuth flow with API Management, in which:
The Developer Portal requests a token from Azure AD using app registration client id and client secret.
In the second step, the user is challenged to prove their identity by supplying User Credentials.
After successful validation, Azure AD issues the access/refresh token.
The user makes an API call with the Authorization header, and the token gets validated against Azure AD by using the validate-jwt policy in APIM.
Based on the validation result, the user will receive the response in the developer portal.
Different OAuth grant types:
Authorization Code: The most used grant type, authorizing the client to access protected data from a resource server. Used by secure clients such as a web server.
Implicit: Intended for user-based clients that can't keep a client secret because all the application code and storage is easily accessible. Used by clients that can't protect a client secret/token, such as a mobile app or single-page application.
Client Credentials: A non-interactive way of obtaining an access token outside the context of a user. Suitable for machine-to-machine authentication where a specific user's permission to access data is not required.
Resource Owner Password Credentials: Uses the username and password of a resource owner (user) to authorize and access protected data from a resource server. For logging in with a username and password (only for first-party apps).
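For the client-credentials grant above, the token request is a plain form POST against the Azure AD v2.0 token endpoint. A minimal sketch of what that request body looks like; the tenant ID, application IDs, and secret are placeholders you must substitute with your own app registration values:

```python
from urllib.parse import urlencode

tenant_id = "<tenant-id>"  # placeholder: your AAD tenant
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

# Form body for a client-credentials token request (AAD v2.0 endpoint).
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "<client-app application id>",      # placeholder
    "client_secret": "<client-app secret>",          # placeholder
    "scope": "api://<backend-app id>/.default",      # placeholder backend-app scope
})

# POSTing this body (content type application/x-www-form-urlencoded) to token_url
# returns JSON containing the access_token to send in the Authorization header.
print(token_url)
print(body)
```

The `.default` scope asks for all application permissions granted to the client app on the backend app, which is the usual shape for machine-to-machine calls.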
High-level steps required to configure OAuth
To configure OAuth 2.0 with APIM, the following needs to be created:
Register an application (backend-app) in Azure AD to represent the protected API resource.
Register another application (client-app) in Azure AD which represent a client that wants to access the protected API resource.
In Azure AD, grant permissions to client(client-app) to access the protected resource (backend-app).
Configure the Developer Console to call the API using OAuth 2.0 user authorization.
Add the validate-jwt policy to validate the OAuth token for every incoming request.
Register an application (backend-app) in Azure AD to represent the API.
To protect an API with Azure AD, first register an application in Azure AD that represents the API. The following steps use the Azure portal to register the application.
In the Azure portal, search for Azure Active Directory and select App registrations to register an application:
Select New registration.
In the Name section, enter a meaningful application name that will be displayed to users of the app.
In the Supported account types section, select an option that suits your scenario.
Leave the Redirect URI section empty.
Select Register to create the application.
On the app Overview page, find the Application (client) ID value and record it for later.
Select Expose an API and set the Application ID URI with the default value. Record this value for later.
Select the Add a scope button to display the Add a scope page. Then create a new scope that’s supported by the API (for example, Files.Read).
Select the Add scope button to create the scope. Repeat this step to add all scopes supported by your API.
When the scopes are created, make a note of them for use in a subsequent step.
Register another application (client-app) in Azure AD to represent a client application that needs to call the API.
Every client application that calls the API needs to be registered as an application in Azure AD. In this example, the client application is the Developer Console in the API Management developer portal.
To register another application in Azure AD to represent the Developer Console:
Follow steps 1–6 from the previous section for registering the backend app.
Once the app is registered, on the app Overview page, find the Application (client) ID value and record it for later.
Create a client secret for this application to use in a subsequent step.
From the list of pages for your client app, select Certificates & secrets, and select New client secret.
Under Add a client secret, provide a Description. Choose when the key should expire and select Add. When the secret is created, note the key value for use in a subsequent step.
Authorization Code:
In the authorization code grant type, the user is challenged to prove their identity by supplying user credentials. Upon successful authorization, the token endpoint is used to obtain an access token.
The obtained token is sent to the resource server and gets validated before the secured data is sent to the client application.
Enable OAuth 2.0 in the Developer Console for Authorization Code Grant type
At this point, we have created the applications in Azure AD and granted the proper permissions to allow the client-app to call the backend-app.
In this demo, the Developer Console is the client-app. The steps below walk through how to enable OAuth 2.0 user authorization in the Developer Console:
In the Azure portal, browse to your API Management instance and select OAuth 2.0 > Add.
Provide a Display name and Description.
For the Client registration page URL, enter a placeholder value, such as http://localhost.
For Authorization grant types, select Authorization code.
Specify the Authorization endpoint URL and Token endpoint URL. These values can be retrieved from the Endpoints page in your Azure AD tenant.
Browse to the App registrations page again and select Endpoints.
Important
Use either the v1 or v2 endpoints. However, depending on which version you choose, the step below will differ. We recommend using the v2 endpoints.
If you use v1 endpoints, add a body parameter named resource. For the value of this parameter, use Application ID of the back-end app.
If you use the v2 endpoints, use the scope you created for the backend-app in the Default scope field. Also, make sure to set the accessTokenAcceptedVersion property to 2 in the application manifest of both the client app and the backend app in Azure AD.
Next, specify the client credentials. These are the credentials for the client-app.
For Client ID, use the Application ID of the client-app.
For Client secret, use the key you created for the client-app earlier.
Immediately following the client secret you will find the redirect_url.
Go back to your client-app registration in Azure Active Directory, under Authentication.
Paste the redirect_url under Redirect URI, check the issuer tokens option, then click the Configure button to save.
Now that you have configured an OAuth 2.0 authorization server, the Developer Console can obtain access tokens from Azure AD.
The next step is to enable OAuth 2.0 user authorization for your API. This enables the Developer Console to know that it needs to obtain an access token on behalf of the user, before making calls to your API.
Go to the APIs menu in your API Management instance.
Select the API you want to protect and go to Settings.
Under Security, choose OAuth 2.0, select the OAuth 2.0 server you configured earlier and select save.
Calling the API from the Developer Portal:
Now that the OAuth 2.0 user authorization is enabled on your API, the Developer Console will obtain an access token on behalf of the user, before calling the API.
Copy the developer portal URL from the Overview blade of the API Management instance.
Browse to any operation under the API in the developer portal and select Try it. This brings you to the Developer Console.
Note a new item in the Authorization section, corresponding to the authorization server you just added.
Select Authorization code from the authorization drop-down list; you will be prompted to sign in to the Azure AD tenant. If you are already signed in with the account, you might not be prompted.
After successful sign-in, an Authorization header is added to the request, with an access token from Azure AD. The following is a sample token (Base64 encoded):
Select Send to call the API successfully with a 200 OK response.
Validate-jwt policy to pre-authorize requests with AD token:
Why validate the JWT?
At this point we can call the APIs with the obtained bearer token.
However, what if someone calls your API without a token, or with an invalid token? For example, if you call the API without the Authorization header, the call will still go through.
This is because API Management does not validate the access token; it simply passes the Authorization header to the back-end API.
To pre-authorize requests, we can use the <validate-jwt> policy to validate the access token of each incoming request. If a request does not have a valid token, API Management blocks it.
We will now configure the validate-jwt policy in API Management.
Browse to APIs from the left menu of the API Management instance.
Click on All APIs and add the validate-jwt policy to the inbound policy section (it checks the audience claim in the access token and returns an error message if the token is not valid), then save it.
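A minimal validate-jwt policy might look like the following sketch (the tenant ID and backend app ID are placeholders; adjust the openid-config URL depending on whether you use v1 or v2 endpoints):

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401"
              failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <openid-config url="https://login.microsoftonline.com/{tenant-id-guid}/v2.0/.well-known/openid-configuration" />
    <required-claims>
        <claim name="aud">
            <value>api://{backend-app-id}</value>
        </claim>
    </required-claims>
</validate-jwt>
```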
Go back to the developer portal and call the API with an invalid token.
You will observe a 401 Unauthorized response.
Replace the token in the Authorization header with a valid one and call the API again to observe the 200 OK response.
Understanding <validate-jwt> Policy
In this section, we will focus on understanding how the <validate-jwt> policy works (the image on the right side is the decoded JWT token).
The validate-jwt policy validates a JWT (JSON Web Token) passed via the HTTP Authorization header; if validation fails, a 401 response is returned.
The policy requires an OpenID configuration endpoint to be specified via an openid-config element. API Management calls this endpoint when evaluating the policy, as it contains information used internally to validate the token. Please note: the OpenID config URL differs for the v1 and v2 endpoints.
The required-claims section contains a list of claims expected to be present on the token for it to be considered valid. The specified claim value in the policy must be present in the token for validation to succeed.
The claim value should be the Application ID of the registered Azure AD backend-app.
The following diagram shows what the entire implicit sign-in flow looks like.
As mentioned, the implicit grant type is more suitable for single-page applications. In this grant type, the user is asked to sign in by providing their credentials.
Once the credentials are validated, the token is returned directly from the authorization endpoint instead of the token endpoint.
The tokens are short-lived, and a fresh token can be obtained through a hidden request because the user is already signed in.
NOTE: To successfully request an ID token and/or an access token, the corresponding implicit grant flow must be enabled on the app registration (Azure portal > App registrations) by selecting ID tokens and Access tokens in the Implicit grant and hybrid flows section.
Implicit Flow – DEMO
The configuration for the implicit grant flow is similar to the authorization code flow; we just need to change the Authorization grant type to Implicit in the OAuth 2.0 tab in APIM, as shown below.
After the OAuth 2.0 server configuration, the next step is to enable OAuth 2.0 user authorization for your API under the APIs blade:
Now that OAuth 2.0 user authorization is enabled on your API, we can test the API operation in the Developer Portal with the authorization type Implicit.
After choosing the authorization type Implicit, you should be prompted to sign in to the Azure AD tenant. After successful sign-in, an Authorization header is added to the request with an access token from Azure AD, and the API should return a 200 OK response:
Client Credentials flow
The entire client credentials flow looks like the following diagram.
In the client credentials flow, permissions are granted directly to the application itself by an administrator.
The token endpoint is used to obtain a token using the client ID and client secret; the resource server receives the token and validates it before returning the secured data to the client.
Client Credentials – Demo
In the client credentials flow, the OAuth 2.0 configuration in APIM should have the Authorization grant type set to Client Credentials.
Specify the Authorization endpoint URL and Token endpoint URL with the tenant ID
The value passed for the scope parameter in this request should be the Application ID URI of the backend app, suffixed with .default: "api://<Backend-App ID>/.default"
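As a sketch, the token request body for this flow might be built as follows (all values in angle brackets are placeholders for this illustration; the actual HTTP call is shown only as a comment):

```python
# Sketch of the token request body for the client credentials flow (v2 endpoint).
# All values in angle brackets are placeholders.
tenant_id = "<tenant-id>"
backend_app_id = "<backend-app-id>"

token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
payload = {
    "grant_type": "client_credentials",
    "client_id": "<client-app-id>",
    "client_secret": "<client-secret>",
    # Application-level scope: the backend app's ID URI plus /.default
    "scope": f"api://{backend_app_id}/.default",
}
# The token would then be obtained with, e.g.:
#   response = requests.post(token_url, data=payload)
#   access_token = response.json()["access_token"]
```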
Now that you have configured an OAuth 2.0 authorization server, the next step is to enable OAuth 2.0 user authorization for your API.
Now that OAuth 2.0 user authorization is enabled on your API, we can test the API operation in the Developer Portal with the authorization type Client Credentials.
After choosing the authorization type Client Credentials in the Developer Portal, sign-in happens internally using the client ID and client secret, without user credentials.
After successful sign-in, an Authorization header is added to the request, with an access token from Azure AD.
Resource Owner Password Credentials (ROPC) flow
The Resource Owner Password Credential (ROPC) flow allows an application to sign in users by directly handling their password.
The ROPC flow is a single request: it sends the client identification and the user’s credentials to the Identity Provider, and receives tokens in return.
The client must request the user’s email address and password before doing so. Immediately after a successful request, the client should securely release the user’s credentials from memory.
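The single ROPC request can be sketched as follows (all values are placeholders; note that the client handles the raw password, which is why this flow is discouraged):

```python
# Sketch of the single ROPC token request body (v2 endpoint).
# All values are placeholders; ROPC handles the user's raw password,
# which is why the flow carries extra risk.
payload = {
    "grant_type": "password",
    "client_id": "<client-app-id>",
    "username": "user@contoso.example",
    "password": "<user-password>",
    "scope": "api://<backend-app-id>/.default",
}
```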
Resource Owner Password Credentials – Demo
Disclaimer: The new developer portal does not currently support the ROPC grant type; support is being worked on by the engineering team.
We will therefore cover the ROPC demo in the legacy developer portal.
Please note that the legacy portal is scheduled to be retired in 2023.
The OAuth 2.0 server configuration is similar to the other grant types; we just need to select Resource Owner Password as the Authorization grant type:
You can also specify the Azure AD user credentials in the Resource owner password credentials section:
Please note that this is not a recommended flow, as it requires a very high degree of trust in the application and carries risks not present in other grant types.
Now that you have configured an OAuth 2.0 authorization server, the next step is to enable OAuth 2.0 user authorization for your API.
Now that OAuth 2.0 user authorization is enabled on your API, browse to the legacy developer portal and navigate to the API operation.
Select Resource Owner Password from the authorization drop-down list
You will get a popup to enter the credentials, with an option to use the test user. If you check this option, the portal signs in the user by directly handling the password added during the OAuth 2.0 configuration, and generates the token when you click the Authorize button:
Alternatively, uncheck the test user option and enter a username and password to generate a token for a different AD user, then hit the Authorize button.
The access token would be added using the credentials supplied:
Select Send to call the API successfully.
Please note that the validate-jwt policy should also be configured to pre-authorize requests in the Resource Owner Password Credential flow.
Things to remember
The portal needs to be republished after API Management service configuration changes, such as updates to the identity provider settings.
Common issues when OAuth2.0 is integrated with API Management:
I. Problem faced while obtaining a token with the Client Credentials grant type:
Error Snapshot:
Solution:
This error indicates that the scope api://b29e6a33-9xxxxxxxxx/Files.Read is invalid.
The client_credentials flow requires application permissions, but Files.Read is a delegated (user) permission, so the scope was rejected.
To make it work, use the default application scope: "api://backendappID/.default"
II. Receiving “401 Unauthorized” response
Solution:
If you observe a 401 Unauthorized response returned by the validate-jwt policy, it is recommended to check the aud claim in the passed token against the validate-jwt policy.
You can decode the token at https://jwt.io/ and verify it against the validate-jwt policy used in the inbound section. For example:
The audience in the decoded token payload should match the claim section of the validate-jwt policy:
<claim name="aud">
    <value>api://b293-9f6b-4165-xxxxxxxxxxx</value>
</claim>
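Besides jwt.io, the (unverified) payload can be inspected programmatically; a small sketch follows, with a hypothetical token built inline for illustration:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload of a JWT to inspect its claims."""
    payload_b64 = token.split(".")[1]
    # Restore the base64 padding stripped by the JWT encoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hypothetical token (header.payload.signature) carrying an aud claim
sample_payload = base64.urlsafe_b64encode(
    json.dumps({"aud": "api://b293-9f6b-4165-xxxxxxxxxxx"}).encode()
).rstrip(b"=").decode()
token = f"eyJhbGciOiJSUzI1NiJ9.{sample_payload}.signature"
print(decode_jwt_payload(token)["aud"])  # api://b293-9f6b-4165-xxxxxxxxxxx
```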
III. Validate-JWT policy fails with IDX10511: Signature validation failed:
When we go to test the API and provide a JWT token in the Authorization header the policy may fail with the following error:
This URI points to the set of certificates used to sign and validate the JWTs. You may find that the keyId (in this sample "CtTuhMJmD5M7DLdzD2v2x3QKSRY") does exist there.
Something like this:
{
  "keys": [{
    "kty": "RSA",
    "use": "sig",
    "kid": "CtTuhMJmD5M7DLdzD2v2x3QKSRY",
    "x5t": "CtTuhMJmD5M7DLdzD2v2x3QKSRY",
    "n": "18uZ3P3IgOySln……",
    "e": "AQAB",
    "x5c": ["MII….."]
  }]
}
So it seems that it should be able to validate the signature.
If you look at the decoded jwt you may see something like this:
This requires extra checking that validate-jwt does not do. Tokens issued for the Graph API and SharePoint may contain a nonce property; a token used to call the Azure management API, however, will not.
The nonce is a mechanism that allows the receiver to determine whether the token was forwarded. The signature is computed over the transformed nonce and requires special processing, so if you try to validate the token directly, signature validation will fail.
The validate-jwt policy is not meant to validate tokens targeted for the Graph API or SharePoint. The best option is either to remove the validate-jwt policy and let the backend service validate the token, or to use a token targeted for a different audience.
IV. Validate-JWT policy fails with IDX10205: Issuer validation failed
Here is an example configuration a user might have added to their policy:
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
This error message gets thrown when the Issuer (“iss”) claim in the JWT token does not match the trusted issuer in the policy configuration.
Azure Active Directory offers two versions of the token endpoint, to support two different implementations, and exposes two different metadata documents to describe them. The OpenID config documents contain details about the AAD tenant endpoints and links to the signing keys that APIM will use to verify the signature of the token. Here are the details of those two endpoints and documents (for the MSFT AAD tenant):
The error usually occurs because of a mix between v1 and v2: requesting a token from the v1 endpoint while the <openid-config> setting points to the v2 endpoint, or vice versa.
To resolve the issue, make sure the <validate-jwt> policy loads the openid-config document matching the token. The easiest way is to toggle the openid-config URL within the policy; it will then move beyond this part of the validation logic.
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
Just switch the openid-config URL between the two formats, replacing {tenant-id-guid} with the Azure AD tenant ID, which you can collect from the Azure AD Overview tab in the Azure portal.
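For reference, the two openid-config URL formats are:

```
v1: https://login.microsoftonline.com/{tenant-id-guid}/.well-known/openid-configuration
v2: https://login.microsoftonline.com/{tenant-id-guid}/v2.0/.well-known/openid-configuration
```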
In the State of CSS 2020 survey, Tailwind CSS became the number 1 CSS framework in terms of satisfaction and interest over the last 2 years. It also received the award for the Most Adopted Technology, so it seems a lot of developers like this framework. Based on my experience, it can help us rapidly build UI by reducing complexity when styling.
State of CSS 2020 Survey — CSS Frameworks result
In this article, I will share my setup to use the Tailwind CSS in a SharePoint Framework (SPFx) project.
Prepare the SPFx Project
Prepare your SPFx project. I use a newly generated SPFx project (v1.11), but you can also use an existing SPFx project.
Install Modules
Install all modules needed by executing the command below:
Initialize Tailwind CSS by executing the command below:
npx tailwind init -p --full
The command will create the tailwind.config.js in the project’s base directory. The file contains the configurations, such as colors, themes, media queries, and so on.
The command will also create the postcss.config.js file. We need PostCSS because we will use Tailwind CSS as a PostCSS plugin.
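The generated postcss.config.js typically looks like the following, registering Tailwind CSS and Autoprefixer as PostCSS plugins:

```javascript
// postcss.config.js
module.exports = {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
};
```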
Inject Tailwind CSS Components and Utilities
We need to create a CSS file that will be used to import Tailwind CSS styles.
Create an assets folder in the project’s base directory
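The entry CSS file (e.g. assets/tailwind.css, the path is illustrative) pulls in the Tailwind layers via its directives:

```css
/* assets/tailwind.css */
@tailwind base;
@tailwind components;
@tailwind utilities;
```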
The code adds a tailwindcss subtask to the SPFx Gulp build task. It also purges (removes unused styles from) the Tailwind CSS for builds with the ship flag:
gulp build --ship
or
gulp bundle --ship
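As a sketch, the gulpfile.js wiring might look like the following (assuming gulp-postcss is installed; the file paths and task wiring here are illustrative, not the author's exact setup):

```javascript
// gulpfile.js — illustrative tailwindcss subtask hooked into the SPFx build
'use strict';
const build = require('@microsoft/sp-build-web');
const postcss = require('gulp-postcss');
const tailwind = require('tailwindcss');

const tailwindTask = build.subTask('tailwindcss', function (gulp) {
  // Compile assets/tailwind.css into assets/dist/tailwind.css
  return gulp
    .src('assets/tailwind.css')
    .pipe(postcss([tailwind('./tailwind.config.js')]))
    .pipe(gulp.dest('assets/dist'));
});
build.rig.addPreBuildTask(tailwindTask);

build.initialize(require('gulp'));
```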
Add Reference to The Generated Tailwind CSS
We need to add a reference to the generated Tailwind CSS by adding the import in your main webpart .ts file:
import '../../../assets/dist/tailwind.css';
That’s it!
Now you can use Tailwind CSS utilities in your SPFx project.
Result
You might be familiar with the result below, except it is no longer using styles from the 74-line scss/css file.
Below is the updated React component that’s using the Tailwind CSS utility classes for styling.