Analysing Web Shell Attacks with Azure Defender data in Azure Sentinel


This article is contributed. See the original author and article here.

Tom McElroy, Rob Mead – Microsoft Threat Intelligence Center

Thanks to Stefan Sellmer, Elia Florio, Ram Pliskin, Dotan Patrich & Yossi Weizman for making this blog possible.

 

On 22 September 2020, as part of IGNITE, we released a video demo showing how to use Microsoft 365 Defender and Azure Defender in combination with Azure Sentinel to investigate a web shell attack. Part of the attack took place against a web application hosted on Azure App Services, where the attacker gained access by using legitimate credentials.

 

This blog will use the same demo environment as the IGNITE demo, and will expand further on aspects related to Azure Defender, App Services and Azure Sentinel. Watching the IGNITE demo is not required to gain value from this blog.

 

This blog will cover:

  • Enabling App Services diagnostic logging
  • Finding App Services diagnostic logging in Azure Sentinel
  • Enabling Azure Defender alerts in Azure Security Center
  • Finding Azure Defender alerts in Azure Sentinel
  • Expanding Azure Defender alerts with App Services diagnostic logging in Azure Sentinel using Kusto queries

 

Enabling App Services Diagnostic Logging

App Services diagnostic logging is primarily used to debug web applications; when enabled, it provides up to eight different log types, depending on the host system. A complete list of log types and supported platforms can be found in a table toward the end of this page. Enabling diagnostic logging will incur additional storage costs.

 

Enabling this is simple; the animation below shows how to enable diagnostic logging for an App Service called “contoso-digital”. Diagnostic logs can be sent to several destinations; in the video the “Send to Log Analytics” option is chosen. By selecting the Log Analytics workspace where Azure Sentinel resides, the data will be available through Azure Sentinel. With diagnostic logs enabled and configured, data will start flowing into Log Analytics and will be available within Azure Sentinel. It may take up to 30 minutes for the first logs to appear.

 

This process will need to be completed for each App Service that you want to monitor.

Enabling App Services diagnostic logging

 

Finding App Services Logs in Azure Sentinel

App Services logs can be found within the Log Management category of Azure Sentinel; each App Service log type is stored in its own table, prefixed with AppService.

 

In the image below, the tables for the App Service logs are highlighted in red. This blog will use AppServiceHTTPLogs, which stores HTTP logs, and AppServiceAuditLogs, which stores authentication events for services such as FTP, to expand Azure Security Center (ASC) alerts. Both logs are available on Windows and Linux hosts.

App Services logs in Azure Sentinel

 

 

Querying AppServiceHTTPLogs provides information about requests that have been received by the server, with some basic information about the response, including the size and the HTTP status code returned. The image below shows some of the more useful fields for security analytics.

App Services HTTP logs example output
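As a quick orientation, a query like the sketch below projects those security-relevant fields. This is illustrative only; it assumes AppServiceHTTPLogs is already populated in your workspace, and the field names follow the schema shown above.

```kusto
//Illustrative only: surface the HTTP log fields most useful for security analytics
AppServiceHTTPLogs
| where TimeGenerated > ago(1d)
| project TimeGenerated, CsHost, CsUriStem, CsMethod, ScStatus, CIp, UserAgent, ScBytes
| take 20
```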

 

Enabling Azure Defender alerts in Azure Security Center

Azure Defender enables hybrid security management and threat protection, and can continuously assess your resources for potential security breaches. In the context of the IGNITE demo and this blog, an Azure Defender alert is triggered when a malicious attacker uploads a web shell to the web app over FTP using legitimate credentials stolen from the network. Another alert is triggered when the attacker spear phishes a user with a malicious script hosted on the compromised web app; this detection is possible because security alerts from Microsoft Defender for Office 365 are shared with Azure Defender.

 

To receive alerts, you must enable coverage of App Services; the animation below shows how to configure this.

Enabling App Services coverage

 

 

With coverage enabled, Azure Defender will begin monitoring your App Services for potentially malicious activity and raise these alerts inside the Azure Security Center (ASC) dashboard.

 

During the IGNITE demo the attacker used compromised credentials to upload a web shell and then staged a malicious document for use in a spear phishing campaign. Azure Defender has two correlation detections, “PHP file in upload folder” and “Phishing content hosted on Azure webApps”, which will find this activity. These detections are part of Azure Defender’s Cloud Smart Alert correlation; a full list of security alerts that may be generated by Azure Defender can be found here.

 

Azure Defender alerts can be accessed from the Azure Security Center tab within the Azure portal; once loaded, the overview screen appears as seen in the image below.

 

Security Center overview

 

 

Azure Security Center contains information on a few different things; the focus of this blog is on Threat protection, shown at the bottom of the above image. Threat protection is where alerts generated by Azure Defender are surfaced.

 

Clicking into threat protection will load current alerts. In this example, two alerts are shown; one which has detected a suspicious PHP file and another that has detected phishing activity.

ASC Security Alerts overview

 

PHP File in Upload Folder Alert

This detection triggers when a PHP file is uploaded to a common uploads folder, which generally happens when an arbitrary file upload vulnerability has been exploited. Azure Defender correlation rules monitor web activity; if a request to a PHP file within a common upload folder is observed, an alert is generated.

 

The image below shows the Azure Defender alert within Azure Security Center. The alert details where the suspicious file is on the App Service and the IP address and user agent that were used to access the file, alongside information covering when the alert was triggered and a brief description of what the alert detects.

 

PHP File in upload folder alert

 

Phishing content hosted on Azure webApps Alert

This detection uses correlation between Microsoft Defender for Office 365 and Azure Defender to detect suspicious activity and raise an alert. When a phishing attack takes place, Microsoft Defender for Office 365 will scan included links for potentially malicious activity. If a detection is made and the URL is hosted using Azure App Services then Microsoft Defender for Office 365 will share alert information with the owner of the App Service that has been utilised.

 

In the image below you can see the URL that was used in a phishing attack and details about when the activity occurred.

Phishing content hosted on Azure webapps alert

 

Finding ASC alerts in Azure Sentinel

Alert information may not automatically flow into Azure Sentinel from Azure Defender. To allow alerts to flow into Azure Sentinel, you must first enable the Azure Security Center connector. The animation below shows how to enable this connector in Azure Sentinel.

 

Enabling the Azure Security Center connector in Azure Sentinel

 

With the connector enabled, alert information will begin to flow into Azure Sentinel and can be found in the SecurityAlert table. The Kusto query below will collect Azure Defender alerts from Azure Security Center.

 

SecurityAlert
| where TimeGenerated > ago(30d)
| where ProviderName == "Azure Security Center"

 

 

query1.PNG

 

Expanding Azure Defender Alerts with App Services logging in Azure Sentinel

With App Services diagnostic logging enabled, Azure Defender configured to monitor App Services for threats, and the Azure Security Center connector enabled in Azure Sentinel, the data needed to expand on security alerts is now in Azure Sentinel.

 

PHP File in Upload Folder Expansion Queries

The Azure Defender alert provides details about the suspicious file, the Azure resource ID of the impacted App Service, the IP address of the suspected attacker, and the time that the alert was generated. This information has been passed through to Azure Sentinel. In this scenario the user interacting with the file in the alert is likely the attacker, so the following expansion queries will focus on extracting more about what the attacker has accessed.

 

Azure Security Center passes Azure Defender information in a JSON object; this will need to be parsed to extract the information from the alert. The Kusto query below will collect the security alerts and parse out the entities provided.

 

let timeRange = 30d;
SecurityAlert
| where TimeGenerated > ago(timeRange)
//Collect ASC alerts for PHP file in upload folder
| where ProviderName == "Azure Security Center"
| where AlertName == "PHP file in upload folder"
//Parse the Alert attack entities
| extend AtkEntities = parse_json(ExtendedProperties)
| extend Entities = parse_json(Entities)
//The shell location
| extend AlertPage = iff(AtkEntities['Sample URIs'] != "", tostring(AtkEntities['Sample URIs']), tostring(Entities[0]['Url']))
//The attacker IP and User Agent
| extend AlertIP = AtkEntities['Sample Source IP Addresses']
| extend AlertUA = AtkEntities['Sample User Agents']
| project AlertName, AlertTimeGenerated=TimeGenerated, StartTime, EndTime, AlertPage, AlertIP, AlertUA, ResourceId

 

Executing this query will provide output like below.

query2.PNG

 

It is now possible to establish the IP addresses that the attacker is using. The Azure Defender alert provides the first IP observed; however, it is possible that the attacker is using more than a single IP address. With App Services diagnostic logging enabled, a basic join on AppServiceHTTPLogs allows for summarisation of the IP addresses that have accessed the web shell. The Kusto query below expands the initial query to do this.

 

let timeRange = 30d;
SecurityAlert
| where TimeGenerated > ago(timeRange)
//Collect ASC alerts for PHP file in upload folder
| where ProviderName == "Azure Security Center"
| where AlertName == "PHP file in upload folder"
//Parse the Alert attack entities
| extend AtkEntities = parse_json(ExtendedProperties)
| extend Entities = parse_json(Entities)
//The shell location
| extend AlertPage = iff(AtkEntities['Sample URIs'] != "", tostring(AtkEntities['Sample URIs']), tostring(Entities[0]['Url']))
//The attacker IP and User Agent
| extend AlertIP = AtkEntities['Sample Source IP Addresses']
| extend AlertUA = AtkEntities['Sample User Agents']
| project AlertName, AlertTimeGenerated=TimeGenerated, StartTime, EndTime, AlertPage, AlertIP, AlertUA, ResourceId=tolower(ResourceId)
| extend ResourceId = replace(@"\s", "", ResourceId)
//Join to the web logs for app services
| join (
AppServiceHTTPLogs
| where TimeGenerated > ago(timeRange)
| extend ResourceId=tolower(_ResourceId)
//Defeat scanning
| where ScStatus == 200
| where CsMethod == "POST"
| summarize make_set(TimeGenerated), make_set(UserAgent), Visits=count() by CIp, CsUriStem, ResourceId
) on $left.AlertPage == $right.CsUriStem, ResourceId
| project PotentialAttackerIP=CIp, UserAgents=set_UserAgent, AccessTimes=set_TimeGenerated, AlertPage, Visits
| order by Visits

 

The output of this query will show other IP addresses that have also accessed the web shell script. Web shells are often scanned for by other attackers or security researchers, especially if the web shell is available publicly and the name is left as default.

 

Most web shells utilise HTTP POST requests for authentication or to receive commands from the attacker; to reduce the noise produced by scanning, the query is limited to connections that received a successful response code (200) and were made using an HTTP POST request. The image below shows that in the demo environment the attacker has only used a single IP address.

query3.PNG

 

Now that the attacker is identified and any additional IP addresses the attacker has used are known, an additional expansion query can be used to determine how the file was uploaded to the server.

 

The Kusto query below will collect information from AppServiceHTTPLogs and AppServiceAuditLogs to determine if an FTP upload took place. The App Service HTTP logs are used to determine the first time the attacker accessed the web shell; the time provided in the Azure Defender alert is when the detection was made, and not necessarily when the web shell was first accessed. The query then uses a time window join against the App Service audit logs to detect an upload over FTP within 2 days either side of the first access to the web shell; this can be configured by changing the lookup window.

 

let lookupWindow = 4d;
let lookupBin = lookupWindow / 2.0;
let timeRange = 30d;
SecurityAlert
| where TimeGenerated > ago(timeRange)
//Collect ASC alerts for PHP file in upload folder
| where ProviderName == "Azure Security Center"
| where AlertName == "PHP file in upload folder"
//Parse the Alert attack entities
| extend AtkEntities = parse_json(ExtendedProperties)
| extend Entities = parse_json(Entities)
//The shell location
| extend AlertPage = iff(AtkEntities['Sample URIs'] != "", tostring(AtkEntities['Sample URIs']), tostring(Entities[0]['Url']))
//The attacker IP and User Agent
| extend AlertIP = AtkEntities['Sample Source IP Addresses']
| extend AlertUA = AtkEntities['Sample User Agents']
| project AlertName, AlertTimeGenerated=TimeGenerated, StartTime, EndTime, AlertPage, AlertIP, AlertUA, ResourceId=tolower(ResourceId)
| extend ResourceId = replace(@"\s", "", ResourceId)
//Join to the web logs for app services to get the first access time
| join kind = leftouter (
AppServiceHTTPLogs
| where TimeGenerated > ago(timeRange)
| summarize make_list(CIp), make_list(TimeGenerated), make_list(UserAgent) by CsUriStem, CsHost, ResourceId=tolower(_ResourceId)
) on $left.AlertPage == $right.CsUriStem, ResourceId
| project AlertTime=AlertTimeGenerated, AlertName, AlertDomain=CsHost, AlertPage, AttackerIP=AlertIP, AttackerUA=AlertUA, ResourceId, list_CIp, list_UserAgent, list_TimeGenerated, HostCustomEntity=ResourceId, CsHost, CsUriStem
| mv-expand list_TimeGenerated, list_CIp, list_UserAgent
| extend TimeKey=bin(todatetime(list_TimeGenerated), lookupBin) | extend Start=list_TimeGenerated
//Order by time and then get the top result, this is the first access time
| order by TimeKey asc
| take 1
| join kind=inner (
//Now collect information from app services audit logs
AppServiceAuditLogs
| where TimeGenerated > ago(timeRange)
//Limit to the FTP protocol, this can be commented out to find AAD logins
| where Protocol == "FTP"
| extend ResourceId = tolower(_ResourceId)
| extend TimeKey = range(bin(TimeGenerated-lookupWindow, lookupBin), bin(TimeGenerated, lookupBin), lookupBin) | extend End=TimeGenerated
| mv-expand TimeKey to typeof(datetime)
) on TimeKey, ResourceId
| where End < AlertTime
| extend loginUser = pack(User, UserAddress)
| summarize LoginEvent=make_bag(loginUser) by AlertName, LoginTime=End, AlertHost=CsHost, AlertFile=CsUriStem, Protocol
| project-reorder Protocol, LoginTime, LoginEvent, AlertName, AlertHost, AlertFile
| order by LoginTime asc

 

Successful execution of the query will output results like those shown in the image below. The LoginEvent object provides the username and the IP address that was used by the attacker. In the demo environment the attacker IP matches the IP in the alert and the App Service HTTP logs, and the attacker has connected to the FTP server several times.

query4.PNG

 

With this additional information it is possible to block any additional IP addresses the attacker has been using.

 

A final step that can be taken is to further expand the alert with information from AzureActivity. This table in Azure Sentinel contains entries from the Azure Activity log, providing insight into any subscription-level or management-group-level events that have occurred in Azure. Now that the potential attacker IP addresses are known, the below Kusto query can be executed against AzureActivity to determine if any additional administrative actions have been taken.

 

let attackerIPs = dynamic(["xxx.xxx.xxx.xxx"]);
let timeRange = 30d;
AzureActivity
| where TimeGenerated > ago(timeRange)
| where CategoryValue =~ "Administrative"
| where CallerIpAddress has_any(attackerIPs)
| project TimeGenerated, CategoryValue, AttackerIP=CallerIpAddress, CompromisedAccount=Caller, AttackedResource=ResourceId, OperationName

 

If an Azure account has been compromised, this query will surface results like those shown in the image below. The account owned by “GemmaG” has been accessed from the attacker’s IP address and should be disabled until the password can be changed and its security verified.
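As a follow-up, sign-in activity for the compromised account can also be reviewed. The sketch below is not from the original demo: it assumes the Azure Active Directory sign-in logs (SigninLogs) connector is enabled in Azure Sentinel, and the user principal name is a hypothetical placeholder.

```kusto
//Illustrative sketch: review recent sign-ins for a suspected compromised account
SigninLogs
| where TimeGenerated > ago(30d)
| where UserPrincipalName =~ "gemmag@contoso.com" //hypothetical placeholder UPN
| summarize SignIns=count(), IPs=make_set(IPAddress) by UserPrincipalName, AppDisplayName
```

Sign-ins from the attacker IP, or from unfamiliar locations, would strengthen the case for disabling the account.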

query5.PNG

 

Phishing content hosted on Azure Web Apps Expansion Queries

This Azure Defender alert provides information relating to a file on the server that has been used in a phishing campaign; it is likely that most users accessing the file are victims of a phishing attack. As with the previous alert, Azure Security Center provides Azure Defender alert information in a JSON object that needs parsing. The Kusto query below can be used to parse out the fields required for further expansion.

 

let timeRange = 30d;
SecurityAlert
| where TimeGenerated > ago(timeRange)
| where ProviderName == "Azure Security Center"
| where AlertName == "Phishing content hosted on Azure webApps"
| extend atkentities = parse_json(ExtendedProperties)
//The alert has the domain and the path concatenated, this will separate the path and domain for further queries
| extend phishingPageLocation = tostring(atkentities['URL'])
| extend Path = extract(@"(?:[.][a-z]{2,4}(?:[.][a-z]{2,4})?(/.+[.][a-z]{1,4}))(?:\?|$)", 1, phishingPageLocation)
| extend Domain = extract(@"(^.+[.][a-z]{2,4}(?:[.][a-z]{2,4})?)/", 1, phishingPageLocation)
| extend Domain = replace(@"https?://", "", Domain)
| extend ResourceId = tolower(replace(@" ", "", ResourceId))
| project Domain, Path, ResourceId, AlertName, TimeGenerated

 

Below is example output for the query.

query6.PNG

 

The Path column provides the URL path of the file that was detected in a phishing campaign; combining this with the ResourceId, a join can be performed against AppServiceHTTPLogs, providing insight into the clients that have visited the phishing page.

 

let timeRange = 30d;
SecurityAlert
| where TimeGenerated > ago(timeRange)
| where ProviderName == "Azure Security Center"
| where AlertName == "Phishing content hosted on Azure webApps"
| extend atkentities = parse_json(ExtendedProperties)
//The alert has the domain and the path concatenated, this will separate the path and domain for further queries
| extend phishingPageLocation = tostring(atkentities['URL'])
| extend URI = extract(@"(?:[.][a-z]{2,4}(?:[.][a-z]{2,4})?(/.+[.][a-z]{1,4}))(?:\?|$)", 1, phishingPageLocation)
| extend Domain = extract(@"(^.+[.][a-z]{2,4}(?:[.][a-z]{2,4})?)/", 1, phishingPageLocation)
| extend Domain = replace(@"https?://", "", Domain)
| extend ResourceId = tolower(replace(@" ", "", ResourceId))
| project Domain, URI, ResourceId, AlertName, TimeGenerated
//Join with app service HTTP logs
| join (
AppServiceHTTPLogs
| extend ResourceId = tolower(_ResourceId)
| extend URI=CsUriStem
) on ResourceId, URI

 

The above query can now be expanded to identify different types of malicious activity. Links in phishing emails can perform a range of malicious actions: the user may be redirected to another web server for onward exploitation, presented with a phishing page that collects credentials, or served a payload by the actor.

 

When users are redirected to another server or resource, the response uses a 3xx HTTP status code; the most common are 301 (resource moved permanently) and 302 (resource moved temporarily). The line of Kusto below can be added to the previous query to identify these status codes.

 

| where ScStatus between(300 .. 399)

 

If the actor is attempting to collect credentials from the user, then it is likely they are using a phishing kit. Most phishing kits use the HTTP POST method to send credentials from the user to the server; the following line of Kusto can be added to the query to identify POST requests, which may indicate collection of credentials.

 

| where CsMethod == "POST"

 

In the demo scenario the actor is not attempting to redirect the user or use a phishing kit to collect credentials; it is likely that the actor is using the page to deploy a payload to the user.

 

Most payload delivery systems will contain logic to profile the potential victim to determine if they can be exploited, it is also common for actors to use allow lists and block lists to prevent security researchers collecting their payloads while ensuring they are delivered to unsuspecting victims.

 

Running the query above to extract alerts and perform a join on HTTP data shows the following results.

query7.PNG

 

The users that have accessed the page have all received a 200 status code, indicating that the request was successfully answered by the server. Looking at the data length of the response, stored in the ScBytes column, two distinct byte-size ranges can be seen. In this example scenario the actor is using basic allow listing to deploy the payload only to a certain user, the payload being a roughly 32 KB document file. With information about the payload file size it is possible to expand the query to extract only entries where the payload was deployed.
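Before choosing the minimum and maximum payload thresholds, the distribution of response sizes can be summarised to surface the distinct byte-size clusters. The sketch below is illustrative only; the URL path is a hypothetical placeholder and should be replaced with the path from the alert.

```kusto
//Illustrative sketch: bucket response sizes for the phishing page to find payload clusters
AppServiceHTTPLogs
| where TimeGenerated > ago(30d)
| where ScStatus == 200
| where CsUriStem == "/uploads/report.php" //hypothetical placeholder path
| summarize Requests=count() by SizeBucket=bin(ScBytes, 1000)
| order by SizeBucket asc
```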

 

let timeRange = 30d;
let min_payloadsize = 30000;
let max_payloadsize = 35000;
SecurityAlert
| where TimeGenerated > ago(timeRange)
| where ProviderName == "Azure Security Center"
| where AlertName == "Phishing content hosted on Azure webApps"
| extend atkentities = parse_json(ExtendedProperties)
| extend phishingPageLocation = tostring(atkentities['URL'])
| extend URI = extract(@"(?:[.][a-z]{2,4}(?:[.][a-z]{2,4})?(/.+[.][a-z]{1,4}))(?:\?|$)", 1, phishingPageLocation)
| extend Domain = extract(@"(^.+[.][a-z]{2,4}(?:[.][a-z]{2,4})?)/", 1, phishingPageLocation)
| extend Domain = replace(@"https?://", "", Domain)
| extend ResourceId = tolower(replace(@" ", "", ResourceId))
| project Domain, URI, ResourceId, AlertName, TimeGenerated
| join (
AppServiceHTTPLogs
| where ScStatus == 200
| summarize make_list(CIp), make_list(TimeGenerated), make_list(UserAgent), make_list(ScBytes) by _ResourceId, CsUriStem
| project list_CIp, list_TimeGenerated, list_UserAgent, list_ScBytes, ResourceId = tolower(_ResourceId), URI=CsUriStem
) on ResourceId, URI
| project AlertName, AlertTimeGenerated=TimeGenerated, Domain, URI, ResourceId, list_CIp, list_TimeGenerated, list_UserAgent, list_ScBytes
| mv-expand list_CIp to typeof(string), list_TimeGenerated to typeof(string), list_UserAgent to typeof(string), list_ScBytes to typeof(int)
| project-rename VictimIP=list_CIp, VictimVisitTime=list_TimeGenerated, VictimUserAgent=list_UserAgent
| order by VictimVisitTime desc
| extend PayloadDelivered = iff(list_ScBytes > min_payloadsize and list_ScBytes < max_payloadsize, 1, 0)
| where PayloadDelivered == 1
| extend visitDetail = pack(VictimVisitTime, list_ScBytes)
| summarize VictimVisits=make_bag(visitDetail), make_set(VictimUserAgent) by AlertTimeGenerated, Domain, URI, ResourceId, AlertName, VictimIP

 

The results of this query are below.

query8.PNG

 

As seen in the above image, the query provides the IP address of the victim, their user agent and the times that the phishing page was visited. This information can be used to identify potentially compromised client machines both within the organisation that owns the App Service and within external organisations that may have been phished by the actor abusing the legitimate domain.

 

Ingesting data from App Services diagnostic logging and combining it with Azure Defender alerts in Azure Sentinel has provided the ability to expand a single alert to identify both the potential attacker and client machines that may be victims.

 

Ingesting alerts from Azure Defender, and additional diagnostic data from Azure App Services, provided a method to enrich and investigate alerts in Azure Sentinel using data that otherwise would not have been available. Being able to merge logging from multiple Microsoft Security products enables a richer understanding of how an attack has unfolded.

 

The benefit of exposing security log data to the wider Microsoft Security ecosystem was further seen when alerts from Microsoft Defender for Office 365 were seamlessly shared with Azure Defender to generate alerts based on an ongoing phishing campaign.

 

The principles applied when writing the above hunting queries can be expanded to create further detections and insights. The latest hunting and detection queries developed by the Azure Sentinel team can be found on the Azure Sentinel GitHub.

 

If you’d like to learn more about how Microsoft is developing techniques to detect web shells, check out the following:

 

Integrating Azure Web Application Firewall with Azure Sentinel: https://techcommunity.microsoft.com/t5/azure-network-security/integrating-azure-web-application-firewall-with-azure-sentinel/ba-p/1720306

 

Hunting for web shells using Azure Sentinel: https://techcommunity.microsoft.com/t5/azure-sentinel/web-shell-threat-hunting-with-azure-sentinel-and-microsoft/ba-p/1448065

 

Recent GADOLINIUM activity that utilised web shells: https://www.microsoft.com/security/blog/2020/09/24/gadolinium-detecting-empires-cloud/

 

More examples of major attacks utilising web shells: https://www.microsoft.com/security/blog/2020/02/04/ghost-in-the-shell-investigating-web-shell-attacks/

 

 

Protect your web applications from all attack vectors with Barracuda and Microsoft Azure



Organizations are increasingly moving to Microsoft Azure, and the recent rise in distributed workforces has accelerated the pace. Just a few years ago, most applications were lifted and shifted from on-premises to the cloud before companies started to favor deploying cloud-native applications. However, with the COVID-19 pandemic forcing more employees to work from home, the lift-and-shift approach to the cloud has once again gained momentum.

 

Tushar Richabadas, Senior Product Manager – Application Security, Barracuda, explains how Barracuda WAF-as-a-Service, available in the Microsoft Azure Marketplace, addresses some of the security issues caused by a rapid shift to the cloud from on-premises:

 

Companies are moving to the cloud faster than ever to accommodate the surge of employees working from home. Applications that were previously accessible only over a company’s intranet must now be accessible from the internet. However, security can often take a back seat when providing a distributed workforce with fast application access – causing problems later.

 

One such problem is security gaps, which necessitate products that secure deployments throughout the journey to the cloud. From secure remote access to protection against application attacks and DDoS, enterprises need solutions that are easy to deploy and fully automated for rapid deployments.

 

Barracuda CloudGen WAF for Azure and Barracuda WAF-as-a-Service protect web, mobile, and API applications against all application attacks, including zero-day attacks, DDoS attacks, and malicious bot attacks. Their Advanced Bot Protection capability uses Azure Machine Learning to identify and block advanced almost-human bots and account-takeover attempts.

 

BarracudaWAF.PNG

 

Looking for a solution your organization will manage? Barracuda CloudGen WAF for Azure can offload authentication and help enforce two-factor authentication (2FA), single sign-on (SSO), and multi-domain SSO. It can also communicate with Azure Active Directory, Active Directory Federation Services, or any LDAP, RADIUS, SAML, or OpenID Connect solution to provide access control to applications.

 

If you have a distributed workforce that needs to access applications from geo-dispersed locations, Barracuda WAF-as-a-Service offers the same proven application security, but as a service hosted on Azure. Barracuda WAF-as-a-Service is deployed in every Azure region worldwide and allows users to add endpoints to the regions closest to their customers and/or workforce to provide fast and secure application access.

Barracuda solutions for Azure are built to work in a near-native manner. CloudGen WAF for Azure delivers complete application security and integrates closely with Azure services, including virtual machine scale sets, Azure Active Directory, log analytics, Event Hubs, and more. Barracuda WAF-as-a-Service is built on Azure and provides the same level of security, as a service.

 

As e-commerce and distributed workforces continue to grow around the world, ensure your organization provides customers and remote employees with secure, reliable access to the applications they need to keep business moving forward.

 

Learn more about Barracuda WAF-as-a-Service and see a product demo in this on-demand webinar featuring Keith Vidal, Director, Business Program Management, Microsoft; Nitzan Miron, VP Product Management, Application Security, Barracuda Networks; and Shab Jahan, Product Marketing Manager, Public Cloud, Barracuda Networks.

Security Controls in Azure Security Center: Manage Access and Permissions



Continuing our Secure Score series of blog posts, this post will discuss how to manage access and permissions from an Azure Security Center perspective and walk through the respective recommendations.

Access management for cloud resources is a critical function for any organization that is using the cloud. Using Role-based access control (RBAC), an authorization system built on Azure Resource Manager, is the best way to manage access to resources by creating role assignments. Azure role-based access control helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
In Azure Security Center, we have a dedicated security control named “Manage access and permissions”, which contains our best practices for different scopes.

 

Why is managing access and permissions so critical?

A core part of a security program is ensuring your users have the necessary access to do their jobs but no more than that: the least privilege access model. Instead of giving everybody unrestricted permissions in your Azure subscription or resources, you can allow only certain actions at a particular scope.
You can control access to your resources by creating role assignments with role-based access control (RBAC). A role assignment consists of three elements:

  • Security principal: the identity requesting access, such as a user, group, or service principal
  • Role definition: the set of permissions granted, based on built-in or custom roles (for example, Owner or Contributor)
  • Scope: the set of resources to which the permissions apply (for example, a subscription or management group)
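As an illustrative aside, and not part of the recommendations discussed below, role-assignment changes can also be audited from a Log Analytics workspace. The sketch assumes the Azure Activity connector is enabled, and the column names follow the updated AzureActivity schema.

```kusto
//Illustrative sketch: audit recent role-assignment changes via the Azure Activity log
AzureActivity
| where TimeGenerated > ago(30d)
| where OperationNameValue =~ "Microsoft.Authorization/roleAssignments/write"
| project TimeGenerated, Caller, CallerIpAddress, OperationNameValue, _ResourceId
```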

As you learned earlier in this blog series, recommendations are grouped into security controls.
In Azure Security Center, we have several recommendations based on four different scopes: subscriptions, Kubernetes, storage accounts, and Service Fabric resources. All of them are available as part of the “Manage access and permissions” security control, which has a max score of 4 points.

 

What’s included within the Manage access and permissions security control?

Let’s dive into the recommendations available in this control. Each one is backed by a built-in policy definition; all definitions can be found in the Azure Policy blade of the Azure Portal.

  • External accounts with write permissions should be removed from your subscription
  • External accounts with owner permissions should be removed from your subscription
  • Deprecated accounts should be removed from your subscription
  • Deprecated accounts with owner permissions should be removed from your subscription
  • There should be more than one owner assigned to your subscription
  • Service principals should be used to protect your subscriptions instead of Management Certificates [Preview]
  • Role-Based Access Control (RBAC) should be used on Kubernetes Services
  • Azure Policy Add-on for Kubernetes should be installed and enabled on your clusters [Preview]
  • Privileged containers should be avoided [Preview]
  • Least privileged Linux capabilities should be enforced for containers [Preview]
  • Immutable (read-only) root filesystem should be enforced for containers [Preview]
  • Usage of pod HostPath volume mounts should be restricted to a known list to restrict node access from compromised containers [Preview]
  • Running containers as root user should be avoided [Preview]
  • Containers sharing sensitive host namespaces should be avoided [Preview]
  • Container with privilege escalation should be avoided [Preview]
  • Service Fabric clusters should only use Azure Active Directory for client authentication
  • Storage account public access should be disallowed [Preview]

As listed above, a subset of these recommendations was recently released in “Preview”. Security Center does not include preview recommendations when calculating the Secure Score, but they are still available so you can explore and remediate unhealthy resources across your Azure subscriptions.

 

asc-manage-access-control.png

 

 

Category #1: Recommendations for Azure Subscriptions

An Azure subscription is the logical entity that provides entitlement to deploy and consume Azure resources. Like any other Azure resource, a subscription is something you can assign RBAC roles on, and Azure Security Center provides access and permissions recommendations for subscriptions too. These break down into three sub-categories: external accounts, deprecated accounts, and administrator accounts.

  • External accounts with permissions – External accounts in Azure AD are accounts with a different domain name than the one used for your corporate identities (such as Azure AD B2B collaboration accounts, Microsoft Accounts, etc.). Usually, these accounts are not managed or monitored by the organization and can be targets for attackers looking for ways to access your data without being noticed. The recommendations suggest removing external accounts with owner, write, or read permissions (classic-administrator permissions are considered part of the owner role).
  • Deprecated accounts – Security Center considers deprecated accounts to be accounts stored in Azure AD that have been blocked from signing in. As with external accounts, these can be targets for attackers looking for ways to access your data without being noticed, and they could even hold owner permissions on the subscription.
  • Administrator accounts – One of our best practices is to have more than one owner assigned to a subscription, to provide administrator access redundancy. Additionally, we recommend a maximum of 3 owners, to reduce the potential for breach through a compromised owner. So, the recommendation is to have between 2 and 3 owners per subscription. Currently, this recommendation only checks direct role assignments at the subscription level, not assignments inherited from a management group; security groups are not supported either. Lastly, to manage your subscriptions more securely, once you have decided on the required owners, we recommend using user accounts or service principals rather than management certificates.
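To see where you stand against the owner recommendations, you can export role assignments with the Azure CLI and count the Owner roles. The sketch below is illustrative and self-contained: it uses an inline sample of the JSON that `az role assignment list` would return (the account names are made up) and a simple `grep` instead of a full JSON parser.

```shell
# In practice, generate the file against your own subscription with:
#   az role assignment list --scope "/subscriptions/<sub-id>" > assignments.json
# Here we create a small sample for illustration (hypothetical accounts).
cat > assignments.json <<'EOF'
[
  {"principalName": "alice@contoso.com", "roleDefinitionName": "Owner"},
  {"principalName": "bob@contoso.com", "roleDefinitionName": "Owner"},
  {"principalName": "svc-deploy", "roleDefinitionName": "Contributor"}
]
EOF

# Count direct Owner assignments; the recommendation is between 2 and 3.
owners=$(grep -c '"roleDefinitionName": "Owner"' assignments.json)
echo "Owner assignments: $owners"
```

Note that, like the recommendation itself, a scope-level listing only surfaces direct assignments; owners inherited from a management group would need a separate query at that scope.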

The following recommendations belong to this category:

  • External accounts with write permissions should be removed from your subscription
  • External accounts with owner permissions should be removed from your subscription
  • Deprecated accounts should be removed from your subscription
  • Deprecated accounts with owner permissions should be removed from your subscription
  • There should be more than one owner assigned to your subscription
  • Service principals should be used to protect your subscriptions instead of Management Certificates [Preview]

Category #2: Recommendations for Kubernetes

To ensure your Kubernetes workloads are secure by default, Security Center provides Kubernetes-level policies and hardening recommendations, including enforcement options with Kubernetes admission control. We recently announced the deprecation of the preview AKS recommendation “Pod Security Policies should be defined on Kubernetes Services” and replaced it with 13 new recommendations for AKS workload protection, 7 of which are part of the discussed security control. These new recommendations can be audited or enforced and are based on the Azure Policy Add-on for Kubernetes. This add-on extends Gatekeeper v3 to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner.

The new recommendations allow you to:

  • Provide granular filtering of the actions users can perform, by using RBAC to manage permissions in Kubernetes Service clusters and configure relevant authorization policies.
  • Reduce the entry points attackers can use to spread malicious code or malware to compromised applications, hosts, and networks.
  • Reduce the attack surface of containers by restricting Linux capabilities and granting specific privileges to containers without granting all the privileges of the root user.
  • Prevent unrestricted host access (privileged containers have all the root capabilities of the host machine).
  • Protect containers from run-time changes such as malicious binaries being added to PATH.
  • Prevent an attacker from running as the root user and exploiting misconfigurations.
  • Protect against privilege escalation outside the container by preventing pod access to sensitive host namespaces in a Kubernetes cluster.
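Several of these recommendations correspond to fields in a pod's securityContext. As an illustrative sketch (the pod name and image are placeholders, not from the original article), a spec satisfying the non-root, privilege-escalation, read-only root filesystem, and least-privileged capabilities recommendations might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app            # hypothetical name
spec:
  containers:
    - name: app
      image: contoso/app:1.0    # hypothetical image
      securityContext:
        runAsNonRoot: true              # avoid running containers as root
        allowPrivilegeEscalation: false # block privilege escalation
        readOnlyRootFilesystem: true    # immutable (read-only) root filesystem
        capabilities:
          drop: ["ALL"]                 # least-privileged Linux capabilities
```

With the Azure Policy Add-on installed, pods that violate the enforced recommendations can be denied at admission time rather than merely flagged afterwards.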

During the preview phase, a few of the above recommendations are disabled by default. To enable them or adjust their settings to your needs, modify the “ASC Default” initiative assignment:

 

asc-aks-policies.png

 

The following recommendations belong to this category:

  • Role-Based Access Control (RBAC) should be used on Kubernetes Services
  • Azure Policy Add-on for Kubernetes should be installed and enabled on your clusters [Preview]
  • Privileged containers should be avoided [Preview]
  • Least privileged Linux capabilities should be enforced for containers [Preview]
  • Immutable (read-only) root filesystem should be enforced for containers [Preview]
  • Usage of pod HostPath volume mounts should be restricted to a known list to restrict node access from compromised containers [Preview]
  • Running containers as root user should be avoided [Preview]
  • Containers sharing sensitive host namespaces should be avoided [Preview]
  • Container with privilege escalation should be avoided [Preview]

Category #3: Recommendation for Storage accounts

By default, a storage account is configured to allow a user with the appropriate permissions to enable public access to a container.
When public access is allowed, a user with the appropriate permissions can modify a container’s public access setting to enable anonymous public access to the data in that container. Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data, but it can present security risks. Disallowing public access for the storage account prevents anonymous access to all containers and blobs in that account, and with it the data breaches that undesired anonymous access can cause. Microsoft recommends preventing public access to a storage account unless your scenario requires it.
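In an ARM template, the account-level switch behind this recommendation is the allowBlobPublicAccess property on the storage account resource. A minimal, illustrative fragment (the account name is a placeholder) might look like:

```json
{
  "type": "Microsoft.Storage/storageAccounts",
  "apiVersion": "2019-06-01",
  "name": "<storage-account-name>",
  "properties": {
    "allowBlobPublicAccess": false
  }
}
```

The same setting should be applicable to an existing account with the Azure CLI, e.g. `az storage account update --name <name> --resource-group <rg> --allow-blob-public-access false`.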

The following recommendation belongs to this category:

  • Storage account public access should be disallowed [Preview]

Category #4: Recommendation for Service Fabric

Azure Service Fabric offers a few authentication options to secure access to management endpoints from client to cluster, ensuring that only authorized users can access the cluster and its management endpoint. These options are certificate authentication and Azure Active Directory authentication.
Azure Security Center recommends performing client authentication only via Azure Active Directory.

The following recommendation belongs to this category:

  • Service Fabric clusters should only use Azure Active Directory for client authentication

 

Next Steps

In this blog we covered all recommendations related to the Manage access and permissions security control, from protecting subscriptions down to PaaS services like Kubernetes, Service Fabric, and Storage accounts. To gain credit and increase your overall Secure Score, you must remediate all recommendations within the control.

Additionally, a few recommendations, like the Kubernetes ones, are automatically configured with default parameters – make sure to review and customize their values via the Security Policy tab.

I hope you enjoyed this blog post and learned how this specific control can assist you in strengthening your Azure security posture.

  • The main blog post of this series (found here)
  • The DOCs article about Secure Score (this one)

Reviewers

Thanks to @Yuri Diogenes , Principal Program Manager in the CxE ASC team for reviewing this blog post.

How to Change collation for production Azure SQL databases


This article is contributed. See the original author and article here.

In this article I will explain how to change the collation of your production Azure SQL Databases without losing data/updates on the database, with minimal downtime.

 

Steps in brief:

 

  • Take a copy of the production database to a higher service tier ex: P11 on the same server.
  • Export the database to a bacpac using SQLPackage. (It’s better to use a VM in the same region as the SQL server for lower latency.)
  • Modify the collation by editing the model.xml file.
  • Import the database again to a higher service tier ex: P11 by using SQLPackage and overriding the model.xml path.
  • Change the database name, or modify app connection string to use the new database.

 

Things to consider:

 

  • Azure SQL Database only supports changing collation by modifying the model.xml file for .bacpac files.
  • Schedule a maintenance window for your application during the process and stop the workload to prevent losing updates on your database.
  • Do the export/import to/from databases with a higher service tier to speed up the operation.
  • Use a VM in the same region to reduce latency.
  • If your database is/was used for Data Sync Service, consider removing DSS object before exporting the database. Check: https://techcommunity.microsoft.com/t5/azure-database-support-blog/exporting-a-database-that-is-was-used-as-sql-data-sync-metadata/ba-p/369062
  • If your database is a part of Geo-DR replication, consider removing the Geo link and delete secondary database before starting the operation in order to create a new Geo replication and sync the new database with the new collation to the secondary server.

 

Steps in details:

 

1. Make sure you have a recent version of SQLPackage.

  • The latest SQLPackage version is available here.
  • SQLPackage installs to the C:\Program Files\Microsoft SQL Server\150\DAC\bin directory.

2. Start the maintenance window for your application.

 

3. Make a copy of the production database to a higher service tier ex: P11.

    How to copy SQL databases: https://docs.microsoft.com/en-us/azure/azure-sql/database/database-copy?tabs=azure-powershell#copy-using-the-azure-portal

 

4. Export the copied database using SQLPackage from a VM in the same region as your SQL server.

  • Open CMD > Navigate to the SQLPackage location, ex: “C:\Program Files\Microsoft SQL Server\150\DAC\bin”.

cmd.PNG

  • Run the below command to export the database:

         sqlpackage.exe /Action:Export /ssn:tcp:<ServerName>.database.windows.net,1433 /sdn:<DatabaseName> /su:<UserName> /sp:<Password> /tf:<TargetFile> /p:Storage=File

 

5. Change the database collation in the bacpac model.xml file:

  • Open the .bacpac file using WinRAR without de-compressing the file.

winrar_1.PNG

  • Copy the model.xml to a local folder, e.g. “C:\Temp\model.xml”.

winrar_2.PNG

  • Edit “C:\Temp\model.xml” with the desired collation and save the file.

model.PNG

For example:

From: <Property Name="Collation" Value="SQL_Latin1_General_CP1_CI_AS" />

To: <Property Name="Collation" Value="Hebrew_CI_AS" />
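If you prefer scripting the edit over hand-editing, a stream edit works too. The sketch below is illustrative and self-contained: it creates a one-line sample model.xml locally (your real file is the much larger one extracted from the .bacpac) and swaps the collation with sed.

```shell
# Create a minimal stand-in for the extracted model.xml (sample only;
# in the real procedure this is the file copied out of the .bacpac).
cat > model.xml <<'EOF'
<Property Name="Collation" Value="SQL_Latin1_General_CP1_CI_AS" />
EOF

# Replace the source collation with the target collation in place.
sed -i 's/SQL_Latin1_General_CP1_CI_AS/Hebrew_CI_AS/' model.xml

# Confirm the change took effect.
grep 'Collation' model.xml
```

Scripting the replacement avoids accidental edits elsewhere in the file and makes the change repeatable if you need to re-run the export.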

 

6. Run the import using sqlpackage.exe, and use the /ModelFilePath:C:\Temp\model.xml parameter to override the model.xml in the .bacpac.

For example:
sqlpackage.exe /Action:Import /tsn:<server>.database.windows.net /tdn:<database> /tu:<user> /tp:<password> /sf:"C:\Temp\database.bacpac" /ModelFilePath:C:\Temp\model.xml /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P11

 

7. When the import operation completes, change the database name or modify the application connection string to use the new database.

 

8. Stop the maintenance window for your application and resume the workload.

 

9. Delete the copied and old databases.

Design a hybrid Domain Name System solution with Azure

This article is contributed. See the original author and article here.

Domain Name System (DNS) has a bad reputation for always being at fault if there are any issues with system connectivity and availability. This critical service translates “friendly” system names to network IP addresses, much like looking up a physical address or phone number in a phone book, by using the person’s name. We use a system to manage this so we don’t need to keep and update this information on every requesting device. Add a hybrid environment into the mix and it becomes a little more complicated. How do you make sure that the phone book has entries for both your on-premises systems and those hosted in Azure, and how are they both updated?

 

The hybrid reference architecture “Design a hybrid Domain Name System solution with Azure” helps you design an architecture that can handle both environments. It also covers some common considerations for:
– scalability
– availability
– manageability
– security
– DevOps 
– and cost.

 

I really like how this article captures some key components. Azure Bastion is included, for secure remote access from a public internet connection without having RDP ports open. It also splits Azure into one connected and one disconnected subscription, depending on whether the Azure resources need connectivity back to on-premises resources or not.

 

Recommendations

The article explains recommendations for:
Extending AD DS to Azure – Using Active Directory Integrated DNS zones to host records for both on-premises and Azure workloads.
Split-brain DNS – Enabling users to resolve a system name to the relevant Application Gateway public IP address or an internal load balancer address, depending on where their request originates from.
Using private DNS zones for a private link – Resolving systems names to the IP address of the load balancer, for Azure systems in the same subscription, via Azure DNS private DNS zones.
Autoregistration – Enabling the autoregistration of virtual machines, when configuring a VNet link with a private DNS zone, to remove the need to do this manually when new VMs are provisioned.
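On the autoregistration point, the relevant switch lives on the virtual network link of an Azure private DNS zone. As an illustrative ARM template fragment (the zone name and VNet resource ID are placeholders), enabling it might look like:

```json
{
  "type": "Microsoft.Network/privateDnsZones/virtualNetworkLinks",
  "apiVersion": "2020-06-01",
  "name": "contoso.internal/vnet-link",
  "location": "global",
  "properties": {
    "registrationEnabled": true,
    "virtualNetwork": {
      "id": "<resource-id-of-virtual-network>"
    }
  }
}
```

With registrationEnabled set to true, A records for VMs in the linked VNet are created and removed automatically as machines are provisioned and deprovisioned, so the "phone book" stays current without manual updates.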

 

For more information:
1. Check out the article Design a hybrid Domain Name System solution with Azure, which also includes links to further detailed documentation.

2. Complete the Microsoft Learn module Implement DNS for Windows Server IaaS VMs

 

Experiencing Data Access issue in Azure Portal for Many Data Types – 09/28 – Investigating

This article is contributed. See the original author and article here.

Initial Update: Monday, 28 September 2020 23:31 UTC

We are aware of issues within Application Insights related to AAD. Customers in all Public & US Gov regions may experience data access issues and issues with Availability Tests, Live Metrics, Work Item Integration, Distributed Tracing and Log Search Alerting.

  • Next Update: Before 09/29 02:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Vincent