This article is contributed. See the original author and article here.

Tom McElroy, Rob Mead – Microsoft Threat Intelligence Center


Thanks to Andrey Karpovsky, Ely Abramovitch, Ram Pliskin, Roberto Rodriguez and Ross Bevington for making this blog possible.


 


On March 2nd 2021 we released a demo as part of Microsoft Ignite Spring 2021 showing an investigation that brought together incidents from Azure Defender for blob storage and Microsoft 365 Defender. If you haven’t seen this session yet, make sure to check it out before reading this blog to get the most value from the detection and hunting queries we’ll be discussing.


 


As part of this investigation, we saw how a custom Azure Sentinel detection allows us to identify additional files uploaded by the threat actor, and how we could correlate hashes across on-premises Microsoft 365 Defender data and Azure Defender cloud data.


 


This blog post will take an in-depth look at some of the log sources we used behind the scenes to connect these events. We’ll also cover in more detail how to analyse Blob and File storage logs. As well as looking at the log sources, we’ll explore some additional hunting queries and detections that can be added to your Azure Sentinel hunting arsenal. All of the queries within this post can be found linked at the bottom.


 


Blob and File Storage Overview


Blob and File storage on Azure can be accessed via Azure Storage Explorer, the Azure portal, or, in the case of File storage, directly as a mapped network drive. Both storage methods allow files to be uploaded, shared, and downloaded. After files are uploaded, a link to the file can be generated, meaning that files hosted using these methods can easily be shared through messaging systems. To learn more about the different Azure storage methods click here.


 


A threat actor with access to legitimate credentials can use blob or file storage to facilitate command and control, use the storage account to exfiltrate data, or as we saw in the Ignite demo, stage additional malicious files to use in their campaign.


 


Blob storage logs are stored in the StorageBlobLogs table and File storage logs in the StorageFileLogs table. These tables share the same schema, and all the analytics in this blog will merge them together using the union operator, so the analytics will work regardless of your storage choice.


 


The animation below shows how to enable the diagnostic setting for Blob storage, which will send file operations to Log Analytics. To enable the same logging for File storage, simply follow the same process but choose “File” under the storage account name on the Diagnostic settings page.


EnableAudit.gif


 


Accessing File Uploads


When file upload actions are performed, a log entry is created. For Blob storage, the operation name PutBlob indicates a file upload action. File storage uploads are logged differently: first a file container is created, then the bytes are written to the file. For the purpose of building file upload analytics, the PutRange operation can be used as an equivalent to PutBlob; both operations indicate that the file’s bytes were written to the storage account.


 


The query below will union the tables and then select file upload events.


 

union
StorageFileLogs,
StorageBlobLogs
| where OperationName =~ "PutBlob" or OperationName =~ "PutRange"
| project TimeGenerated, AccountName, Uri, ResponseMd5, Protocol, StatusText, DurationMs, CallerIpAddress, UserAgentHeader, Type
| take 10

 


 


Capture1.PNG


 


After execution, the query will project columns that will be valuable for security analytics. We will take a deeper look at those columns as we build out our hunting and detection queries. A sample of the output can be seen in the image above.


 


During the Ignite demo we used information from this table to share file hashes with Microsoft 365 Defender. These additional hashes were generated by a detection, which extracted additional files uploaded by the threat actor following the known-bad malicious file hash upload to Blob storage. To begin building this detection, we need to locate and parse malicious file upload alerts.


 


Parsing Malicious File Upload Data


When a file is uploaded to Blob or File storage, Azure Defender checks whether it has a known-bad file hash. If Azure Defender determines that the file is malicious based on its hash, it generates a security alert, which is logged to the SecurityAlert table in Azure Sentinel.


 


The query below can be used to find instances of this alert in Azure Sentinel.


 

SecurityAlert
| where TimeGenerated > ago(14d)
| where DisplayName has "Potential malware uploaded to"

 


The Entities field of the alert contains the uploader IP and country, alongside the file Name, Hash, and the Directory the file was uploaded to. The Entities field needs to be parsed so that we can use these elements.


 


The query below will extract the IP address and file information from the Entities field and will then re-join this information into a single row for each Azure Defender security alert.


 

//Collect the alert events
let alertData = SecurityAlert
| where TimeGenerated > ago(7d)
| where DisplayName has "Potential malware uploaded to"
| extend Entities = parse_json(Entities)
| mv-expand Entities;
//Parse the IP address data
let ipData = alertData
| where Entities['Type'] =~ "ip"
| extend AttackerIP = tostring(Entities['Address'])
| extend AttackerCountry = tostring(Entities['Location']['CountryName']);
//Parse the file data
let FileData = alertData
| where Entities['Type'] =~ "file"
| extend MaliciousFileDirectory = tostring(Entities['Directory'])
| extend MaliciousFileName = tostring(Entities['Name'])
| extend MaliciousFileHashes = tostring(Entities['FileHashes']);
//Combine the File and IP data together
ipData
| join (FileData) on VendorOriginalId
| summarize by TimeGenerated, AttackerIP, AttackerCountry, DisplayName, ResourceId, AlertType, MaliciousFileDirectory, MaliciousFileName, MaliciousFileHashes
//Create a type column so we can track if it was a File storage or blob storage upload
| extend type = iff(DisplayName has "file", "File", "Blob")

 


 


Capture2.PNG
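For reference, the same Entities parsing can be sketched outside Kusto. The Python below is a minimal illustration, assuming an Entities payload shaped like the alert’s JSON; the IP, country, and file values are made up for the example:

```python
import json

# Hypothetical Entities payload, shaped like the alert's JSON (values are illustrative)
entities = json.loads("""[
  {"Type": "ip", "Address": "203.0.113.7", "Location": {"CountryName": "Contosoland"}},
  {"Type": "file", "Name": "servicehost.exe", "Directory": "uploads",
   "FileHashes": [{"Algorithm": "MD5", "Value": "0123456789abcdef0123456789abcdef"}]}
]""")

# Split the entity list by type, mirroring the ip/file branches of the Kusto query
ip_entity = next(e for e in entities if e["Type"].lower() == "ip")
file_entity = next(e for e in entities if e["Type"].lower() == "file")

attacker_ip = ip_entity["Address"]
attacker_country = ip_entity["Location"]["CountryName"]
malicious_file = f'{file_entity["Directory"]}/{file_entity["Name"]}'
```

As in the Kusto query, the entity list is split by type and the pieces are then recombined into a single record per alert.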


Extracting and Preparing File Upload Data


We have extracted the threat actor’s IP and file hash information from the Entities field. Next, we will identify other upload activity from the malicious file uploader.


 


Earlier, we covered how PutBlob and PutRange operations can be used to find uploads. We can create a query that extracts file upload activity and summarises it. It can then be joined to the suspicious file upload alert, using the IP address of the malicious file uploader.


 


The query below will prepare the data stored in StorageBlobLogs and StorageFileLogs so it can be joined with the previous query. The query will parse the filename from the Uri field, decode the Base64-encoded file hash, and then summarise all file uploads for each client IP into a single row for joining. While not used in the Ignite demo, the query will also collect file deletions by matching DeleteFile and DeleteBlob operations.


 

union
StorageFileLogs,
StorageBlobLogs
| where TimeGenerated > ago(7d)
//File upload operations
| where OperationName =~ "PutBlob" or OperationName =~ "PutRange"
//Parse out the uploader IP
| extend ClientIP = tostring(split(CallerIpAddress, ":", 0)[0])
//Extract the filename from the Uri
| extend FileName = extract(@"/([\w-. ]+)\?", 1, Uri)
//Base64 decode the MD5 filehash, we will encounter non-ascii hex so string operations don't work
//We can work around this by making it an array then converting it to hex from an int
| extend base64Char = base64_decode_toarray(ResponseMd5)
| mv-expand base64Char
| extend hexChar = tohex(toint(base64Char))
| extend hexChar = iff(strlen(hexChar) < 2, strcat("0", hexChar), hexChar)
| extend SourceTable = iff(OperationName has "range", "StorageFileLogs", "StorageBlobLogs")
| summarize make_list(hexChar) by CorrelationId, ResponseMd5, FileName, AccountName, TimeGenerated, RequestBodySize, ClientIP, SourceTable
| extend Md5Hash = strcat_array(list_hexChar, "")
//Pack the file information then summarise into a ClientIP row
| extend p = pack("FileName", FileName, "FileSize", RequestBodySize, "Md5Hash", Md5Hash, "Time", TimeGenerated, "SourceTable", SourceTable)
| summarize UploadedFileInfo=make_list(p), FilesUploaded=count() by ClientIP
| join kind=leftouter (
        union
        StorageFileLogs,
        StorageBlobLogs
        | where TimeGenerated > ago(7d)
        | where OperationName =~ "DeleteFile" or OperationName =~ "DeleteBlob"
        | extend ClientIP = tostring(split(CallerIpAddress, ":", 0)[0])
        | extend FileName = extract(@"/([\w-. ]+)\?", 1, Uri)
        | extend SourceTable = iff(OperationName has "File", "StorageFileLogs", "StorageBlobLogs")
        | extend p = pack("FileName", FileName, "Time", TimeGenerated, "SourceTable", SourceTable)
        | summarize DeletedFileInfo=make_list(p), FilesDeleted=count() by ClientIP
) on ClientIP

 


 


Capture3.PNG
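The byte-by-byte hex conversion in the query above is a workaround for Kusto string handling; conceptually it is just a Base64 decode followed by a hex encode. As a sketch, the equivalent transformation in Python is a single call:

```python
import base64

def md5_b64_to_hex(response_md5: str) -> str:
    """Convert a Base64-encoded MD5 digest (as logged in ResponseMd5) to hex."""
    return base64.b64decode(response_md5).hex()

# Base64 encoding of the well-known MD5 digest of an empty file
print(md5_b64_to_hex("1B2M2Y8AsgTpgAmY7PhCfg=="))  # d41d8cd98f00b204e9800998ecf8427e
```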


 


 


Creating the Detection Query


Now that the alert information has been collated, and the additional file upload information has been summarised by IP address, it is possible to join the two queries.


 


The query below merges both queries into a single detection, identifying other potentially malicious files that were uploaded. The detection will also output an IPCustomEntity and a FileHashCustomEntity so that the results of the detection can be added to the Azure Sentinel investigation graph.


 

//Collect the alert events
let alertData = SecurityAlert 
| where TimeGenerated > ago(30min) 
| where DisplayName has "Potential malware uploaded to" 
| extend Entities = parse_json(Entities) 
| mv-expand Entities;
//Parse the IP address data
let ipData = alertData 
| where Entities['Type'] =~ "ip" 
| extend AttackerIP = tostring(Entities['Address']), AttackerCountry = tostring(Entities['Location']['CountryName']);
//Parse the file data
let FileData = alertData 
| where Entities['Type'] =~ "file" 
| extend MaliciousFileDirectory = tostring(Entities['Directory']), MaliciousFileName = tostring(Entities['Name']), MaliciousFileHashes = tostring(Entities['FileHashes']);
//Combine the File and IP data together
ipData 
| join (FileData) on VendorOriginalId 
| summarize by TimeGenerated, AttackerIP, AttackerCountry, DisplayName, ResourceId, AlertType, MaliciousFileDirectory, MaliciousFileName, MaliciousFileHashes
//Create a type column so we can track if it was a File storage or blob storage upload
| extend type = iff(DisplayName has "file", "File", "Blob") 
| join (
  union
  StorageFileLogs, 
  StorageBlobLogs 
  | where TimeGenerated > ago(30min)
  //File upload operations 
  | where OperationName =~ "PutBlob" or OperationName =~ "PutRange"
  //Parse out the uploader IP 
  | extend ClientIP = tostring(split(CallerIpAddress, ":", 0)[0])
  //Extract the filename from the Uri 
  | extend FileName = extract(@"/([\w-. ]+)\?", 1, Uri)
  //Base64 decode the MD5 filehash, we will encounter non-ascii hex so string operations don't work
  //We can work around this by making it an array then converting it to hex from an int 
  | extend base64Char = base64_decode_toarray(ResponseMd5) 
  | mv-expand base64Char 
  | extend hexChar = tohex(toint(base64Char))
  | extend hexChar = iff(strlen(hexChar) < 2, strcat("0", hexChar), hexChar) 
  | extend SourceTable = iff(OperationName has "range", "StorageFileLogs", "StorageBlobLogs") 
  | summarize make_list(hexChar) by CorrelationId, ResponseMd5, FileName, AccountName, TimeGenerated, RequestBodySize, ClientIP, SourceTable 
  | extend Md5Hash = strcat_array(list_hexChar, "")
  //Pack the file information then summarise into a ClientIP row
  | extend p = pack("FileName", FileName, "FileSize", RequestBodySize, "Md5Hash", Md5Hash, "Time", TimeGenerated, "SourceTable", SourceTable) 
  | summarize UploadedFileInfo=make_list(p), FilesUploaded=count() by ClientIP 
      | join kind=leftouter (
        union
        StorageFileLogs,
        StorageBlobLogs         
        | where TimeGenerated > ago(30min)         
        | where OperationName =~ "DeleteFile" or OperationName =~ "DeleteBlob"         
        | extend ClientIP = tostring(split(CallerIpAddress, ":", 0)[0])         
        | extend FileName = extract(@"/([\w-. ]+)\?", 1, Uri)
        | extend SourceTable = iff(OperationName has "File", "StorageFileLogs", "StorageBlobLogs")
        | extend p = pack("FileName", FileName, "Time", TimeGenerated, "SourceTable", SourceTable)         
        | summarize DeletedFileInfo=make_list(p), FilesDeleted=count() by ClientIP
        ) on ClientIP
  ) on $left.AttackerIP == $right.ClientIP 
| mv-expand UploadedFileInfo
| extend LinkedMaliciousFileName = UploadedFileInfo.FileName 
| extend LinkedMaliciousFileHash = UploadedFileInfo.Md5Hash     
| project AlertTimeGenerated = TimeGenerated, tostring(LinkedMaliciousFileName), tostring(LinkedMaliciousFileHash), AlertType, AttackerIP, AttackerCountry, MaliciousFileDirectory, MaliciousFileName, FilesUploaded, UploadedFileInfo 
| extend FileHashCustomEntity = LinkedMaliciousFileHash, HashAlgo = "MD5", IPCustomEntity = AttackerIP

 


 


Capture4.PNG


In the above output, each row represents a file upload event initiated by the threat actor’s IP. Each file hash can then be added to the Azure Sentinel investigation graph as a custom entity. In the Ignite demo, the hashes were then shared with Microsoft 365 Defender using a playbook. We won’t be covering playbooks in this blog, but you can find out more about playbooks here.


 


Creating a Detection Analytic


Finally, the detection query can be set up as a custom analytic rule. This will allow the detection to run periodically, checking for Azure Defender alerts. When a known-bad file hash incident from Azure Defender has been created, our analytic will run and produce a new alert containing the additional hashes uploaded.


 


Full details of how to create new analytics in Azure Sentinel can be found here. Once in the wizard for rule creation, the detection query can be added to the rule logic. The query will execute every 15 minutes in our example environment. As the query executes every 15 minutes, we can reduce the number of rows processed by setting the query to only collect the last 30 minutes of events.


 


Capture5.PNG


 


We want to generate an incident when this analytic returns results, so the Create incident option is toggled “on” under the Incident settings.


 


Capture6.PNG


Now, an automated response can be configured. In the Ignite demo we executed a playbook manually to share indicators using the Graph API. However, behind the scenes, this sharing had already been done using the Automated response options on the analytic rule. More information on automated threat response can be found here.


 


Whenever a malicious file is uploaded to Blob or File storage, the detection will collect additional files uploaded to storage by the threat actor’s IP address. You will receive an incident if additional files are uploaded by the same IP address within a 30-minute window of the original malicious file upload alert.


 


In the image below you can see that after the blob storage alert triggered (Incident ID 8) our detection ran and created a new incident as additional file uploads were seen (Incident ID 9).


 


Capture13.PNG


Now that we have a custom detection set up, we’ll look at some follow-on hunting queries to help uncover more of the compromise.


 


Hunting: Viewing Access Keys


Towards the end of the Ignite demo, we see that an analyst has investigated AzureActivity logs to determine other suspicious actions within our subscription. In their investigation, the analyst uncovers the malicious IP address using the GemmaG account to view blob storage keys.


 


AzureActivity is a platform log in Azure that provides insights into events that take place within the subscription. To access Blob or File storage, the threat actor had to obtain the access keys, enabling them to connect and upload files. Whenever keys for a storage account are accessed, a “List Storage Account Keys” operation is logged. An example of these events can be viewed with the following query.


 

AzureActivity
| where OperationName has "List Storage Account Keys"
| take 10

 


In the demo we see that the threat actor IP address has performed this operation using the GemmaG user account, allowing us to conclude that the threat actor compromised that account and used it to enable their cloud pivot.


 


While we have seen how AzureActivity can be used to enrich our investigation, we can also use this log to detect potentially suspicious Azure actions. The query below uses a list of Virtual Private Server (VPS) provider networks to check the logs for Administrative operations conducted from a known VPS range, allowing us to detect potentially suspicious activity.


 


Virtual Private Servers are often abused by threat actors: they are a cheap and reliable way of acquiring infrastructure for operations. VPSs are often used to host websites and services; however, they can also be configured to provide VPN and proxying capabilities. For most network environments, it is relatively unusual to see known VPS provider IP ranges authenticating or performing user actions within the network.


 


While the list of VPSs compiled in the below query is not exhaustive, it covers many of the most common ranges and is a good starting point when analysing network logs for unusual login or administrative behaviour.


 


The query below uses the ipv4_lookup plugin to evaluate IP addresses from our network logs against a list of known providers. If a match for a known VPS IP address is found, a row will be returned.


 

let IP_Data = (externaldata(network:string)
  [@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/VPS_Networks.csv"] with (format="csv"));
AzureActivity
| where CategoryValue =~ "Administrative"
| evaluate ipv4_lookup(IP_Data, CallerIpAddress, network, return_unmatched = false)
| summarize make_set(OperationNameValue), min(TimeGenerated), max(TimeGenerated) by CallerIpAddress, Caller
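Under the hood, the ipv4_lookup plugin is performing CIDR containment checks. A minimal Python sketch of the same logic, using a couple of made-up ranges in place of the CSV feed, looks like this:

```python
import ipaddress

# A couple of illustrative VPS ranges; the real query pulls a CSV feed of networks
vps_networks = [ipaddress.ip_network(n) for n in ("198.51.100.0/24", "203.0.113.0/24")]

def in_vps_range(caller_ip: str) -> bool:
    """Return True when the caller IP falls inside any known VPS network."""
    ip = ipaddress.ip_address(caller_ip)
    return any(ip in net for net in vps_networks)

print(in_vps_range("198.51.100.25"))  # True
print(in_vps_range("192.0.2.1"))      # False
```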

 


 


Hunting: File Distribution


In the Ignite demo we manually hunted for the compromised host using the Microsoft 365 Defender portal; this allowed us to make a connection between the servicehost file in cloud storage and the servicehost file on the host. Microsoft 365 Defender Advanced Hunting data could have allowed us to make this connection automatically.


 


Once malicious files have been identified in a storage account, we can use the Microsoft 365 Defender Advanced Hunting table DeviceFileEvents to determine if files were then spread to machines in the network. The DeviceFileEvents table contains information on file creation and modification, as well as other file system events.


 


Each time a file is created on a system, a log entry is made. As we have previously extracted the file’s MD5 hash from the storage logs, we can use the hash to join events from the DeviceFileEvents table with events from StorageFileLogs and StorageBlobLogs.


 


We need to prepare the data in DeviceFileEvents for joining. While there is lots of useful information in this table, we are only looking to make a link between a malicious file in storage and its presence on host. The query below will prepare the data in the DeviceFileEvents table for joining by extracting file create events which have an MD5 hash. It will then summarise the information about the device into a Kusto bag, creating a JSON structure we can expand later.


 

DeviceFileEvents
| where ActionType == "FileCreated"
| where isnotempty(MD5)
| extend p = pack("FileCreateTime", TimeGenerated, "Device", DeviceName, "DeviceId", DeviceId, "InitiatingProcess", InitiatingProcessFileName)
| summarize make_bag(p), dcount(DeviceName) by MD5

 


 


 


Capture8.PNG


 


The above output shows the data has been prepared successfully for joining. We can now add this query into our previous query which extracts files from the StorageFileLogs and StorageBlobLogs tables.


 


Once joined, we will uncover instances where risky files, uploaded to our storage account by the same malicious IP address as the known-bad file, have been downloaded to a machine within our network. In the Ignite demo, the threat actor was staging additional malicious tools to help further their campaign. This hunting query would allow us to identify additional compromised systems.


 

  union StorageFileLogs,
  StorageBlobLogs
  | where TimeGenerated > ago(7d)
  //File upload operations
  | where OperationName =~ "PutBlob" or OperationName =~ "PutRange"
  //Parse out the uploader IP
  | extend ClientIP = tostring(split(CallerIpAddress, ":", 0)[0])
  //Extract the filename from the Uri
  | extend FileName = extract(@"/([\w-. ]+)\?", 1, Uri)
  //Base64 decode the MD5 filehash, we will encounter non-ascii hex so string operations don't work
  //We can work around this by making it an array then converting it to hex from an int
  | extend base64Char = base64_decode_toarray(ResponseMd5)
  | mv-expand base64Char
  | extend hexChar = tohex(toint(base64Char))
  | extend hexChar = iff(strlen(hexChar) < 2, strcat("0", hexChar), hexChar)
  | extend SourceTable = iff(OperationName has "range", "StorageFileLogs", "StorageBlobLogs")
  | summarize make_list(hexChar) by CorrelationId, ResponseMd5, FileName, AccountName, TimeGenerated, RequestBodySize, ClientIP, SourceTable
  | extend Md5Hash = strcat_array(list_hexChar, "")
  | project-away list_hexChar, ResponseMd5
  | join (
    DeviceFileEvents
    | where TimeGenerated > ago(7d)
    | where ActionType =~ "FileCreated"
    | where isnotempty(MD5)
    | extend p = pack("FileCreateTime", TimeGenerated, "Device", DeviceName, "DeviceId", DeviceId, "FileName", FileName, "InitiatingProcess", InitiatingProcessFileName)
    | summarize make_bag(p), dcount(DeviceName) by MD5
  ) on $left.Md5Hash == $right.MD5
  | project TimeGenerated, FileName, FileHashCustomEntity=Md5Hash, AccountName, SourceTable, DevicesImpacted=dcount_DeviceName, Entities=bag_p

 


 


 


Capture9.PNG


 


Executing the query will return a row for each file seen in Blob or File storage. The row will indicate how many devices the file was downloaded to, alongside an Entities field that contains the impacted host information and the process that created the file. For example, if in this data we saw that winword.exe was used to create the file on disk, it’s possible the file was dropped by the Word process after the successful execution of malicious code.


 


Hunting: Implicated User Account


Storage logs do not record the user account that was used when the file upload event occurred, because a shared access token, which is not implicitly linked to a single user account, can be used. In instances where the threat actor uses the Azure Portal to upload and manipulate files within File or Blob storage, we can create a hunting query that links sign-in activity to the upload event.


 


The query below will look back 1 minute prior to the file upload and extract sign-in entries from SigninLogs based on the IP address. The query performs an additional check to ensure the user agents of the upload and the previous sign-in match (this can be commented out if you suspect the threat actor is spoofing their user agent). The look-back period can be adjusted at the top of the query, and the query can also be adapted to search for a specific file upload. Note that this query will only successfully link sign-in events if the upload used the Azure Portal.


 

let TimeRange = 7d;
//Period of time to look back in signin logs
let lookback = 1m;
let TargetFile = "mimikatz.exe";
union
StorageFileLogs,
StorageBlobLogs
| where TimeGenerated > ago(TimeRange)
//Collect file uploads
| where StatusText =~ "Success"
| where OperationName =~ "PutBlob" or OperationName =~ "PutRange"
| extend FileName = extract(@"/([\w-. ]+)\?", 1, Uri)
//Uncomment below to enable file specific matching
//| where FileName =~ TargetFile
//Caller IP has the port appended, remove it
| extend CallerIpAddress = tostring(split(CallerIpAddress, ":", 0)[0])
| extend FileUploadTime = TimeGenerated
| extend WindowStart = FileUploadTime - lookback
| join (
    SigninLogs
    | where TimeGenerated > ago(TimeRange)
    | project AzureLoginTime=TimeGenerated, UserPrincipalName, IPAddress, LoginUserAgent=UserAgent
) on $left.CallerIpAddress == $right.IPAddress
//Look back in the signinlogs for the most recent login
| where AzureLoginTime between (WindowStart .. FileUploadTime)
| project AccountUsed=UserPrincipalName, AzureLoginTime, OperationName, FileUploadPath=Uri, CallerIpAddress, LoginUserAgent, UploadUserAgent=UserAgentHeader
//Optional user agent check
| where LoginUserAgent =~ UploadUserAgent
//Pack and summarise the matching login events by the upload event
| extend p = pack("AccountUsed", AccountUsed, "AzureLoginTime", AzureLoginTime, "UserAgent", LoginUserAgent)
| summarize LoginEvents=make_bag(p) by FileUploadPath, OperationName, UploadUserAgent
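To make the windowed join concrete, here is a small Python sketch of the same correlation logic; the event tuples, timestamps, and account name are hypothetical:

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(minutes=1)

# Hypothetical events: (time, ip) for the upload; (time, ip, upn) for sign-ins
upload = (datetime(2021, 3, 2, 12, 0, 30), "203.0.113.7")
signins = [
    (datetime(2021, 3, 2, 12, 0, 0), "203.0.113.7", "gemmag@contoso.com"),  # in window
    (datetime(2021, 3, 2, 11, 0, 0), "203.0.113.7", "gemmag@contoso.com"),  # too early
]

def linked_signins(upload, signins, lookback=LOOKBACK):
    """Return sign-ins from the uploader IP within the lookback window before the upload."""
    upload_time, upload_ip = upload
    window_start = upload_time - lookback
    return [s for s in signins
            if s[1] == upload_ip and window_start <= s[0] <= upload_time]

print(linked_signins(upload, signins))
```

Only the sign-in inside the one-minute window survives the filter, mirroring the `between (WindowStart .. FileUploadTime)` check in the Kusto query.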

 


 


 


Hunting: MSTICPy TILookup


The threshold for a malicious file upload alert from Azure Defender is set high so that it does not generate large volumes of alerts, only alerting you to the most dangerous files. As part of the Ignite demo we saw how additional tools were uploaded by the attacker; while one of these triggered an alert, two of the tools did not reach the threshold for alerting. Once we identified additional files in our storage account, we could use a third-party service like VirusTotal to get additional insights into the files based on their hashes.


 


If the actor is using our blob storage for command and control, or if they are operating a large campaign, we may see many files and associated hashes, so we need a way to look them up programmatically.


 


A quick way to look up multiple hashes is by using the MSTICPy TILookup feature. MSTICPy is a Python library created by the Microsoft Threat Intelligence Center to help with cyber security data analysis. Once installed, MSTICPy provides access to a range of useful Python tools to manipulate, enrich and pivot on data. MSTICPy can be found here on GitHub.


 


One of the classes that is part of MSTICPy is TILookup. This class allows you to easily perform single IOC or multi-IOC lookups from Python. You can find more information about the TILookup class here, and full documentation for MSTICPy can be found here.


 


In this blog we are going to create a notebook that will connect to our Azure Sentinel Log Analytics environment, execute a Kusto query from Jupyter Notebooks using the MSTICPy query provider, and then send the hashes to VirusTotal to collect additional file enrichments. The complete notebook can be found here.


 


Upon execution the notebook will import the required packages and then allow you to select the workspace you want to investigate and the time range for the notebook to execute the query.


 


Capture10.PNG


 


Executing the next cell will authenticate with your workspace; simply copy and paste the code into the popup dialogue. Ensure you authenticate with a user account that has access to the Azure Sentinel Log Analytics instance.


 


After successful authentication we can execute a Kusto query using the MSTICPy query provider; this is a modified version of our earlier query and will extract information from Azure Sentinel where a file was uploaded to Blob or File storage.


 


Capture11.PNG


 


Now that we have the hash data from Azure Sentinel, we can use the MSTICPy TIProvider to look up the hashes in VirusTotal. We can de-duplicate the hashes and convert them to a list directly from the dataframe, invoke a new ti_lookup provider, and then perform the lookup. The screenshot below shows the code and the expected output.


 


Capture12.PNG
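The de-duplication step before the TI lookup can be sketched in plain Python; the hash values below are illustrative:

```python
# Hypothetical hash column pulled from the query results (duplicates included)
md5_hashes = [
    "d41d8cd98f00b204e9800998ecf8427e",
    "9e107d9d372bb6826bd81d3542a419d6",
    "d41d8cd98f00b204e9800998ecf8427e",
]

# dict.fromkeys de-duplicates while preserving first-seen order
unique_hashes = list(dict.fromkeys(md5_hashes))
print(unique_hashes)
```

The de-duplicated list can then be handed to the lookup provider in a single call rather than one request per row.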


  


Here we can see that two of the files seen in blob storage are marked as high severity. By enriching our storage files with information from VirusTotal we have gained additional insights into the malicious files uploaded to our blob or file storage. VirusTotal should not be used in place of an EDR solution, however this notebook can be used as part of an investigation.


 


Hunting: Suspicious File Operations


We have seen how to automatically enrich Azure Defender malicious file upload alerts, and how to investigate those incidents with hunting queries, but what happens if a file is uploaded that doesn’t match a known-bad hash?


 


There are a few approaches we can use to uncover potentially suspicious file activity without an alert triggering by looking for suspicious file operations.


 


File Exfiltration


When threat actors compromise File or Blob storage accounts, they may use them for file exfiltration, uploading stolen data from your environment to the storage account. A common behaviour when exfiltrating data is to delete it from the victim network after it has been retrieved. This reduces the intrusion’s footprint on the compromised network and removes evidence of the compromise that may later be found during an investigation.


 


We can use storage logs to identify files that were uploaded and then deleted within a 5-minute window of the upload. This will allow us to find situations where automated file exfiltration and collection have taken place. The query below will return a row when such activity is detected.


 

let threshold = 5m;
let timerange = 7d;
let StorageData =
union
StorageFileLogs,
StorageBlobLogs;
StorageData
| where TimeGenerated > ago(timerange)
| where StatusText =~ "Success"
| where OperationName =~ "PutBlob" or OperationName =~ "PutRange"
| extend Uri = tostring(split(Uri, "?", 0)[0])
| join (
    StorageData
    | where TimeGenerated > ago(timerange)
    | where StatusText =~ "Success"
    | where OperationName =~ "DeleteBlob" or OperationName =~ "DeleteFile"
    | extend Uri = tostring(split(Uri, "?", 0)[0])
    | project OperationName, DeletedTime=TimeGenerated, Uri
) on Uri
| project TimeGenerated, DeletedTime, OperationName, OperationName1, Uri, CallerIpAddress, UserAgentHeader, ResponseMd5, AccountName
| extend windowEnd = TimeGenerated + threshold
| where DeletedTime between (TimeGenerated .. windowEnd)
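The core of the query above is a window check: a delete for the same Uri within the threshold of the upload. A minimal Python sketch of that check, using hypothetical (uri, time) events:

```python
from datetime import datetime, timedelta

THRESHOLD = timedelta(minutes=5)

# Hypothetical upload and delete events keyed by Uri (SAS parameters already stripped)
uploads = {"https://acct.blob.core.windows.net/c/data.zip": datetime(2021, 3, 2, 10, 0)}
deletes = {"https://acct.blob.core.windows.net/c/data.zip": datetime(2021, 3, 2, 10, 3)}

def quick_deletions(uploads, deletes, threshold=THRESHOLD):
    """Return URIs that were deleted within the threshold window of their upload."""
    return [uri for uri, up_time in uploads.items()
            if uri in deletes and up_time <= deletes[uri] <= up_time + threshold]

print(quick_deletions(uploads, deletes))
```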

 


 


Next, we can create a query that detects manual file exfiltration activity. Typically, files in cloud storage are accessed by many people, or multiple times, after they are uploaded. If the threat actor is using a manual file exfiltration system, we may be able to detect it by looking for instances where a file is uploaded, accessed by a single client IP, and then deleted. The query below will return a result when it detects this behaviour.


 

let threshold = 5m;
let timeRange = 15d;
//Union the file and blob data
let StorageData = 
union
StorageFileLogs,
StorageBlobLogs;
//Get file and blob uploads
StorageData
| where TimeGenerated > ago(timeRange)
//File upload operations 
| where StatusText =~ "Success" 
| where OperationName =~ "PutBlob" or OperationName =~ "PutRange"
//Parse the URI to remove the parameters as they change per request 
| extend Uri = tostring(split(Uri, "?", 0)[0])
//Join with deletions, this will return 0 rows if there was no deletion 
| join (
   StorageData     
   |where TimeGenerated > ago(timeRange)    
   //File deletion operations     
   | where OperationName =~ "DeleteBlob" or OperationName =~ "DeleteFile"     
   | extend Uri = tostring(split(Uri, "?", 0)[0])     
   | project OperationName, DeletedTime=TimeGenerated, Uri, CallerIpAddress, UserAgentHeader
   ) on Uri 
| project UploadedTime=TimeGenerated, DeletedTime, OperationName, OperationName1, Uri, UploaderAccountName=AccountName, UploaderIP=CallerIpAddress, UploaderUA=UserAgentHeader, DeletionIP=CallerIpAddress1, DeletionUA=UserAgentHeader1, ResponseMd5
//Collect file access events where the file was only accessed by a single IP, a single downloader 
| join (
   StorageData 
   |where Category =~ "StorageRead" 
   |where TimeGenerated > ago(timeRange)
   //File download events 
   | where OperationName =~ "GetBlob" or OperationName =~ "GetFile"
   //Again, parse the URI to remove the parameters as they change per request 
   | extend Uri = tostring(split(Uri, "?", 0)[0])
   //Parse the caller IP as it contains the port 
   | extend CallerIpAddress = tostring(split(CallerIpAddress, ":", 0)[0])
   //Summarise the download events by the URI, we are only looking for instances where a single caller IP downloaded the file,
   //so we can safely use any() on the IP. 
   | summarize Downloads=count(), DownloadTimeStart=min(TimeGenerated), DownloadTimeEnd=max(TimeGenerated), DownloadIP=any(CallerIpAddress), DownloadUserAgents=make_set(UserAgentHeader), dcount(CallerIpAddress) by Uri 
   | where dcount_CallerIpAddress == 1
   ) on Uri 
| project UploadedTime, DeletedTime, OperationName, OperationName1, Uri, UploaderAccountName, UploaderIP, UploaderUA, DownloadTimeStart, DownloadTimeEnd, DownloadIP, DownloadUserAgents, DeletionIP, DeletionUA, ResponseMd5

Uploads from Suspicious IP Addresses


A threat actor is most likely to access the storage account from their own infrastructure. Earlier we built a small query to identify when VPS providers are used to access our storage keys; we can re-use components of that query to detect when a VPS IP address is used to upload files to blob storage.

let IP_Data = (externaldata(network:string)   [@"https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Sample%20Data/Feeds/VPS_Networks.csv"] with (format="csv"));
union StorageFileLogs,
StorageBlobLogs
| where TimeGenerated > ago(7d)
//File upload operations
| where StatusText =~ "Success"
| where OperationName =~ "PutBlob" or OperationName =~ "PutRange"
| evaluate ipv4_lookup(IP_Data, CallerIpAddress, network, return_unmatched = false)
| summarize make_set(OperationName), min(TimeGenerated), max(TimeGenerated) by CallerIpAddress, Uri

Mass File Deletion


With the rise of human-operated ransomware, it is increasingly likely that cybercrime groups will seek to damage or delete cloud backups prior to deploying ransomware to an on-prem network. If a cybercrime group can pivot to your cloud environment using stolen credentials, they may seek to damage, delete, or encrypt files stored in Azure Storage accounts.

Mass file deletion activity can be detected using File and Blob storage logs. The query below will collate File and Blob Storage events, and then return a result where the number of deletions from a single IP address in a specified window breaches a pre-defined threshold. In the example query below, if 3 or more deletions take place within a 10-minute window, a row will be returned.

let deleteThreshold = 3;
let deleteWindow = 10min;
union
StorageFileLogs,
StorageBlobLogs
| where TimeGenerated > ago(3d)
| where StatusText =~ "Success"
| where OperationName =~ "DeleteBlob" or OperationName =~ "DeleteFile"
| extend CallerIpAddress = tostring(split(CallerIpAddress, ":", 0)[0])
| summarize dcount(Uri) by bin(TimeGenerated, deleteWindow), CallerIpAddress, UserAgentHeader, AccountName
| where dcount_Uri >= deleteThreshold
| project TimeGenerated, IPCustomEntity=CallerIpAddress, UserAgentHeader, FilesDeleted=dcount_Uri, AccountName

In Conclusion


In this blog we have expanded on many of the concepts touched on during the Ignite demo, covering how to replicate the alerts we saw as part of our Ignite investigation.

The Ignite demo showed a planned investigation following a single investigative path. This article has provided additional investigative steps and pivots, allowing you to explore the rich logging provided by Azure Storage. We built several hunting queries using Azure Storage logs to uncover additional malicious activity. Each of these queries shows how different data sources can be brought together in Azure Sentinel for an improved hunting experience.

Azure Activity Logs: Our investigation into Azure Activity logs uncovered the threat actor viewing the access keys that enabled them to upload files to Azure Storage. This allowed us to identify the user account that had been compromised.
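As an illustrative sketch (exact column names depend on your AzureActivity schema version), access-key retrievals can be hunted directly; listKeys is the Azure Resource Manager action recorded when storage account keys are read:

```kusto
//Hunt for storage account key retrievals in Azure Activity logs
AzureActivity
| where TimeGenerated > ago(7d)
| where OperationNameValue =~ "Microsoft.Storage/storageAccounts/listKeys/action"
| project TimeGenerated, Caller, CallerIpAddress, ResourceGroup
```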

Microsoft 365 Defender Advanced Hunting: Using data from the DeviceFileEvents table provided by M365D, it was possible to identify which machines in our network the files hosted in storage were distributed to.
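A minimal sketch of that pivot is below; the hash value is a placeholder, and note that the ResponseMd5 column in the storage logs is a Base64-encoded digest, so it may need converting to hex before comparison with the MD5 column in DeviceFileEvents:

```kusto
//Placeholder hash - substitute MD5 values recovered from the storage logs
let fileHashes = dynamic(["<md5-from-storage-logs>"]);
DeviceFileEvents
| where MD5 in (fileHashes)
| project TimeGenerated, DeviceName, FileName, FolderPath, MD5, InitiatingProcessFileName
```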

Sign-in Logs: Merging data from storage logs with sign-in events provided another mechanism for determining the likely user account used to upload the file, based on IP address and user agent correlation.
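One way to sketch that correlation (assuming the Azure AD sign-in connector is enabled in your workspace) is to join upload events with SigninLogs on the caller IP:

```kusto
//Correlate blob upload source IPs with Azure AD sign-in events
StorageBlobLogs
| where TimeGenerated > ago(7d)
| where OperationName =~ "PutBlob"
//Strip the port from the caller IP
| extend CallerIpAddress = tostring(split(CallerIpAddress, ":", 0)[0])
| join kind=inner (
    SigninLogs
    | project SigninTime=TimeGenerated, UserPrincipalName, IPAddress, UserAgent
) on $left.CallerIpAddress == $right.IPAddress
| project TimeGenerated, Uri, CallerIpAddress, UserPrincipalName, UserAgent, SigninTime
```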

VirusTotal: Through MSTICPy we were able to execute multiple file hash queries against VirusTotal to provide further insight into the files found within our storage account. While in this example we used VirusTotal, any third party data source could be used to enhance our investigation through Azure Sentinel Connectors, or through API lookups using MSTICPy.

Hunting queries: We also created hunting queries to help detect potentially suspicious file activity, covering exfiltration of data from our network to mass deletion events that may be a sign of an impending ransomware attack.

This blog has shown how Azure Sentinel provides a broad view of compromises taking place in both on-prem and cloud environments. Azure Sentinel brings together data from disparate log sources and allows us to build powerful Kusto queries to hunt for and detect malicious activity.

GitHub Sentinel Queries


Detection: Additional Files Uploaded by Actor
Hunting: Azure Administration from VPS
Hunting: Azure Storage File Create, Access, Delete
Hunting: Azure Storage File Created then Quickly Deleted
Hunting: Azure Storage File on Endpoint
Hunting: Azure Storage Mass Deletion
Hunting: Azure Storage Upload from VPS
Hunting: Azure Storage Upload Link Account

Further Reading


Learn more about what’s new with Azure Sentinel in Sarah’s post here: Cloud SIEM Innovations from Azure Sentinel (microsoft.com)
