Using Jupyter Notebook to analyze and visualize Azure Sentinel Analytics and Hunting Queries



Introduction


The Azure Sentinel GitHub repository contains out-of-the-box detections, exploration queries, hunting queries, workbooks, playbooks, and much more to help you get ramped up with Azure Sentinel and provide security content to secure your environment and hunt for threats. In this blog, we will look at the various Detections and Hunting Queries published in our public GitHub repo, then analyze and visualize the output to understand current MITRE ATT&CK® coverage and identify gaps. The goal is not necessarily to fill every gap, but to strategically identify opportunities to extend coverage to additional tactics, identify new data sources, and develop detections that increase coverage significantly.


 


The entire workflow of the process is shown in the diagram below.


 


Ashwin_Patil_1-1602477048391.png


 


We will start with data acquisition, which can come either directly from the public community GitHub repo or from your own private GitHub repository of queries if you have one. Another option is to use the Azure Sentinel REST API to query your Azure Sentinel instance for rule templates and saved searches (which contain hunting queries). Once the data has been acquired, you can load it into a dataframe for further cleaning and enrichment with the MITRE dataset to produce structured tabular data. The resulting dataset can also be sent back to Log Analytics into a custom logs table, or simply accessed via the externaldata operator from accessible storage.


The datasets can then be visualized with a variety of tools. With the structured datasets, you can also create ATT&CK Navigator layer JSON files and view them as an embedded iFrame within a Jupyter notebook or independently in a browser.


 


Data Acquisition


You can retrieve Detections and Hunting Queries via the two methods below:


 


Download from GitHub.


In the first method, you directly download the templates from the public Azure Sentinel GitHub repo. The templates are available within the Analytics pane and must be explicitly enabled and onboarded. Alternatively, you can point to your private GitHub repository if you have one. If you want to read more about the Azure Sentinel DevOps process, refer to the blog Deploying and Managing Azure Sentinel as Code.


 


The Jupyter notebook starts with a Setup section where it checks pre-requisites, installs missing packages, and sets up the environment. The initial cells contain function definitions which will be executed sequentially at later stages. After that, you will come to the Data Acquisition section.


In this section, you will be prompted for a folder location to store the extracted files (Detections and Hunting Queries). By default, it will create a folder named mitre-attack under the current folder. You can customize and specify a different location if you want.




 


01-downloadpath.png


 


A Python function named get_sentinel_queries_from_github() is provided which downloads the entire repository and extracts only the selected folders (Detections and Hunting Queries) into the previously specified location. If you are using your private repo, provide the archive path to your repo in azsentinel_git_url, with Detections and Hunting Queries as child folders.
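
A minimal sketch of what such a download helper might look like (the function name matches the notebook, but the implementation details below are illustrative assumptions):

import io
import zipfile
from pathlib import Path

import requests


def get_sentinel_queries_from_github(git_url, outputdir="./mitre-attack"):
    """Download the Azure Sentinel repo archive and extract only the
    Detections and Hunting Queries folders (illustrative sketch)."""
    response = requests.get(git_url)
    response.raise_for_status()

    archive = zipfile.ZipFile(io.BytesIO(response.content))
    wanted = ("Detections/", "Hunting Queries/")

    for member in archive.namelist():
        # Zip members are prefixed with '<repo>-master/'; keep only the
        # Detections and Hunting Queries subfolders.
        relative = member.split("/", 1)[-1]
        if relative.startswith(wanted):
            archive.extract(member, path=outputdir)

    return Path(outputdir)


# Public repo archive; point azsentinel_git_url at your private repo if needed.
azsentinel_git_url = "https://github.com/Azure/Azure-Sentinel/archive/master.zip"
get_sentinel_queries_from_github(azsentinel_git_url)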


 


02-archiveextraction.png


 


You can also validate that the operation was successful by manually running %ls in both folders.


 


03-dir-listings.png


 


Retrieve via Azure Sentinel REST API – Experimental


In this method, you use the Azure Sentinel REST API to query your existing Sentinel instance. For more information on using the REST API, refer to the blog Azure Sentinel API 101. This section is currently marked as experimental: there are some limitations in the output received from the API, so you won’t get the same level of granular detail as from the first method above. You will first have to authenticate via az login; refer to the docs Sign in with Azure CLI if you need more details. Once successfully authenticated, you can generate a refresh token and use it in the API headers to send REST API queries to AlertRuleTemplates and SavedSearches.
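
A minimal sketch of such a call, assuming an ARM bearer token re-used from the az login session (the subscription, resource group and workspace names are placeholders, and the API versions may differ in your environment):

import requests
from azure.identity import AzureCliCredential

# Re-uses the credentials from a prior `az login`.
token = AzureCliCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Placeholders -- substitute your own subscription, resource group and workspace.
base = (
    "https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    "/providers/Microsoft.OperationalInsights/workspaces/{ws}"
).format(sub="<subscription-id>", rg="<resource-group>", ws="<workspace-name>")

# Analytics rule templates (Azure Sentinel / SecurityInsights API).
templates = requests.get(
    f"{base}/providers/Microsoft.SecurityInsights/alertRuleTemplates",
    headers=headers,
    params={"api-version": "2020-01-01"},
).json()

# Saved searches (Log Analytics API) -- filter these for hunting queries.
saved_searches = requests.get(
    f"{base}/savedSearches",
    headers=headers,
    params={"api-version": "2020-08-01"},
).json()

print(len(templates.get("value", [])), "rule templates")
print(len(saved_searches.get("value", [])), "saved searches")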


 


Known API Limitations:



  • Hunting queries are not currently part of the GA REST API, but you can use the Log Analytics REST API for saved searches and then filter out the hunting queries.

  • In both cases (data retrieved from savedSearches and AlertRuleTemplates), only Tactics are available, not Techniques. You may have to pull Techniques by joining with the GitHub dataset if the rule template is available.


Below is a stripped-down section of sample output retrieved from the REST API for AlertRuleTemplates and hunting queries.


 


04-alert-via-api.png


 


05-huntingqueries-via-api.png


 


Data Cleaning and Enrichment


Once you acquire the data, you will have to perform various data cleaning steps to make it suitable for analysis. All the published rule templates are in YAML format, which is already structured, but the values are often arrays or arrays of dictionaries. You will have to flatten these values to create the dataset.


 


YAML Parsing


In the first step, we will parse all the YAML files and load them into a dataframe. A DataFrame is simply a Python data structure similar to a spreadsheet, organized in rows and columns. You can read more about it at Intro to data structures – DataFrame. We have created a Python function named parse_yaml() which takes a folder containing YAML files. We parse the files from both Detections and Hunting Queries and join both datasets. While doing this, we also create additional columns to extend the YAML schema, such as GithubURL, IngestedDate, Detection Type, Detection Service, etc. Finally, when you run this function, with some additional cleanup, it will display a status similar to the one below.
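
As an illustration, a simplified version of such a parser might look like the sketch below (the extra column names beyond the YAML fields follow the description above; folder paths are placeholders):

from datetime import datetime
from pathlib import Path

import pandas as pd
import yaml


def parse_yaml(parent_dir, child_dir):
    """Parse every YAML query file under parent_dir belonging to child_dir
    into a dataframe (simplified sketch of the notebook's parse_yaml helper)."""
    frames = []
    for yaml_file in Path(parent_dir).rglob("*.yaml"):
        if child_dir not in yaml_file.parts:
            continue
        with open(yaml_file, encoding="utf-8", errors="ignore") as f:
            parsed = yaml.safe_load(f)

        frame = pd.json_normalize(parsed)
        # Extra columns extending the YAML schema, as described above.
        frame["DetectionType"] = child_dir
        frame["DetectionService"] = "Azure Sentinel Community Github"
        frame["IngestedDate"] = datetime.utcnow().date()
        frame["GithubURL"] = (
            "https://github.com/Azure/Azure-Sentinel/blob/master/"
            + yaml_file.relative_to(parent_dir).as_posix()
        )
        frames.append(frame)

    return pd.concat(frames, ignore_index=True, sort=False)


detections = parse_yaml("./mitre-attack", "Detections")
hunting = parse_yaml("./mitre-attack", "Hunting Queries")
queries = pd.concat([detections, hunting], ignore_index=True, sort=False)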


 


06-githubstats.png


 


Clean and Preprocess Data


Once you have parsed the YAML files and loaded them into a dataframe, we further clean and preprocess the data. Example operations in this step are detailed below.



  • Separate existing columns and derive new columns such as ConnectorId and DataTypes.

  • Separate null and valid ConnectorId values.

  • Expand the columns (ConnectorId, DataTypes, Tactics, relevantTechniques) that hold multiple values as arrays into separate rows for easier analysis.

  • Populate a new Platform column based on a custom mapping of ConnectorId values, e.g. AWSCloudTrail to AWS. A snippet of the example mapping is shown below. This mapping can be customized for your environment.


06-Platformmapping.png


 


You can then invoke the function clean_and_preprocess_data() to execute this step. This function returns the dataset after cleaning and pre-processing the original YAML files. We also join the Fusion dataset to the result to create the final dataset.
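
For example, the multi-valued columns can be exploded into one value per row with pandas. This is a simplified stand-in for what clean_and_preprocess_data() does; the column names and the platform mapping below are abbreviated assumptions:

import pandas as pd

# Abbreviated ConnectorId -> Platform mapping; extend it for your environment.
platform_mapping = {
    "AWSCloudTrail": ["AWS"],
    "AzureActiveDirectory": ["Azure", "SaaS"],
    "SecurityEvents": ["Windows"],
    "Syslog": ["Linux"],
}

def clean_and_preprocess_data(df):
    """Explode array-valued columns into one value per row (simplified sketch)."""
    df = df.copy()
    # Columns that hold arrays in the YAML become one row per value.
    for column in ["connectorId", "dataTypes", "tactics", "relevantTechniques"]:
        df = df.explode(column)
    # Derive a Platform column from the connector, then explode it as well.
    df["Platform"] = df["connectorId"].map(platform_mapping)
    df = df.explode("Platform")
    return df.reset_index(drop=True)

result = clean_and_preprocess_data(queries)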


 


Below is example output from this step.


07-CleanandPreprocessdata.png


 


Enrich with MITRE Matrix


To find gaps in coverage, our original dataset needs to be joined with the master dataset from the MITRE ATT&CK matrix. We have created a function get_mitre_matrix_flat() that uses the Python library attackcti underneath, with additional data wrangling steps, to return a flat table as a dataframe.
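
A minimal sketch of flattening the enterprise techniques into a tactic/technique table with attackcti might look like this (the exact helper in the notebook and the field names returned by the library may differ):

import pandas as pd
from attackcti import attack_client

def get_mitre_matrix_flat():
    """Return enterprise ATT&CK techniques as a flat tactic/technique dataframe
    (simplified sketch of the notebook helper)."""
    lift = attack_client()
    techniques = lift.get_enterprise_techniques(stix_format=False)
    mitre = pd.json_normalize(techniques)
    # Each technique lists the tactics it belongs to; explode to one row per pair.
    flat = (
        mitre[["technique_id", "technique", "tactic"]]
        .explode("tactic")
        .reset_index(drop=True)
    )
    return flat

mitre_flat = get_mitre_matrix_flat()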


Below is example output from this function.


07-Mitre-matrix-flat.png


Merge preprocessed dataset with mitre datasets


In this step, we merge our cleaned, preprocessed dataset with the MITRE dataset. The resulting dataset will have our detections and hunting queries mapped to existing tactics and techniques, with blank rows where there are no matches.
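
A merge along the shared tactic/technique keys, keeping every MITRE row, might look like this (column names follow the earlier sketches and are assumptions; in practice tactic and technique values may need normalization before they join cleanly):

import pandas as pd

# Left-join the full MITRE matrix to our detections so that techniques with no
# matching detection remain in the result as blank rows (coverage gaps).
coverage = mitre_flat.merge(
    result,
    how="left",
    left_on=["tactic", "technique_id"],
    right_on=["tactics", "relevantTechniques"],
)

# Rows where no detection matched a technique show up with NaN in the name column.
gaps = coverage[coverage["name"].isna()]
print(f"{len(gaps)} tactic/technique pairs currently have no detection")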


09-Merge-data.png


 


After renaming columns and normalizing the datasets, the output will look like the one below.


Remember that you can have multiple rows for a single detection if it has different values in Platform, ConnectorId, or DataTypes. The resulting dataset has a single value in each of these columns so that it can be effectively analyzed and visualized. You can always aggregate them back into a list/array to get a single row per detection.


 


10-displayresults.png


 


Scraping Detection List of Microsoft Threat Protection Portfolio


Microsoft has a variety of threat protection solutions in its portfolio. Depending on your licenses, you may have one or more of these enabled in your environment. Azure Sentinel natively integrates with these solutions and, once connected, ingests their alerts into the SecurityAlert table. Some of these solutions have published documentation of their available rules on the Microsoft docs website. These details can easily be scraped programmatically to create a result dataset. As we find more documentation, we will add other products to this list. Currently, the services below have rules published.


We have created a Python function for each of these; sample outputs are shown below.
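
Each of these pages publishes its alert reference as HTML tables, so a scraper can be as simple as pandas.read_html. The sketch below uses the Azure Defender alert reference as an example; the table layout and column names on the published page may change over time:

import pandas as pd

def get_azure_defender_alerts():
    """Scrape the Azure Defender (ASC) alert reference tables into one dataframe."""
    url = "https://docs.microsoft.com/en-us/azure/security-center/alerts-reference"
    tables = pd.read_html(url)  # one dataframe per HTML table on the page (needs lxml)
    alerts = pd.concat(tables, ignore_index=True, sort=False)
    alerts["DetectionService"] = "Azure Defender"
    return alerts

azure_defender_alerts = get_azure_defender_alerts()
print(azure_defender_alerts.shape)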



  • Azure Defender (previously ASC)


https://docs.microsoft.com/en-us/azure/security-center/alerts-reference


 


11-azure-defender.png



  • Azure Identity Protection Center


https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/concept-identity-protection-risks


12-azure-ipc.png


 



  • Azure Defender for Identity (previously AATP)


https://docs.microsoft.com/en-us/azure-advanced-threat-protection/suspicious-activity-guide?tabs=external


13-azure-defender-identity.png


 



  • Microsoft Cloud App Security (MCAS)


https://docs.microsoft.com/en-us/cloud-app-security/investigate-anomaly-alerts


 


14-mcas.png


 


You can later combine all the above datasets into a single dataframe and display the results as shown below.


This dataframe can be further used in visualization section.


 


15-merge-all-msft.PNG


 


Data Visualization


Now that we have all the data in a structured format, we can visualize it with a number of different tools. The data can be sent back to Log Analytics into a custom table and later visualized via either Power BI or native Azure Sentinel Workbooks. You can also leverage the power of various Python visualization libraries within a Jupyter notebook. In the screenshots below, we have used matplotlib to render a heatmap, and Plotly to create radar plots, donut charts, etc. From the structured dataset, we can also automatically create ATT&CK Navigator layer JSON files, which can be viewed as embedded iFrames within a Jupyter notebook or independently in a browser window. Check out the GIF walkthroughs of the visualizations below; you can also check the notebook directly for more static and interactive views.
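
For instance, a simple matplotlib heatmap of detection counts per tactic and platform could be produced from the merged dataframe like this (column names follow the earlier sketches and are assumptions):

import matplotlib.pyplot as plt

# Pivot to a tactic x platform matrix of detection counts.
heatmap_data = (
    coverage.groupby(["tactic", "Platform"])["name"]
    .nunique()
    .unstack(fill_value=0)
)

fig, ax = plt.subplots(figsize=(10, 6))
im = ax.imshow(heatmap_data.values, cmap="Blues")
ax.set_xticks(range(len(heatmap_data.columns)))
ax.set_xticklabels(heatmap_data.columns, rotation=45, ha="right")
ax.set_yticks(range(len(heatmap_data.index)))
ax.set_yticklabels(heatmap_data.index)
fig.colorbar(im, label="Number of detections")
ax.set_title("Azure Sentinel detections per tactic and platform")
plt.tight_layout()
plt.show()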


Alerts Table Viewer – Jupyter Widget


Part 5 Data Viz- Jupyter widget.gif


 


Heatmaps using Matplotlib.


Part 5 Data Viz-Heatmap.gif


 


ATT&CK Navigator Embedded Iframe view


Part 5 Data Viz -ATTACK Navigator.gif


Radar Plots with Plotly


Part 5 Data Viz-Radar plots.gif


 


 


Donut Plots with Plotly


Part 5 Data Viz-Donut Charts.gif


 


 


MITRE ATT&CK Workbook


This is an early version of the workbook available in GitHub; it will be updated further into a more mature version with recommended instructions based on the workspace and data sources.


MITRE Workbook.gif


Uploading Results to Azure Sentinel


The data generated by this analysis (alerts from Microsoft services and normalized Azure Sentinel detections) can be uploaded back to Azure Sentinel. You can use the data uploader feature of MSTICPy for this. You can read more about it in Uploading data to Azure Sentinel/Log Analytics.
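
A minimal sketch using MSTICPy's Log Analytics uploader (class name and parameters as documented at the time of writing; check the MSTICPy docs for the current interface, and note the workspace ID and key are placeholders):

from msticpy.data.uploaders.loganalytics_uploader import LAUploader

# Workspace ID and shared key for the target Log Analytics workspace (placeholders).
la_uploader = LAUploader(
    workspace="<workspace-id>",
    workspace_secret="<workspace-shared-key>",
    debug=True,
)

# Upload the normalized detections dataframe into a custom table
# (Log Analytics surfaces custom tables with a _CL suffix).
la_uploader.upload_df(data=coverage, table_name="AzSentinelDetections")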


 


23-Uploadresults.png


 


Conclusion


Azure Sentinel provides out-of-the-box detection and hunting query templates via its public GitHub repo. Currently the repository holds around 362 queries for defenders. Depending on the environment and onboarded data sources, customers can choose from these and enable them in their respective Azure Sentinel instance. As all the templates are already mapped to MITRE ATT&CK Tactics and Techniques, you can analyze and visualize the detection coverage and identify potential gaps. The goal of this analysis is not necessarily to fill every gap, but to understand the coverage, identify blind spots, and strategically make decisions to create new detections or onboard new data sources to increase the coverage.


 


Special thanks to @ianhelle for the Sentinel API section and @Cyb3rWard0g for the ATT&CK Navigator layer section in the notebook.


References



  • MITRE ATT&CK for Azure Sentinel Resources (Notebook, Supporting Files)


https://github.com/Azure/Azure-Sentinel/tree/master/Sample%20Data/MITRE%20ATT%26CK



  • MITRE ATT&CK Workbook


https://github.com/Azure/Azure-Sentinel/blob/master/Workbooks/MITREAttack.json



  • ATT&CK Navigator


https://github.com/mitre-attack/attack-navigator



  • Sign in with Azure CLI


https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli



  • Intro to Data Structures – DataFrame


https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#dataframe



  • Automate the Creation of ATT&CK Navigator Group Layer Files with Python


https://medium.com/threat-hunters-forge/automate-the-creation-of-att-ck-navigator-group-layer-files-with-python-3b16a11a47cf



  • Deploying and Managing Azure Sentinel as Code


https://techcommunity.microsoft.com/t5/azure-sentinel/deploying-and-managing-azure-sentinel-as-code/ba-p/1131928



  • Azure Sentinel REST API 101


https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-api-101/ba-p/1438928



  • Plotly – Radar Charts in Python


https://plotly.com/python/radar-chart/



  • Plotly – Donut Chart


https://plotly.com/python/pie-charts/


 

Upcoming Event: Driving Collaboration in the Evolving Healthcare Workplace



event.PNG



When planning your transition back to the office, how can you keep employees engaged while optimizing your collaboration strategy? Join Microsoft, Kinly, and Pexip for a live virtual event October 21 to help you reduce complexity, improve productivity, and build a future-proof strategy for today’s evolving workplace.


 


You’ll learn:



  • Why employee engagement is critical for distributed teams

  • What the evolving workplace means for your long-term collaboration strategy

  • How to maximize existing AV investments with interop for Microsoft Teams

  • How external meetings (with third-party payers or providers) can be just as effective over video

  • How to modernize your office spaces with the latest collaboration tools including Surface Hub 2S





 


Meet the Speakers:


0.jpg




Shawn Remacle


US Provider CIO

Microsoft





 

0 (1).jpg




Kyle Forney


Solution Architect

Pexip





 

Olly.jpeg




Olly Henderson


Sales Director – Microsoft Lead US

Kinly





 

Frank Buchholz.jpeg




Frank Buchholz


Sr. Product Marketing Manager – Surface

Microsoft





 

0 (2).jpg



Jonathan Lickiss


Product Marketing Manager – Microsoft Surface

Microsoft

 


 


Click here to save your seat today!





Top 10 Best Practices for Azure Security


Our Azure products and services come with comprehensive security features and configuration settings. They are mostly customizable (to a point), so you can define and implement a security posture that reflects the needs of your organization. But adopting and maintaining a good security posture goes far beyond turning on the right settings.


 


Mark Simos, lead cybersecurity architect for Microsoft, explored the lessons learned from protecting both Microsoft’s own technology environments and the customers we have a responsibility to, and shares the top 10 (+1!) recommendations for Azure security best practices.


 


Here’s the list:


1. People: Educate teams about the cloud security journey


2. People: Educate teams on cloud security technology


3. Process: Assign accountability for cloud security decisions


4. Process: Update Incident Response (IR) processes for cloud


5. Process: Establish security posture management


6. Technology: Require Passwordless or Multi-Factor-Authentication


7. Technology: Integrate native firewall and network security


8. Technology: Integrate native threat detection


9. Architecture: Standardize on a single directory and identity


10. Architecture: Use identity based access control (instead of keys)


11. Architecture: Establish a single unified security strategy


 


These best practices have been included as a resource in the Microsoft Cloud Adoption Framework for Azure, where you can get more details on the what, why, who and how of each of these points. Get the details here.


 


I love that this is broken into people, process, technology and architecture. While statistics prove that capabilities like Multi-Factor Authentication significantly reduce security risk, both people and processes are crucial to protecting from and responding to security threats.   


 


Some of those points look clear and simple on the surface, but may be the hardest to implement in your organization (like assigning accountability for cloud security decisions). Or you may have many of the people and process items already in place for an on-premises environment – these are just as valid for on-prem or hybrid environments too.


 


Don’t brush this off as too simple and not worth your time. Locking the front door of your house is a simple but effective habit for increasing the security of your home. Complex technology systems can also benefit from organizations having the simplest, most effective people and process elements too.


 


Prefer to watch a video? Let Mark take you through the details of each tip:




How to handle Pandas breaking change on version 0.14


When you are using Pandas in a notebook, you may notice that there was a change in the Arrow IPC format from pyarrow 0.15 onwards.


You can find more information here: https://arrow.apache.org/blog/2019/10/06/0.15.0-release/


 


So this is a small post on how to work around it:


 


The customer was using the example available here:


https://spark.apache.org/docs/2.4.0/sql-pyspark-pandas-with-arrow.html


Specifically this one:


 


 


 


 

import pandas as pd

from pyspark.sql.functions import col, pandas_udf
from pyspark.sql.types import LongType

# Declare the function and create the UDF
def multiply_func(a, b):
    return a * b

multiply = pandas_udf(multiply_func, returnType=LongType())

# The function for a pandas_udf should be able to execute with local Pandas data
x = pd.Series([1, 2, 3])
print(multiply_func(x, x))

# Create a Spark DataFrame, 'spark' is an existing SparkSession
df = spark.createDataFrame(pd.DataFrame(x, columns=["x"]))

# Execute function as a Spark vectorized UDF
df.select(multiply(col("x"), col("x"))).show()

 


 


 


When we tried to run it in Synapse, it failed with the error:


Py4JJavaError : An error occurred while calling o205.showString. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 2.0 failed 4 times, most recent failure: Lost task 1.3 in stage 2.0 (TID 15, c671bd6ddc35b7487900238907316, executor 1): java.lang.IllegalArgumentException at java.nio.ByteBuffer.allocate(ByteBuffer.java:334)


 


Synapse Spark uses the library documented here:


https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-version-support


 


The workaround to use the legacy Arrow IPC format is:


1) Add this to your code:


 


 

    import os
    os.environ['ARROW_PRE_0_15_IPC_FORMAT'] = '1'

 


 


and for that example, specifically replace show with display, like:


 


 

#Instead of:
df.select(multiply(col("x"), col("x"))).show()

#use
display(df.select(multiply(col("x"), col("x")))) 

 


You can also replace the whl files with pyarrow 0.14.1:


 https://pypi.org/project/pyarrow/0.14.1/#files


Instructions here:


 https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-azure-portal-add-libraries#manage-a-python-wheel


 


That is it!


Liliam 


UK Engineer

Optimize storage usage for large volumes of IoT data with Azure SQL



IoT with Azure SQL


 


IoT solutions generally produce large data volumes, from device-to-cloud messages in telemetry scenarios to device twins or commands that need to be persisted and retrieved by users and applications. Whether we need to store data in “raw” formats (like JSON messages emitted by different devices) to preserve and process the original information, or we can shred attributes in messages into more schematized, relational table structures, it is not uncommon for these solutions to have to deal with billions of events/messages and terabytes of data, and you may be looking for options to optimize your storage tier for cost, data volume, and performance.


Depending on your requirements, you can certainly think about persisting less-frequently accessed, raw data fragments into a less expensive storage layer like Azure Blob Storage and process them through an analytical engine like Spark in Azure Synapse Analytics (in a canonical modern data warehouse or lambda architecture scenario). That works well for data transformations or non-interactive, batch analytics scenarios but, in case you instead need to frequently access and query both raw and structured data, you can definitely leverage some of the capabilities offered by Azure SQL Database as a serving layer to simplify your solution architecture and persist both warm and cold data in the same store.


 


Raw JSON in the database


 


Azure SQL supports storing raw JSON messages in the database as NVARCHAR(MAX) data types that can be inserted into tables or provided as an argument in stored procedures using standard Transact-SQL. Any client-side language or library that works with string data in Azure SQL will also work with JSON data. JSON can be stored in any table that supports the NVARCHAR type, such as Memory-optimized or Columnstore-based tables and does not introduce any constraint either in the client-side code or in the database layer.


To optimize storage for large volumes of IoT data, we can leverage clustered columnstore indexes and their intrinsic data compression benefits to dramatically reduce storage needs. On text-based data, it’s not uncommon to get more than 20x compression ratio, depending on your data shape and form.


In a benchmarking exercise on real world IoT solution data, we’ve been able to achieve ~25x reduction in storage usage by storing 3 million, 1.6KB messages (around 5GB total amount) in a ~200MB columnstore-based table.
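
As an illustration, a table along these lines can be created with a clustered columnstore index and an NVARCHAR(MAX) column for the raw message. This is a minimal sketch run from Python via pyodbc (any SQL client works); the server, table, and column names are made up for the example:

import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=<server>.database.windows.net;Database=<database>;"
    "Uid=<user>;Pwd=<password>;Encrypt=yes;"
)
cursor = conn.cursor()

# Raw JSON messages land in an NVARCHAR(MAX) column; the clustered columnstore
# index provides the compression discussed above.
cursor.execute("""
CREATE TABLE dbo.DeviceTelemetry (
    DeviceId     NVARCHAR(128)  NOT NULL,
    EnqueuedTime DATETIME2      NOT NULL,
    RawMessage   NVARCHAR(MAX)  NOT NULL,
    INDEX cci_DeviceTelemetry CLUSTERED COLUMNSTORE
);
""")
conn.commit()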


 


Optimize data retrieval


 


Storing a large amount of data efficiently may not be enough if we then struggle to extract great insights and analysis from it. One of the major benefits of a battle-tested query processor and storage engine is having several options to optimize our data access path.


By leveraging JSON functions in Transact-SQL, we can treat data formatted as JSON like any other SQL data type, and extract values from the JSON text to use in the SELECT list or in a search predicate. As Columnstore-based tables are optimized for scans and aggregations rather than key lookup queries, we can also create computed columns based on JSON functions that expose specific attributes within the original JSON column as regular relational columns, simplifying query design and development. We can further optimize data retrieval by creating regular (row-based) nonclustered indexes on computed columns to support critical queries and access paths. While these will slightly increase overall storage needs, they help the query processor filter rows on key lookups and range scans, and can also help with other operations like aggregations. Notice that you can add computed columns and related indexes at any time, to find the right tradeoff for your own solution requirements.
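
Continuing the sketch above, a computed column over a JSON attribute plus a nonclustered index on it might look like this (the attribute and index names are, again, illustrative):

# Expose a frequently-filtered JSON attribute as a relational computed column,
# then index it to support point lookups and range scans.
cursor.execute("""
ALTER TABLE dbo.DeviceTelemetry
    ADD EventType AS JSON_VALUE(RawMessage, '$.eventType');
""")
cursor.execute("""
CREATE NONCLUSTERED INDEX ix_DeviceTelemetry_EventType
    ON dbo.DeviceTelemetry (EventType);
""")
conn.commit()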


 


Best of both worlds


 


In case your JSON structure is stable and known upfront, the best option is to design the relational schema to accommodate the most relevant attributes from the JSON data, and leverage the OPENJSON function to transform these attributes into row fields when inserting new data.
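
For example, an incoming JSON batch can be shredded into relational columns at insert time with OPENJSON. This is a minimal sketch continuing the pyodbc example above; the message shape and target table are assumptions:

insert_from_json = """
INSERT INTO dbo.DeviceReadings (DeviceId, EnqueuedTime, Temperature, Humidity)
SELECT DeviceId, EnqueuedTime, Temperature, Humidity
FROM OPENJSON(?)
WITH (
    DeviceId     NVARCHAR(128) '$.deviceId',
    EnqueuedTime DATETIME2     '$.enqueuedTime',
    Temperature  FLOAT         '$.temperature',
    Humidity     FLOAT         '$.humidity'
);
"""

# A small JSON batch as it might arrive from an event processor.
json_batch = ('[{"deviceId": "dev-01", "enqueuedTime": "2020-10-09T16:42:00", '
              '"temperature": 21.5, "humidity": 48.2}]')
cursor.execute(insert_from_json, json_batch)
conn.commit()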


 


 


Screenshot 2020-10-09 164220.png


 


 


These will be fully relational columns (with optimized SQL data types) that can be used for all sorts of retrieval and analytical purposes, from complex filtering to aggregations, and that will be properly indexed to support various access paths. We can still decide to keep the entire JSON fragment and store it in an NVARCHAR(MAX) field in the same table if further processing may be needed.


 


For other articles in this series see:


Ingest millions of events per second on Azure SQL leveraging Shock Absorber pattern



IoT with Azure SQL


 


IoT workloads can be characterized by high rates of input data, on both steady and burst streams, to be ingested from devices. A common design pattern for relational database systems involves a “landing zone” (or staging area) which is optimized for “absorbing the shock” of this high and spiky input rate, before data flows into its final destination (usually one or more tables), which instead is optimized for persistence and analysis.


There are a few knobs we can use to optimize our data models for these very different purposes; probably the most evident involves different indexing approaches (a light one to privilege a higher ingestion rate versus a more comprehensive one to favor query and retrieval), but Azure SQL also provides very specialized capabilities like In-memory OLTP tables and Columnstore indexes that can be used together to implement shock absorber patterns in IoT solutions:


 


Screenshot 2020-10-09 165015.png


 


In-memory OLTP


 


By leveraging In-memory OLTP tables (only available on Premium/Business Critical databases) as a landing zone for incoming streams of events and messages from IoT devices, we can largely eliminate the latching and locking contention that can affect regular tables (subject to things like last-page-insert scenarios). Another benefit of this approach is the ability to reduce the overhead of transaction log operations, by minimizing IO for SCHEMA_ONLY tables (where the risk of losing some events in case of a failure is acceptable) or by reducing the impact of log updates for tables with schema and data persistence. By combining these capabilities with natively compiled stored procedures, we can also improve execution performance for all critical data management operations on ingested data. A common solution architecture in this space would include an event store (something like Azure IoT Hub or Event Hub) where events flow from devices, and an event processor pulling events from the store and using a memory-optimized table-valued parameter to batch hundreds or thousands of incoming events into a single data structure, greatly reducing the number of database round trips and improving throughput to potentially millions of events per second.
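
As a rough illustration of the batching idea from Python, the sketch below uses pyodbc's fast_executemany to send many events per round trip into the landing table. It is a stand-in for the memory-optimized table-valued parameter approach described above (which pairs with a natively compiled procedure), but the principle of batching hundreds or thousands of events per call is the same; the connection string and table name are placeholders:

import json
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=<server>.database.windows.net;Database=<database>;"
    "Uid=<user>;Pwd=<password>;Encrypt=yes;"
)
cursor = conn.cursor()
cursor.fast_executemany = True   # send the whole batch in a single round trip

# Events pulled from IoT Hub / Event Hub, already deserialized (placeholders).
events = [
    ("dev-01", "2020-10-09T16:50:00", json.dumps({"temperature": 21.7})),
    ("dev-02", "2020-10-09T16:50:01", json.dumps({"temperature": 19.4})),
    # ... hundreds or thousands more per batch ...
]

cursor.executemany(
    "INSERT INTO dbo.TelemetryLandingZone (DeviceId, EnqueuedTime, RawMessage) "
    "VALUES (?, ?, ?);",
    events,
)
conn.commit()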


 


Columnstore Indexes


 


With billions of events potentially ingested per day, the maximum In-memory OLTP table size (which is proportional to the Azure SQL compute size utilized, 52 GB on a BC_Gen5_40 database as an example) will easily become a limit, so a very common approach is to use In-memory tables for the “hot” portion of the dataset (basically, newly ingested data), while offloading events to disk-based tables optimized for persistence and analytics, through a bulk loading process into a Columnstore-based table that can, for example, be triggered by time or data volume.


By picking the right batch size (e.g. between 102,400 and 1,048,576 rows, depending on our event generation rate and logic) we can maximize efficiency by eliminating the need to move new data rows into the Columnstore’s delta rowgroup first and wait for the Tuple Mover to kick in and compress them; instead we go straight to a compressed rowgroup, reducing logging and easily increasing overall throughput by 10x, while also achieving a similar level of data compression (a significant side effect when you’re dealing with hundreds of billions of events).


Once offloaded to a Columnstore-based table, events can be efficiently analyzed by running aggregations and time-series analytical workloads without impacting the ingestion process.


 


Benefits


 


By adopting this design pattern for your IoT solutions, you can leverage a familiar relational engine environment while scaling to billions of events ingested per day and terabytes of compressed historical data stored. As a reference, you can start by looking at this packaged sample, where we achieved a sustained rate of 4.2 million events per second on average on an M-series 128-core database.


 


 


Screenshot 2020-10-09 165305.png


 


 


For other articles in this series see:



 

Build your full-PaaS IoT solution with Azure SQL Database



IoT with Azure SQL


 


IoT solutions usually include several components that span from device communication and management to event processing, data ingestion, and analysis. Deploying all these components independently on compute, network, and storage infrastructure can be a complex and time consuming task, but it is nothing compared to managing, monitoring, and operating them at significant scale.


 


Screenshot 2020-10-08 155829.png


 


If we look at a canonical IoT architecture, like the above, we can identify and describe roles and responsibilities of each of these components:



  • Individual and edge devices, from intelligent sensors to industrial PCs aggregating multiple signals, send telemetry messages and receive commands from the cloud.

  • The cloud gateway manages bi-directional communication with devices, queueing events and messages, and usually also handles device provisioning and management.

  • Event processing is usually performed as real-time streaming for capturing low-latency events like alarms and thresholds to be plotted on a dashboard, but can also happen through batch processing of large amounts of data stored for future analysis.

  • Warm storage plays a role in persisting low-latency events and messages that need to be ingested efficiently and at a high rate, but also queried concurrently by a combination of point lookup and aggregated queries, mostly comparing trends and values over time (a hybrid workload, or HTAP).

  • Cold storage is optimized for large data volumes at reasonable cost, most likely retaining raw original messages that can be further processed and curated by Big Data and analytical engines to extract insights.


Thankfully, the Azure platform offers a full portfolio of PaaS services that plug into each other seamlessly and cover all the needs of a modern and scalable IoT solution, as the following architectural diagram represents:


 


Picture4.png


 


For device connectivity and management, IoT Hub provides secure and reliable communication for your Internet of Things (IoT) applications, connecting virtually any device and offering comprehensive per-device authentication options, built-in device management, and predictable performance and availability.


IoT Hub, built on Azure Event Hubs as its event store, can manage from a few hundred to millions of events per second, which can be consumed through several APIs and services.


 


Azure Stream Analytics, as an example, is a real-time analytics service designed for mission-critical workloads to build end-to-end serverless streaming pipelines with just a few clicks. With Stream Analytics, your IoT solution can analyze and react in real time to specific event patterns or thresholds generated by your device fleet, and be used to populate dashboard and reports capturing low-latency phenomenon in the system. Stream processing results can also be persisted directly in Azure SQL’s disk-based or In-memory Optimized tables for low latency and high throughput, to be subsequently queried and analyzed. Azure Stream Analytics uses a SQL dialect, extensible with custom code, to cover the most advanced scenarios, and can scale up and down to cover the most challenging workloads.


 


To process the remaining vast amount of events generated by devices that need to be captured, stored, and analyzed, you can use another PaaS service like Azure Functions, an event-driven serverless compute platform that natively connects with the IoT Hub event store and can consume batches of incoming messages and process them following your customizable logic, written in one of the many programming languages supported. Azure Functions can be bound to the incoming message flow that triggers processing on one end, and can use Azure SQL client libraries on the other end to define data shape and form before persisting what’s needed in relational or non-relational format in the database.


 


Azure SQL has proven able to ingest millions of events per second in this scenario, combining In-memory technologies to speed up data insertion with columnar index structures that help optimize storage utilization (with up to 100x compression) and support large aggregations and time-series analysis.


You have multiple options to scale an Azure SQL Database instance, depending on your workload and requirements. Both single databases and managed instances can be scaled up (for compute or storage, independently) or scaled out, through read scale-out replicas or database sharding.


 


Some of the features you may find relevant while designing IoT solutions with Azure SQL Database are:



  • A single instance can scale up to 128 vCores (with the M-series hardware configuration) or 100 TB (with the Hyperscale service tier). This means ingesting millions of messages per second and storing trillions of them in a single database instance, simplifying your data management operations.

  • Multiple secondary replicas can be added to scale out read workloads and support 10Ks of concurrent queries on ingested data.

  • Where additional scalability is required, Azure SQL Database provides Elastic Database tools to partition messages (e.g. using device or message ID sharding keys) across multiple database instances, providing linear scale for compute and storage.

  • When ingesting messages from 100Ks devices, Azure SQL Database provides the ability to batch multiple requests into a single database interaction, increasing overall scalability and maximizing resource utilization.

  • In-Memory technologies in Azure SQL Database let you achieve significant performance improvements with various workloads, including transactional, analytical, and hybrid (HTAP). In-memory OLTP tables help increase the number of transactions per second and reduce latency for scenarios like large-scale data ingestion from IoT devices. Clustered columnstore indexes help reduce the storage footprint through compression (up to 10 times) and improve performance for reporting and analytics queries on ingested messages.

  • Azure SQL Database scales well on both relational and non-relational data structures. Multi-model databases enable you to store and work with data represented in multiple data formats such as relational data, graphs, JSON/XML documents, key-value pairs, etc and still benefit from all capabilities described before, like In-memory technologies. See more on multi-model support.

  • In many IoT scenarios, historical analysis of ingested data is an important part of the database workload. Temporal Tables are a feature of Azure SQL Database that allows you to track and analyze the full history of your data points, without the need for custom coding. By keeping data closely related to its time context, stored data points can be interpreted as valid only within a specific period. This property of Temporal Tables allows for efficient time-based analysis and getting insights from data evolution.


Other services within the Azure platform can be plugged in with minimum effort while building end-to-end IoT solutions; a great example is Azure Logic Apps. Logic Apps can help automate critical workflows without writing a single line of code, by providing an immense set of out-of-the-box connectors to other services (like Azure SQL) and the ability to graphically design complex processes that can be used in areas like data integration, management, or orchestration. Things like triggering a dataset refresh in Power BI when certain events happen at the database level become very simple and intuitive to implement thanks to Logic Apps.


Last but not least, Azure Machine Learning can be used to create and train complex ML models using datasets processed from IoT data and events, hosted on Azure SQL or Azure Blob Storage for data scientists to use.


 


Who’s using this?


 


RXR Realty is the third-largest real estate owner in New York City, with over 25 million square feet of space across the tri-state area, including some of the most iconic addresses in Manhattan. When the COVID-19 pandemic hit, the company needed a way to integrate new safety measures for tenants after its buildings reopened for business. Working with key partners McKinsey & Co., Infosys, Rigado, and Microsoft, RXR used Microsoft Azure to create and deploy an intelligent, secure, hyperscalable solution—in just a few months.


The solution is named RxWell™—a comprehensive, public-health–based, data-driven program that merges physical and digital assets to help keep employees informed and supported during the “new abnormal” and beyond. Powered by the Internet of Things (IoT) and the intelligent edge, and firmly rooted in responsible artificial intelligence (AI) principles, RxWell combines real-time computer vision, sensors, AI, mobile apps and dashboards, and in-person service offerings. And Azure is making it all possible.


 


Screenshot 2020-10-08 162416.png


 


RXR Realty is using Azure SQL Database as warm storage where events flow through Azure Stream Analytics and Azure Function to power multiple dashboards and reporting applications on mobile and embedded devices on-site.


 


Schneider Electric, and its partner ThoughtWire, created an end-to-end solution for facilities and clinical operations management in healthcare settings. The solution uses Internet of Things (IoT) technology and Microsoft Azure cloud services to help hospitals and clinics reduce costs, minimize their carbon footprint, promote better patient experiences and outcomes, and increase staff satisfaction. The ThoughtWire application suite uses Azure Virtual Machines infrastructure as a service (IaaS) capabilities to run virtual and containerized workloads that support the ThoughtWire digital twin platform and Smart Hospital Suite in Azure. It also uses Azure SQL Database, which provides the right security controls for solution benefits like encrypting data at rest.


 


HB Reavis uses the power of Microsoft Azure IoT Hub, Azure SQL Database, and Stream Analytics. It captures complex IoT metrics essential for monitoring and analyzing diverse aspects of efficient workspace, such as office utilization, environmental insights (CO2, temperature, humidity), and general team dynamics and interactions. Once this data is collected, the platform converts it into actionable insights and visualizes the results on interactive Microsoft Power BI dashboards.


 


VERSE Technology is building an edge-to-cloud solution based on Microsoft Azure for one of the world’s largest baking companies, Grupo Bimbo. In the past year it has converted 15 factories, with many more scheduled soon. Now, with all this data being processed, analyzed, and visualized in real time, Bimbo can get an unprecedented, accurate picture of the company’s production process and begin transforming its business.


 


Screenshot 2020-10-08 162518.png


 


VERSE has designed a connected cloud solution for Bimbo based on Azure. Azure IoT Hub connects all the data streamed to the platform from different edge devices and sensors. Azure SQL Database is the highly scalable solution for storing RPMs, temperature, gas consumption, and more. This first phase encompasses more than 1,000 devices at the edge, with a range of communication protocols––RF, cellular, and Wi-Fi according to the needs of the location. The company plans to expand to more than 10,000 devices soon.


 


In this series, we’ll dig deeper into how to build scalable and efficient IoT solutions leveraging Azure SQL. Here are other articles on this subject:



 

Dharma Ransomware: Recovery and Preventative Measures



 


This is John Barbare and I am a Sr Customer Engineer at Microsoft focusing on all things in the Cybersecurity space. In the last several months, I have been getting a lot of requests around a certain type of Ransomware that steals credentials through targeted phishing campaigns, extracts credentials to get Domain Admin access, and then encrypts files in SharePoint/OneDrive for a ransom. One of the first questions I ask is, “are you using Multi-Factor Authentication (MFA) for your high privileged accounts/credentials?” and the answer comes back with “we are using long and extraordinarily complex passwords that no one can crack.” My take on this response, and it has been for a very long time, is that if you create the longest and most complex passwords, then 1) the user is not going to remember them, and will write them down and store them under the keyboard, and/or 2) the user will keep a notepad file on the desktop named “Passwords.txt” with a list of all the long and hard-to-remember passwords, so they don’t have to bug anyone to reset a forgotten password and can easily copy/paste into the password field. Yes, I have seen this happen in lots of places I have visited in the last XX years, and it is still common practice as of today. With the following information, I hope this article changes the way you or anyone secures high privileged accounts.


 


In the first part of this article, I will break down one variant of a certain type of human-operated Ransomware and how it gains access, and then demo how to recover your files if they become encrypted in SharePoint Online. After the demo, I will show you the best proactive approach to securing high privileged accounts with MFA. In closing, I will use the defense-in-depth approach and layer the best settings to apply in the M365 security stack to block the attacker at the front door. With that said, let’s start talking about Ransomware and getting things locked down.


 


Stopping Attacks by using MFA


 


MITRE ATT&CK Techniques Observed 


 


The below diagram shows a type of human-operated Ransomware most similar to Dharma. Ransomware families such as REvil, Samas, Bitpaymer, DoppelPaymer, Dharma, and Ryuk are deployed by human operators, and their use has spiraled in the last several months. Following the top left of the diagram and moving down the kill chain depicts how human-operated Ransomware achieves its end state – extremely similar to Dharma Ransomware. To better understand how Dharma can infect your machines or your enterprise, Microsoft Defender for Endpoint, Microsoft 365 Defender, and Microsoft Defender for Identity all do an excellent job of mapping every attack, so one can depict how the attacker achieves the end state of dropping the malicious payload. On the left is each step of the attack, which is mapped directly to the MITRE ATT&CK framework. On the right is Microsoft’s recommendation to prevent the attack.


 


ATT&CK Chain


 


Ransomware and SharePoint Online files – How to Restore (Option 1) 


 


In my lab I will run the same type of attack but will fast forward to right before the files are encrypted. I am also using a long and complex password which my Red Team (me) had no problem at all getting into a machine using various methods. I am currently attempting to get the keys to the kingdom (Domain level access) as would any attacker. As you can see below, the sensitive files in the company are normal and everything is secure until I get ready to deploy an attack to get the SharePoint Admin’s credentials (which I am hoping is just a long and complex password) to deploy the Ransomware payload and encrypt and/or delete the files. 


 


SPO Files Before Attack


 




 


Files Encrypted with Payload


 


As you can see, the files are encrypted with the .xati extension and without the decryption tool, the sensitive files the company needs are locked. The forensic team opens the “Files Encrypted.txt” to see what the message is. 


 


Ransomware Payment Message


 


At this point, it is evident that using just passwords for a high privileged account, and not using any type of rights management, has caused the company to come under a Ransomware attack. An option exists to restore the files without paying a ransom, through Microsoft’s restore option. This is not a new feature, but lots of clients are not aware of it or have just migrated to O365 in the cloud.


 


Using a Site Owner account and a machine that has not been compromised, one can log into the SharePoint library. Click on the settings icon in the upper right and click “Restore this library.”


  


Restoring the Files


 


Under Select a date and time, choose “custom date and time,” and investigate when the files were deleted or encrypted and move back to an earlier time when the files were at an original state. Select the folders/files and select Restore and confirm the rollback to the original and unaltered state. 


 


Confirming the Files and Dates


 


The process may take some time depending on the size of the library. Once complete, click on Return to documents to see if all the files are restored.


 


Return to Documents


 


 The files are completely restored as seen below. 


 


Files Restored as seen from extension


 


Ransomware and SharePoint Online files – How to Restore (Option 2) 


 


It’s certainly possible that an attacker could delete/encrypt to the point where the customer cannot restore it themselves. In case you are not able to restore using option 1, you can use this option to get your files back. The backups are not going to be affected by any encryption or deletion done by an attacker and you have 14 days from an attack to get the files back. 14 days is not a huge window of time, but as long as a good copy of the documents existed in the last 14 days you can contact Microsoft support to restore them. Be sure to have the following details: 


 



  • What site collection URL(s) that have been affected by ransomware? 

  • When was the last known time the files were not modified by the ransomware? 


 


SharePoint Online retains backups of all content for 14 additional days beyond actual deletion. If content cannot be restored via the Recycle Bin or Files Restore, an administrator can contact Microsoft Support to request a restore any time inside the 14-day window. As long as a good copy of the files existed at a point in time within those 14 days, an attacker has no way to impact Microsoft’s backups. These backups can be used to restore if all other methods of restore are not an option; the 14 days’ worth of backups are not accessible to the customer directly, only through support. So why does Microsoft have this specific option to contact support for files you are not able to restore yourself? When ransomware was surfacing more and more a few years ago, this capability was created in response to Ransomware and human-operated Ransomware like Dharma, so customers could retrieve files after an attack.


 


SharePoint in M365 and SharePoint Server Information Rights Management (IRM) Feature 


 


When you use SharePoint in Microsoft 365 or SharePoint Server, you can protect documents by using the SharePoint information rights management (IRM) feature. This feature lets administrators protect lists or libraries so that when a user checks out a document, the downloaded file is protected so that only authorized people can view and use the file according to the information protection policies that you specify. For example, the file might be read-only, disable the copying of text, prevent saving a local copy, and prevent printing the file. 


 


Word, PowerPoint, Excel, and PDF documents support this SharePoint IRM protection. By default, protection is restricted to the person who downloads the document. You can change this default with a configuration option named Allow group protection, which extends the protection to a group that you specify. For example, you could specify a group that has permission to edit documents in the library so that the same group of users can edit the document outside SharePoint, regardless of which user downloaded the document. Or you could specify a group that is not granted permissions in SharePoint but users in this group need to access the document outside SharePoint. For SharePoint lists and libraries, this protection is always configured by an administrator, never an end user. You set the permissions at the site level, and these permissions, by default, are inherited by any list or library in that site. If you use SharePoint in Microsoft 365, users can also configure their Microsoft OneDrive library for IRM protection. 


 


For more fine-grained control, you can configure a list or library in the site to stop inheriting permissions from its parent. You can then configure IRM permissions at that level (list or library) and they are then referred to as “unique permissions.” However, permissions are always set at the container level; you cannot set permissions on individual files. 


 


The IRM service must first be enabled for SharePoint. Then, you specify IRM permissions for a library. For SharePoint and OneDrive, users can also specify IRM permissions for their OneDrive library. SharePoint does not use rights policy templates, although there are SharePoint configuration settings that you can select that match some settings that you can specify in the templates. 


 


If you use SharePoint Server, you can use this IRM protection by deploying the Azure Rights Management connector. This connector acts as a relay between your on-premises servers and the Rights Management cloud service. For more information, see Deploying the Azure Rights Management connector. 


 


How to Prevent This Attack by Using MFA 


 


Dharma Ransomware and other Ransomware use malicious documents or links inside carefully crafted phishing emails that look real to the average user. After establishing access, the success of the attack relies on whether the campaign operators manage to gain control over highly privileged domain accounts. To stop this from happening, setting up MFA on every high privileged account, at a minimum, must be implemented. Ideally, all accounts should use MFA (as seen in the first step below), or it can be set per administrator, as I will do for a SharePoint Admin in the following steps.


 


Since a SharePoint Admin has access to sensitive files which can be deleted/encrypted if the account is (even temporarily) compromised, this is a perfect example to start with. A regular user that does not have Admin credentials will not have the ability to carry out a well-crafted Ransomware attack, and several options under the settings menu will not appear, as they are for authenticated Admins only. Therefore, privilege escalation is necessary for the attacker, and this is where using MFA will prevent it from happening.


 


Managing security can be difficult with common identity-related attacks like password spray, replay, and phishing, after which an organization can be hit with a Ransomware attack. Security defaults make it easier to help protect your organization from these attacks with preconfigured security settings:



  • Requiring all users to register for Azure Multi-Factor Authentication. 



  • Requiring administrators to perform multi-factor authentication. 

  • Blocking legacy authentication protocols. 

  • Requiring users to perform multi-factor authentication when necessary. 

  • Protecting privileged activities like access to the Azure portal. 


Once turned on, security defaults require all of your users to register for MFA with the Microsoft Authenticator app and block legacy authentication. Users have 14 days to register for MFA with the Microsoft Authenticator app on their smartphones, starting from the first time they sign in after security defaults have been enabled. After 14 days have passed, the user will not be able to sign in until MFA registration is completed.


 


Sign into Azure AD – https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview 


Select Properties, Manage Security Defaults, and Enable Security Defaults. 


 


Setting up Security Defaults in AAD


 


Click Save when complete. 


 


Adding a specific user(s) to the SharePoint Admin Role  


 


Select the user in Azure AD on the Users tab or selecting https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/MsGraphUsers 


 


Once the specific user is selected, click on + Add assignments and select SharePoint Administrator under Role. 


 


Adding SharePoint Admin


 


Select Setting to add assignment type  


 



  • Eligible assignments require the member of the role to perform an action to use the role. Actions might include performing a multi-factor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers. 



  • Active assignments do not require the member to perform any action to use the role. Members assigned as active have the privileges assigned to the role at all times. 


 


Select Assign and the user will now be assigned as a SharePoint Admin with the necessary assignment type.


 


Adding Assignment


 


User in the correct assignment  


 


Confirming User Assignment


 


 To enable MFA for a SharePoint Admin who Manages Sensitive Files 


 


To provide an additional level of security for sign-ins, you must use MFA, which uses both a password, which should be strong, and an additional verification method based on: 


 



  • Something you have with you that is not easily duplicated, such as a smart phone. 

  • Something you uniquely and biologically have, such as your fingerprints, face, or other biometric attribute. 


The additional verification method is not employed until after the user’s password has been verified. With MFA, even if a strong user password is compromised, the attacker does not have your smart phone or your fingerprint to complete the sign-in. 


 


Go to Conditional Access in Azure AD – https://portal.azure.com/#blade/Microsoft_AAD_IAM/ConditionalAccessBlade/Policies 


Or navigate in Azure AD and select Security, Conditional Access, and + New Policy.  


  


Name the policy for the specific SharePoint MFA policy, then under Users and groups select Directory roles, choose SharePoint Administrator, and click Select. 


 


Adding SharePoint Admin in Policy


 


Select Cloud apps or actions; for Include, select All cloud apps, and for Exclude, select a break glass account. 


 


Selecting All Cloud Apps


 


Select Conditions, then Client apps; for Configure select Yes, leave everything as default, and select Done. 


 


Configuring Client Apps


 


For Access controls, select Grant access, select Require multi-factor authentication, and click Select.  


 


Turning on MFA


 


For more fine-grained user access control in the Sessions tab, the configurations for the correct session controls for your organization can be found here. Once complete, click the Create tab.  
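If you would rather create the same policy as code, the Microsoft Graph conditional access API can do it. The sketch below assumes the Microsoft Graph PowerShell module, uses placeholder IDs for the SharePoint Administrator role template and the break glass account (expressed here as a user exclusion), and starts in report-only mode so you can validate it before enforcing.


    # Sketch only - the placeholder IDs must be replaced with values from your tenant.
    Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

    $policy = @{
        displayName = "Require MFA - SharePoint Administrator"
        state       = "enabledForReportingButNotEnforced"   # switch to "enabled" after validation
        conditions  = @{
            users          = @{
                includeRoles = @("<SHAREPOINT_ADMIN_ROLE_TEMPLATE_ID>")
                excludeUsers = @("<BREAK_GLASS_ACCOUNT_OBJECT_ID>")
            }
            applications   = @{ includeApplications = @("All") }
            clientAppTypes = @("all")
        }
        grantControls = @{
            operator        = "OR"
            builtInControls = @("mfa")
        }
    }

    Invoke-MgGraphRequest -Method POST `
        -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
        -Body ($policy | ConvertTo-Json -Depth 10) `
        -ContentType "application/json"
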


 


SharePoint Admin Signing Into O365 


 


When the SharePoint Admin signs in, they will be prompted for the current password and then prompted to set up MFA. 


 


Enter Password


 


The user must register for MFA and select Next.  


 


Select Next


 


The next steps are registering for MFA and using the Microsoft Authenticator App. 


 


Download the Microsoft Auth App


 


Scan the QR code with the Microsoft Authenticator App and Select Next. 


 


Scan QR Code with App


 


Once scanned, your phone screen will display the notification. 


 


SharePoint Admin Approving Auth


 


A notification will be sent to the Microsoft Authenticator App; select Approve. Once that is complete, your machine will show the notification as approved.  


 


Select Next


 


Select Next to further secure your account by providing a phone number for a text or phone call, then select Next again. 


 


Enter in Phone Number


 


Enter the code into the field and select Next. 


 


Placing 6 Digit Code to Verify


 


Select Next and you will be logged into Office 365. Every time you log in, you will use MFA to sign into the privileged account as a SharePoint Admin, further protecting the account and securing SharePoint documents, folders, files, etc. 


 


Every time the specific username/password is entered into Office 365, a notification will be sent to your phone so you can Approve or Deny the pop-up from the Microsoft Authenticator App. 


 


Approve Sign in Request


 


If approved, you will be logged into Office 365. 


 


If someone had sent a phishing email and the SharePoint Admin had clicked it, in the background the password would be sent to the attacker (assuming proper security controls like Safe Links, etc. were not implemented). Once the attacker has the long and complex password, they would use the credentials to gain higher access, spread malware, and/or conduct a ransomware attack. All of this is stopped because the real SharePoint Admin would receive a pop-up message on the Authenticator app (on their physical mobile device) to approve or deny the request using MFA. If the SharePoint Admin receives this prompt when they never signed in (say, out having a drink after a long day onsite), they can quickly deny the login and alert the security team that someone has the password but was unsuccessful at logging into O365 as another person. At this point, the Global Admin can force a password reset and review any successful and unsuccessful login activity for the user in Azure AD, including the date, time, application, IP address, location, and Conditional Access applied. More information can be found here. 
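As a rough illustration of that last point, the sign-in activity can also be pulled with PowerShell. This is only a hedged sketch using the AzureADPreview module's Get-AzureADAuditSignInLogs cmdlet with a hypothetical UPN; property names may vary slightly between module versions.


    # Sketch only - review recent sign-ins for a (hypothetical) SharePoint Admin account.
    Connect-AzureAD

    Get-AzureADAuditSignInLogs -Top 50 `
        -Filter "userPrincipalName eq 'sharepoint.admin@contoso.com'" |
        Select-Object CreatedDateTime, AppDisplayName, IpAddress, ConditionalAccessStatus, Status
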


 


Mitigations 


 



  • Utilize the Microsoft Defender Firewall and your network firewall to prevent RPC and SMB communication among endpoints whenever possible. This limits lateral movement as well as other attack activities. 

  • Secure internet-facing RDP services behind a multi-factor authentication (MFA) gateway. If you do not have an MFA gateway, enable network-level authentication (NLA), and ensure that server machines have strong, randomized local admin passwords. 

  • Turn on tamper protection features to prevent attackers from stopping security services. 

  • Enforce strong, randomized local administrator passwords. Use tools like LAPS. 



  • Disallow macros or allow only macros from trusted locations. See the latest security baselines for Office and Office 365. 

  • Turn on AMSI for Office VBA if you have Office 365. 

  • Monitor for clearing of event logs. Windows generates security event ID 1102 when this occurs (see the sketch after this list). 

  • Ensure internet-facing assets have the latest security updates. Audit these assets regularly for suspicious activity. 

  • Determine where highly privileged accounts are logging on and exposing credentials. Monitor and investigate logon events (event ID 4624) for logon type attributes. Highly privileged accounts should not be present on workstations. 
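For the two event-log items above, here is a quick local sketch of what that monitoring could look like on a single endpoint with built-in PowerShell; in practice you would centralize this in your SIEM rather than run it per machine.


    # Sketch only - run in an elevated PowerShell session on the endpoint being checked.

    # Security log clearing (event ID 1102); returns nothing if the log was never cleared.
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 1102 } -MaxEvents 10 -ErrorAction SilentlyContinue

    # Recent logon events (event ID 4624) to review logon types for privileged accounts.
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4624 } -MaxEvents 50 |
        Select-Object TimeCreated, Id, Message
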



 Detection details 


 


Antivirus 


Microsoft Defender Antivirus detects threat components as the following malware: 



Endpoint detection and response (EDR) 


Alerts with the following titles in the Microsoft Defender Security Center portal can indicate threat activity on your network: 



  • Doppelpaymer ransomware and other variants 


The following alerts might also indicate threat activity associated with this threat. These alerts, however, can be triggered by unrelated threat activity and are not monitored in the status cards provided with this report: 



  • Process execution from an Alternate Data Stream (ADS) 

  • Use of living-off-the-land binary to run malicious code 



  • Windows Sysinternals tool renamed 

  • Attempt to hide use of dual-purpose tool 

  • Malicious credential theft tool execution detected 

  • Suspected credential theft activity 

  • PowerShell made a suspicious network connection 



  • Possible Antimalware Scan Interface (AMSI) tampering 

  • Suspicious PowerShell command line 

  • Suspicious service registration 

  • Network connection to a risky host 

  • Suspicious WMI (Windows Management Instrumentation) process creation 



  • Suspicious sequence of exploration activities 

  • User account created under suspicious circumstances 

  • New group added suspiciously 

  • New local admin added using Net commands 


Attack surface reduction rules 


These rules can block or audit activity associated with various stages of this threat: 



  • Use advanced protection against ransomware 

  • Block process creations originating from PSExec and WMI commands 

  • Block persistence through WMI event subscription 



  • Block credential stealing from the Windows local security authority subsystem (lsass.exe) 


 


Conclusion 


 
Thanks for taking the time to read this long blog. I hope you had fun reading about a typical day in the life of a Cybersecurity Customer Engineer, assisting clients in securing cloud infrastructure and taking proactive ransomware measures. Ransomware is nothing to play around with, as the attackers are always one step ahead. By not being complacent, not relying on passwords alone, using MFA to secure privileged accounts, and using a Microsoft 365 defense-in-depth approach, your endpoints will be more secure than you will ever know. Various methods are in place to restore your SharePoint Online library from a breach if an attacker does get inside. Using MFA to secure all admins' credentials and accounts cannot be stressed enough. Always test all controls stated in this blog to make sure they are properly configured, tested, secured, tested again, and then finally implemented after more testing. Also, for more information on going passwordless, visit aka.ms/gopasswordless.


 


Hope to see you in my next blog and always protect your endpoints!  


Thanks for reading and have a great Cybersecurity day!  


 


Follow my Microsoft Security Blogs: http://aka.ms/JohnBarbare  and also on LinkedIn.  

APEX Domains to Azure Functions in 3 Ways

APEX Domains to Azure Functions in 3 Ways

This article is contributed. See the original author and article here.

Throughout this series, I’m going to show how an Azure Functions instance can map APEX domains, add an SSL certificate and update its public inbound IP address to DNS.


 



  • APEX Domains to Azure Functions in 3 Ways

  • Let’s Encrypt SSL Certificate to Azure Functions

  • Updating DNS A Record for Azure Functions Automatically

  • Deploying Azure Functions via GitHub Actions without Publish Profile


 


Let’s say there is an Azure Functions instance. One of your customers wants to add a custom domain to the Function app. As long as the custom domain is a sub-domain type like api.contoso.com, it shouldn’t be an issue because CNAME mapping is supported out-of-the-box. But what if the customer wants to add an APEX domain?


 



Both “APEX domain” and “root domain” refer to the same thing, such as contoso.com.



 



 


Adding the root domain through the Azure Portal can’t be accomplished, as the message above shows. Adding the APEX domain to an Azure Functions instance requires an A record, which needs a public IP address. But an Azure Functions instance doesn’t support this via the portal.


 


Should we give it up now? Well, not really.


 



 


As always, there’s a way to get around it. Throughout this post, I’m going to show how to map the APEX domain to an Azure Functions instance in three different ways.


 


Verifying Domain Ownership


 


First of all, you need to verify the domain ownership at the Custom Domains blade of the Function app instance.


 



 



  • Get the Custom Domain Verification ID from the picture above.

  • Add the TXT record, asuid.contoso.com, to your DNS server.

  • Add the A record with the IP address to your DNS server (a sketch for Azure DNS follows this list).
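

If your DNS zone happens to be hosted in Azure DNS, those two records could be added with the Az.Dns PowerShell cmdlets. This is only a minimal sketch with placeholder values; for any other DNS provider, use its own tooling instead.


    # Sketch only - placeholder values; assumes the zone lives in Azure DNS.
    $resourceGroupName = "[DNS_ZONE_RESOURCE_GROUP]"
    $zoneName = "contoso.com"

    # TXT record (asuid.contoso.com) carrying the custom domain verification ID.
    New-AzDnsRecordSet -ResourceGroupName $resourceGroupName -ZoneName $zoneName `
        -Name "asuid" -RecordType TXT -Ttl 3600 `
        -DnsRecords (New-AzDnsRecordConfig -Value "<CUSTOM_DOMAIN_VERIFICATION_ID>")

    # A record at the zone apex pointing to the app's inbound IP address.
    New-AzDnsRecordSet -ResourceGroupName $resourceGroupName -ZoneName $zoneName `
        -Name "@" -RecordType A -Ttl 3600 `
        -DnsRecords (New-AzDnsRecordConfig -Ipv4Address "<INBOUND_IP_ADDRESS>")
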


 


The verification process can be done via the portal.


 



If you want to run the verification with Azure CLI, please have a look at the link.



 



 


Now you’ve verified the domain ownership, but you still haven’t mapped the APEX domain.


 


1. Through Azure PowerShell


 


If you can’t make it through Azure Portal, Azure PowerShell will be one alternative. Use the Set-AzWebApp cmdlet to map your APEX domain.


 


    $resourceGroupName = "[RESOURCE_GROUP_NAME]"
    $functionAppName = "[FUNCTION_APP_NAME]"
    $domainName = "contoso.com"
    Set-AzWebApp `
        -ResourceGroupName $resourceGroupName `
        -Name $functionAppName `
        -HostNames @( $domainName, "$functionAppName.azurewebsites.net" )

 


When you use Azure PowerShell, you MUST make sure of one thing: the -HostNames parameter specified above MUST include the existing domain names (line #7). Otherwise, all existing domains will be removed, and you will get a warning like the one shown further below.
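

One way to avoid dropping anything is to read the host names that are already bound and append the new one. This is only a sketch reusing the variables defined above together with Get-AzWebApp:


    # Sketch: keep every existing host name and add the new APEX domain on top.
    $app = Get-AzWebApp -ResourceGroupName $resourceGroupName -Name $functionAppName
    $hostNames = @($app.HostNames) + $domainName | Select-Object -Unique

    Set-AzWebApp `
        -ResourceGroupName $resourceGroupName `
        -Name $functionAppName `
        -HostNames $hostNames
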


 



 


If you add all custom domains including the default domain name like *.azurewebsites.net, you will be able to see the screen below:


 



 


2. Through Azure CLI


 


If you prefer using Azure CLI, use the command az functionapp config hostname add.


 


    az functionapp config hostname add \
        -g [RESOURCE_GROUP_NAME] \
        -n [FUNCTIONAPP_NAME] \
        --hostname "contoso.com"

 


Unlike Azure PowerShell, you can add one hostname at a time and won’t lose the ones already added.


 


3. Through ARM Template


 


You can also use ARM Template instead of command-line commands. Here’s the bicep code for it (line #5-8).


 


    resource fncapp 'Microsoft.Web/sites@2020-06-01' = {
      name: functionApp.name

    }

    resource fncappHostname 'Microsoft.Web/sites/hostNameBindings@2020-06-01' = {
      name: '${fncapp.name}/contoso.com'
      location: fncapp.location
    }


 


When the bicep file is compiled to ARM Template, it looks like:


 


    {

      "resources": [

        {
          "type": "Microsoft.Web/sites/hostNameBindings",
          "apiVersion": "2020-06-01",
          "name": "[format('{0}/{1}', variables('functionApp').name, 'contoso.com')]",
          "location": "[variables('functionApp').location]",
          "dependsOn": [
            "[resourceId('Microsoft.Web/sites', variables('functionApp').name)]"
          ]
        },

      ]

    }

 




 


So far, we’ve used three different ways to map an APEX domain to an Azure Functions instance. Generally speaking, it’s rare to map a custom domain to an Azure Functions instance, and even rarer to map the APEX domain. Therefore, the Azure Portal doesn’t support this feature. However, as we’ve already seen, we can use Azure PowerShell, Azure CLI, or an ARM template to add the root domain. I hope this post helps if one of your clients’ requests is the one described here.


 


In the next post, I’ll discuss how to bind a Let’s Encrypt generated SSL certificate to the custom APEX domain on the Azure Function app.


 


This article was originally published on Dev Kimchi.

Meet Jenny Lay-Flurrie, Chief Accessibility Officer at Microsoft

Meet Jenny Lay-Flurrie, Chief Accessibility Officer at Microsoft

When it comes to assistive technologies, the person leading the way for Microsoft is its Chief Accessibility Officer, Jenny Lay-Flurrie. She’s from Birmingham, England, is profoundly deaf, works in Seattle, Washington, USA, and is passionate about the importance of putting inclusion at the heart of corporate culture. This is no small undertaking, as it requires a paradigm shift in corporate thinking. But Jenny has never shied away from a fight. This October 1, 2020 interview may give you a first look at this amazing woman. Here’s a more in-depth look into her: https://news.microsoft.com/stories/people/jenny-lay-flurrie.html.