by Contributed | Dec 8, 2020 | Azure, Microsoft, Technology
With Azure Sentinel you can receive all sorts of security telemetry, events, alerts, and incidents from many different and unique sources. Those sources can be firewall logs, security events, and audit logs from identity and cloud platforms. In addition, you can create digital trip wires and send that data to Azure Sentinel. Ross Bevington first explained this concept for Azure Sentinel in “Creating digital tripwires with custom threat intelligence feeds for Azure Sentinel”. Today you can walk through expanding your threat detection capabilities in Azure Sentinel using honey tokens, in this case Canarytokens.
What is a honey token? A honey token is a digital artifact, such as a Word document, Windows folder, or JavaScript file, that acts as a digital trip wire and alerts you when it is opened or accessed, for example by making a GET HTTP call to a public-facing URL or IP. The goal is to make the artifact enticing enough that an attacker will want to investigate and exfiltrate it, while also ensuring you reduce false positives from normal users. One way to do this is to create a separate folder outside the normal directory structure. This could take the form of naming a Word document High Potential Accounts.docx and placing it in a Sales share, but nested two more directories deep.
The other key is to make the digital artifact searchable and easily found; you want the attacker to see the token and access it. You can also sprinkle these honey tokens throughout the network and across different use cases. The point is to ensure that the honey token sits in a visible location and can be found through directory searches with normal user credentials.
As with most things, a balanced approach should be taken with honey token names and placement. Think through where in the cyber kill chain you want the digital trip wire, and find ways to make the token enticing to an attacker while reducing false positives from normal employees and routines.
Honey tokens are not a new concept, but the approach described here, using a service called Canarytokens, is a bit newer. Canarytokens is a free service provided by Thinkst that generates different types of tokens and provides the back-end trip wire logging and recording. The service lets you focus on naming and placement specific to your industry and business rather than building a public-facing URL that logs and collects tripped tokens. Thinkst also offers a paid service that includes many additional features.
In the example below you will walk through creating a free Canarytoken (a honey token as described above) and using it to update Azure Sentinel when it is triggered.
To begin, you can deploy the Logic App Ingest-CanaryTokens here. The Logic App will act as a listener and will provide a URL you can use in the Canarytoken generation.
To deploy the Logic App, fill in your Azure Sentinel Workspace ID and Key.

Once deployed, go to the Logic App and in the Overview click the blue link: See trigger history

Copy the URL from the following field: Callback url [POST]

With this Logic App and a callback listening URL you can now generate a Canarytoken.
To create the Canarytoken go to the following website: Canarytokens
- Choose Microsoft Word Document
- Fill out your email address, then enter a <SPACE> and paste the Logic App callback URL
- In the final field enter a description (see below)
You will also use the description to host your entities for Azure Sentinel. Use a comma as a separator between the pieces of entity information you want to capture when the trip wire is tripped.
Be sure to be descriptive about which server share or OneDrive location the Canarytoken will be placed in. Because you will generate several different tokens, these descriptive notes will come through in the triggered alert, so you can quickly dive into that server or service to investigate the attacker’s activity further.
In this example you could use:
| Name | Descriptor | Azure Sentinel parsed column name |
| --- | --- | --- |
| Computername | The computer name where the Canarytoken is hosted | CanaryHost |
| Public IP | The public IP of the internet access where the token is hosted; can be used to correlate whether the token was triggered within the data center or from a known public IP of the server | CanaryPublicIP |
| Private IP | The private IP of the computer where the token is hosted; can be used to correlate additional logs in firewalls and other IP-based logs | CanaryPrivateIP |
| Share Path | The share path where this Canarytoken is hosted; helps indicate where a scan or data compromise occurred | CanaryShare |
| Description | Provides additional context for the SOC analyst about the purpose of the Canarytoken and its placement | CanaryDescription |
*EXAMPLE:
FS01,42.27.91.181,10.0.3.4,T:\departments\sales\hipo\specials,token placed on FS01 available to all corporate employees and vendors
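As a quick sanity check, the snippet below previews how the detection rule later in this post will split that memo string into the five columns from the table above. This is only a sketch you can paste into any Log Analytics or Azure Data Explorer query window; it does not depend on the Canarytoken having fired yet.

print memo_s = @"FS01,42.27.91.181,10.0.3.4,T:\departments\sales\hipo\specials,token placed on FS01 available to all corporate employees and vendors"
| extend Canarydata = parse_csv(memo_s)
| project CanaryHost = tostring(Canarydata[0]), CanaryPublicIP = tostring(Canarydata[1]),
          CanaryPrivateIP = tostring(Canarydata[2]), CanaryShare = tostring(Canarydata[3]),
          CanaryDescription = tostring(Canarydata[4])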
- Once completed, click Create my Canarytoken

Check out the further use cases where Canarytokens can be placed, then go ahead and download your MS Word file.

Notice that the file name of the download is the Canarytoken ID itself. This Word document name really is not that compelling for an attacker to discover, exfiltrate, and investigate, so you should rename the file immediately to something more compelling.
You want to grab the attention of an attacker searching for valuable information. Remember, the overarching goal for most attackers is obtaining key corporate data; the Canarytoken helps alert you to a violation of the confidentiality, integrity, or availability of that data. A name like Project Moonshot placed in a NextGeneration folder could help entice. A document named High Potential Account List in a Sales team folder may also do the trick. Be creative about what data could be valuable in your industry and business.
In this example we used White Glove Customer Accounts.docx

To make the document seem more legitimate you can use the website Mockaroo (a random data generator and API mocking tool for JSON / CSV / SQL / Excel) to generate random and fictitious data easily. Here you can create what appears to be a customer account list with account numbers and email addresses.

Once you fill out the fields you want, download a CSV sample by clicking the green Download Data button. Open it with Excel and adjust the rows and columns so it is nicely formatted. With the table looking presentable, copy the content from Excel, open the Word document Canarytoken, paste the content in, and save the document.

You now have a Canarytoken that looks authentic and hopefully will not arouse the suspicion of the attacker, but will be visible and entice them to exfiltrate and open it. Continue to explore Mockaroo and the data you can generate; it is a very easy to use and helpful tool.
Now find a home for the Word document in a file share on a file server, or as an email attachment in an executive’s mailbox. Think back to the description you gave the token and place it accordingly, so that in the worst case, if you are attacked, the alert tips you off to where on your network to focus your investigation in the logs and events Azure Sentinel is collecting.
To test this, open the Word document on your computer or on another server or computer with Word. When Microsoft Word opens, a 0.1 by 0.1 image embedded in the header and footer with an open URL executes a GET HTTP call to the corresponding Canarytoken endpoint you created earlier. Once this occurs you will receive an email with details like those below.

Be sure to also check out the More info on this token here link, which provides more geo information on the public IP that opened the document and whether or not it came from a known Tor exit node.

You can also download a JSON or CSV file of the detailed information found in the Incidents generated when the Canarytoken was opened.
In addition to the email, the Logic App listener will be invoked; it takes the incident data, enriches it a little further, and sends it to Azure Sentinel into a custom logs table named CanaryTokens_CL.
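A quick way to confirm that events are arriving is to run a short query against the custom table in the Logs blade. This is just a sketch, assuming the Logic App has fired at least once; memo_s and src_ip_s are the same fields used by the detection rule below.

CanaryTokens_CL
| where TimeGenerated > ago(1d)
| project TimeGenerated, src_ip_s, memo_s
| take 10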

Some of those enriched fields include geo information for the public IP address that triggered the Canarytoken. There is also information parsed from the memo field, covering the specifics of the Canarytoken’s placement in your environment and its objectives, along with some logic to tell you whether the canary was triggered on the host itself. Finally, string fields for URLs have been populated so you can review the management and history of the Canarytoken if you need to pivot from Azure Sentinel to the Canarytoken while investigating.
You can now use Azure Sentinel to raise a high-priority incident and work it with case management. You can also correlate it with other data collected in Azure Sentinel, further helping you investigate the incident.
Below is an example scheduled query rule in Azure Sentinel you can use to follow along with this walkthrough. Step-by-step instructions are here.
id: 27dda424-1dbe-4236-9dd5-c484b23111a5
name: Canarytoken Triggered
description: |
  'A Canarytoken has been triggered in your environment. This may be an early sign of attacker intent and activity,
  please follow up with Azure Sentinel logs and incidents accordingly, along with the server this Canarytoken was hosted on.
  Reference: https://blog.thinkst.com/p/canarytokensorg-quick-free-detection.html'
severity: High
requiredDataConnectors:
  - connectorId: Custom
    dataTypes:
      - CanaryTokens_CL
queryFrequency: 15m
queryPeriod: 15m
triggerOperator: gt
triggerThreshold: 0
tactics:
  - Discovery
  - Collection
  - Exfiltration
relevantTechniques:
query: |
  CanaryTokens_CL
  | extend Canarydata = parse_csv(memo_s)
  | extend CanaryHost = tostring(Canarydata[0]), CanaryPublicIP = tostring(Canarydata[1]), CanaryPrivateIP = tostring(Canarydata[2]), CanaryShare = tostring(Canarydata[3]), CanaryDescription = tostring(Canarydata[4])
  | extend CanaryExecutedonHost = iif(CanaryPublicIP == src_ip_s, true, false)
  | extend timestamp = TimeGenerated, IPCustomEntity = src_ip_s //, AccountCustomEntity = user_s, HostCustomEntity = computer_s
entityMappings:
  - entityType: IP
    fieldMappings:
      - identifier: Address
        columnName: IPCustomEntity
Once you have created the rule, open the Canarytoken Word document one more time to generate an alert.
Within 15 minutes or so, a new Azure Sentinel incident for the triggered Canarytoken will appear. Your SOC can now use the logs fed into Azure Sentinel to correlate and investigate further.
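If you prefer not to wait and watch the incidents blade, a hedged check against the SecurityAlert table (assuming the rule name from the sample above) can confirm the rule fired:

SecurityAlert
| where TimeGenerated > ago(1h)
| where AlertName == "Canarytoken Triggered"
| project TimeGenerated, AlertName, AlertSeverity, Entities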

In addition, the investigation graph is populated with the public IP address from which the token was triggered.

Feel free to tweak the custom entities to your liking. Another option is to point them at where the Canarytoken was placed, to bolster pivoting in the investigation graph. The alert sample above parses the comma-separated memo field you filled in when generating the initial Canarytoken.
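For example, one possible tweak, assuming the CanaryHost value parsed from the memo is a host name Azure Sentinel can resolve, is to add a second mapping alongside the existing IP entity in the rule:

entityMappings:
  - entityType: IP
    fieldMappings:
      - identifier: Address
        columnName: IPCustomEntity
  - entityType: Host
    fieldMappings:
      - identifier: HostName
        columnName: CanaryHost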

In this article you learned about honey tokens and the Canarytokens service, how to use Canarytokens in your environment, and how to integrate the enriched alerts into Azure Sentinel to raise awareness of a potential attacker and of any data exfiltration that may have occurred.
You have just scratched the surface of the honey token concept. If you are interested in learning more in depth, I highly recommend Chris Sanders’ book Intrusion Detection Honeypots, which is an excellent resource.
Special thanks to:
@Ofer Shezaf for reviewing this post
@Chris Sanders for inspiration and information on the topic of Honey Tokens
by Contributed | Dec 8, 2020 | Azure, Microsoft, Technology
Initial Update: Tuesday, 08 December 2020 13:03 UTC
We are aware of issues within Log Analytics and are actively investigating. Some customers may experience data access issues in the Australia South East region.
- Work Around: None
- Next Update: Before 12/08 17:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.
-Sandeep
by Contributed | Dec 7, 2020 | Azure, Microsoft, Technology
The SAP on Azure group is excited to announce the preview of the SAP on Azure Deployment Automation Framework. This introduces an extensible framework that modularly addresses the complexities of running SAP on Azure.
Some of the largest enterprises in the world currently run their SAP solutions on Microsoft Azure. Since these SAP applications are mission critical, even a brief delay or disruption of service can have a significant business impact on an organization.
Today the journey to deploying an SAP system is a manual process. This can quickly become costly due to divergence in resources and configuration, or discrepancies introduced through human error. To reduce this impact, improve consistency, and reduce lead times, we are approaching the SAP deployment as an opportunity to define the SAP Infrastructure as Code (IaC) and capture the configuration activities as Configuration as Code (CaC).
To help our customers effectively deploy their infrastructure for SAP on Azure repeatably and consistently, we are providing an IaC (Infrastructure as Code) and CaC (Configuration as Code) code repository that utilizes industry-leading offerings such as Terraform and Ansible, respectively.
Key Capabilities of SAP on Azure Deployment Automation Framework
- Deploy multiple SAP SIDs consistently
- Securely managed keys with Key Vault
- Deploy and configure for High Availability (HA)
- IaaS – Infrastructure as Code (IaC)
- Logically partitionable by environment, region, and virtual network
- Availability Zone support where appropriate
- SAP HANA and AnyDB configuration support
- Distributed application tier deployment
- Configuration – Configuration as Code (CaC)
- SAP HANA database install
- Pacemaker configuration for SAP HANA with HSR
Benefits of SAP on Azure Deployment Automation Framework
Reviewable code: Delivering the tools as IaC and CaC via open source allows organizations and teams to preview the definition of what Azure resources will be created and how the VM OS will be configured. In the world of DevOps, this allows the code to be approved prior to execution.
Deployment consistency: IaC and CaC allow not only the IaaS to be deployed consistently and repeatably, but also the post-deployment system administration activities to follow a procedural set of steps that is consistent and repeatable. Both tools function idempotently, which becomes part of drift detection and correction.
Configurable: The IaC and CaC modules provide standard functionality. However, the functionality can easily be configured through input parameters. Changes delivered over time will enhance and extend functionality. Some examples of supported configurations include: the number of application servers, high availability in the database and/or application tier, enabling/disabling of either the database or application tier, overriding the default naming standard, and the ability to bring some of your own resources in certain instances.
Drift detection and correction: With tools that apply the IaC and CaC idempotently, we can detect when the deployed resources have drifted from the state defined by the tools. When drift is detected, you may choose to apply the desired state to resolve it.
Strategy
We have chosen to take an open source approach to establish a framework of Infrastructure as Code (IaC) and Configuration as Code (CaC). This framework provides the structure that allows an E2E workflow to be executed on by industry leading automation tools and is easily extendable.
- Terraform – the swiss army knife of IaC tools. It is not only idempotent but also completely cloud-agnostic, and it helps you tackle large infrastructure for complex distributed applications. Terraform automation is orchestrated in varying degrees, with the focus on the core plan/apply cycle.
- Ansible – Provides a “radically simple” IT automation engine. It is designed for multi-tier deployments and uses no agents. Ansible is a strong fit for configuration management, application deployment, and intra-service orchestration by describing how all the systems inter-relate. Ansible is one of the more flexible CaC tools on the market right now.
At this stage of the release, the IaC and CaC are offered as BYOO (Bring Your Own Orchestration). This means that you provide the governance around the execution of the automation. We provide a framework workflow that can function out of the box with a defined set of manual executions, and it can also fit into a more mature, customer-provided orchestration environment.
Vision
The SAP on Azure Automation Framework over time will automate many more tasks than just deployment. We plan to continuously improve this framework to meet all your deployment automation needs and expand support for more infrastructure and post-deployment configurations. Stay tuned for updates.
Pricing and Availability
The tools being developed are meant to accelerate customers’ adoption of Azure for SAP deployments. As such, these tools are offered free of charge.
Learn more
To learn more about the product, check out the GitHub repository and documentation in the preview branch at: https://github.com/Azure/sap-hana/tree/beta/v2.3
To get started, we have provided bootstrapping instructions and a self-paced workshop to deploy the IaaS for one or more SAP system deployments.
Feedback
We plan to continuously improve this framework to meet all your deployment automation needs. We welcome all feedback and can be reached at: sap-hana@microsoft.com
by Contributed | Dec 7, 2020 | Azure, Microsoft, Technology
I was helping a friend earlier today with their Azure Synapse Studio CI / CD integration. They had followed our Docs page Source control in Azure Synapse Studio and then they shared errors they were seeing in their release pipeline during deployment.
We took a step back to discuss what they wanted to do, and it looked like they were too far in the weeds for ADO. So I walked through creating an Azure DevOps project, connecting Git to my Azure Synapse Studio, and then creating a branch and pushing some changes. We’ll push changes in a follow-up blog post; today we cover the basics.
First let’s navigate to Projects – Home (azure.com). We will create a New Project and title it Azure Synapse Studio CI CD. I’m going to mark this repo private because it’s just for us.

Now I will click on the Repos menu.

Next I will go to the bottom of the page. I want to select Initialize main branch with a README or gitignore. I will click Initialize.

At this point I have a Repo that is initialized.

Now we can connect this to our Azure Synapse Studio. Let us travel over to https://web.azuresynapse.net/ and log into our Azure Synapse Studio. After we log in we need to navigate to the Manage screen. If you are not on the Git configuration page, navigate there.

Next we want to click on Set up code repository. You can select Azure DevOps Git or GitHub. For this blog we will select Azure DevOps Git. Then select your organization’s Azure Active Directory tenant. *A quick side note: make sure the AAD account you are using to connect to Azure DevOps is the same account that has permissions to your Azure Synapse Studio workspace.
Then click Continue.

Select the Azure DevOps Account that our organization is using. The Project and Git repository name are the same, and are the Project Name we created earlier.
My collaboration branch is main, my Publish branch is workspace_publish, my Root folder is the default, I have checked import existing resources to repository.
As this is my initial commit I want to commit this to my main branch.
Then I click Apply.
*Another note: your company will have a DevOps environment, and specific rules on how you want things to connect. If I’m doing anything that makes you scream from a developer philosophy, please find me on Twitter under BuckWoody_MSFT …. also don’t tell Buck I did this …. I’m not Buck.

At this point in time your Azure DevOps Git repo should be connected.

If we go to our Azure DevOps Repo we should see that it is populated with objects from our Azure Synapse Studio.

Back in Azure Synapse Studio, we can navigate to the Develop pane and create a new branch to ensure any changes we make will not be automatically deployed against the main version of our Azure Synapse Studio workspace.

This is what we will tackle next time.
by Contributed | Dec 7, 2020 | Azure, Microsoft, Technology
After the announcement of the integration of Azure Data Explorer into the recently launched Azure Purview service, this blog post shows how to use ADX as a data source in Purview and highlights the new features in a few use cases that can be helpful in the context of ADX.
Read the announcement blog post about our integration with Purview here.
Setup: How to connect ADX as a data source in Azure Purview
Azure Data Explorer is available as a data source to scan within Azure Purview Studio. Just register a new source, choose “Azure Data Explorer (Kusto)”, and enter the corresponding cluster connection information.

In order to connect to an ADX cluster, you need to provide Azure Purview with a service principal that has “AllDatabaseViewer” rights on the target cluster/databases, supplied through an Azure Key Vault.
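If you prefer to scope the service principal at the database level rather than across the whole cluster, a Kusto control command along the following lines grants it viewer rights. This is only a sketch; the database name, application ID, and tenant are placeholders you would replace with your own values.

.add database MyDatabase viewers ('aadapp=11111111-1111-1111-1111-111111111111;contoso.com') 'Azure Purview scanner'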
After that, register ADX as a data source in Azure Purview Studio. Go through the process of managing the credentials, registering a new data source and configuring the first scan, as described here.
Then select all the databases that should be included in this specific scan. For example, you can set up a full scan of all databases that runs monthly or weekly, and set up another scan on specific databases that runs daily to keep the information in the data catalog up to date (full and incremental scans).
Next you can specify which rule set you want to apply for this scan. There is a system default for every data source, which includes all default classifications.

The power of custom classifications and scans
You can create your own scan rule sets to include only the scan rules that are relevant for your data or include your own custom rules that you defined in Purview.
In the context of Azure Data Explorer, this can be very useful for a variety of use-cases:
Custom rules can be helpful, for example, if you want to identify custom part numbers that you might ingest with your IoT telemetry or log data, or other patterns that help you identify certain attributes in tables specific to your business domain. These classifications then become attributes of the tables/columns in Purview, and users can search the data catalog, for example, for tables containing data about specific device families, product lines, or production processes.
Classifications can also be applied manually after a scan, directly within Azure Purview on the relevant data assets, such as a table or column. For example, you could highlight specific columns in a dataset that you know are used to measure customer interaction: when a feature is used, and how long it takes a customer to get there. In combination with the business glossary in Purview, these additional attributes can significantly improve the search and discovery experience for many user groups within a company. Business analysts can leverage this customer interaction data and see whether there is a correlation with any metric they might use.
If you give good thought to which classifications and business terms might be useful, you can make your IoT, factory floor, and device telemetry data much more accessible, democratizing access to data that has historically been siloed within manufacturing systems.
You can always look at an overview of the recent scans in the run history:

Browsing the data catalog: ADX data assets
After the scan(s) have finished, you can start browsing the newly discovered data assets. You can do that either by directly searching for a specific term (part of a table name, classification, etc.) or by clicking “Browse assets” in the main menu.
In the case of ADX, an overview of the registered data assets can look something like this:

Looking at an ADX table, for example, we get detailed information about which database it belongs to and about the cluster this database is running on. We also see when it was last changed, we can add a description to the table, and we can see or add classifications for easier discovery, as well as some ADX-specific properties like a potential folder or the docstring.
As mentioned, you can also add descriptions and classifications and associate terms from the business glossary with every ADX data asset, visible on the bottom right here:

The scans also pick up the table schema, showing all columns and their respective data types. And while giving the data consumers in your company the ability to discover data easily is very important, you can also add a contact person here, an owner people can talk to to learn more about the data asset, how to get access, and how to use it.
Visualizing the data flows – data lineage information
A very powerful piece of information is located in the “Lineage” tab. This overview shows you the data flow between the assets in your data catalog. In the context of ADX this currently means that every data movement you defined using Azure Data Factory that involves an ADX table will be visualized in this tab.

In addition to the lineage information automatically inferred from Azure Data Factory, we saw that many customers also use custom scripts for data ingestion, or Jupyter notebooks on a Spark cluster, or they ingest data programmatically using one of the ADX SDKs provided. In that case, to make these data flows transparent as well, you can use the well-documented Apache Atlas REST API to create custom objects within Azure Purview.
You can find all the details about using the Azure Purview REST API here, including a sample postman collection to get you started.
An example: Visualize custom data flows within Azure Purview lineage
One feature within ADX that many customers use very frequently is the concept of update policies. When using them, you essentially chain two or more tables together and transform data between them in some form. These data transformations can be linear, e.g. from Table A -> Table B -> Table C, or they can “fan out”, for example filtering data per device family, e.g. from Table A -> Table B as well as Table A -> Table C.
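For reference, an update policy is defined on the destination table with a short JSON policy document. The sketch below shows roughly what that looks like; the table and function names are purely illustrative.

.alter table TableB policy update
@'[{"IsEnabled": true, "Source": "TableA", "Query": "TransformAtoB()", "IsTransactional": false, "PropagateIngestionProperties": false}]'

Note how the policy carries exactly the kind of attributes (IsEnabled, Query, IsTransactional, PropagateIngestionProperties) that we will want to surface in Purview below.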
This means that update policies are a very good candidate for visualization in the lineage tab. To achieve that, we use the Atlas API to create a new entity type in Purview, derived from “Process” and called adx_update_policy, carrying the attributes an update policy has within ADX.
REST API Call (POST) against /api/atlas/v2/types/typedefs of our Purview instance:
{
  "entityDefs" : [
    {
      "superTypes" : [ "Process" ],
      "category" : "ENTITY",
      "name" : "adx_update_policy",
      "description" : "a type definition for azure data explorer update policies",
      "typeVersion" : "1.0",
      "attributeDefs" : [
        {
          "name" : "IsEnabled",
          "typeName" : "string",
          "isOptional" : true,
          "cardinality" : "SINGLE",
          "valuesMinCount" : 1,
          "valuesMaxCount" : 1,
          "isUnique" : false,
          "isIndexable" : false
        },
        {<additional attributes..>}
      ]
    }
  ]
}
After the creation of this type, all we need to do to use it in the lineage tab is to link two tables together using our newly created update policy object. In order to do that, we first need to fetch the GUIDs of the corresponding ADX tables from Azure Purview, so that the tool is able to uniquely identify them. For the sake of this blog post we assume that we looked them up manually, but you can of course also discover them via a REST API call (hint: /api/atlas/v2/search/advanced).
After we have all the information, the body of the API call could look like this:
REST API Call (POST) against /api/atlas/v2/entity/bulk of our Purview instance:
{
  "entities": [
    {
      "typeName": "adx_update_policy",
      "createdBy": "admin",
      "attributes": {
        "qualifiedName": "adx_update_policy",
        "uri": "adx_update_policy",
        "name": "adx_update_policy",
        "description": "transforms data between source and target table",
        "IsEnabled": "true",
        "Query": "KQLTransformationQuery()",
        "IsTransactional": "false",
        "PropagateIngestionProperties": "false",
        "inputs": [
          {
            "guid": "<Purview ID of the source table>",
            "typeName": "azure_data_explorer_table"
          }
        ],
        "outputs": [
          {
            "guid": "<Purview ID of the target table>",
            "typeName": "azure_data_explorer_table"
          }
        ]
      }
    }
  ]
}
The result then can look something like this:

Of course these are only basic examples, and they involve some scripting as well as orchestration of API calls and other automation. But you can be sure that we are hard at work with the Purview team to extend the automated lineage information to more data ingestion and analytics scenarios relevant for ADX. Stay tuned for more.
by Contributed | Dec 7, 2020 | Azure, Microsoft, Technology
We are excited to announce the integration of Azure Data Explorer with Azure Purview today.
The new ADX capabilities will create more transparency on how data is delivered into your data lake and how this data is structured.
This data governance tool is built for data consumers such as analysts, business users, and data scientists. It is built to help you discover your data sources, data pipelines, and data structures, how to access the data, and whether it is classified (for example, if special privacy regulations need to be fulfilled in order to work with that data).

Here are some examples of key Azure Data Explorer scenarios leveraging the built-in custom rule functionality of Azure Purview:
- Discover data flow for each data component
- Leverage Azure Purview metadata, lineage and custom classifications/rules
- Easily answer questions about your data structure, such as “Does our data contain PII?” and “Is my PII data processed and isolated properly?”
- Leverage Azure Purview classifications, lineage
- Confidently use all relevant data sources for dashboard and queries
- Leverage Azure Purview upstream lineage, classifications/business glossary

The Azure Data Explorer and Azure Purview product teams have worked together from the beginning. The first version of the connector was released in private preview and provided valuable insights from the first set of customers. Today, based on that feedback, we are thrilled to provide a native connector for ADX that is ready for public preview.
Read more on how to register and scan Azure Data Explorer
Read more on Azure Data Explorer integration into Azure Purview