by Contributed | Nov 17, 2020 | Azure, Microsoft, Technology
This article is contributed. See the original author and article here.
Thanks to Preeti Krishna and Alp Babayigit for the great help.
We have published several blog posts on how Azure Sentinel can be used side-by-side with third-party SIEM tools, leveraging cloud-native SIEM and SOAR capabilities to forward enriched alerts.
As enterprises consume more and more cloud services, the need for a cloud-native SIEM grows. This is where Azure Sentinel comes into play, with the following advantages:
- Easy collection from cloud sources
- Effortless infinite scale
- Integrated automation capabilities
- Continually maintained cloud and on-premises use cases enhanced with Microsoft TI (Threat Intelligence) and ML (Machine Learning)
- Github community
- Microsoft research and ML capabilities
- No need to send cloud telemetry downstream to an on-premises SIEM
There are several best-practice integration options for operating Azure Sentinel side-by-side with another SIEM:
| | Alerts | Events |
| --- | --- | --- |
| Upstream to Sentinel | CEF, Logstash, Logic Apps, API | CEF, Logstash, API |
| Downstream from Sentinel | Microsoft Graph Security API, PowerShell, Logic Apps, API | API, PowerShell |
In this blog post we want to focus on how Azure Sentinel can consume security telemetry data directly from a third-party SIEM like Splunk.
Why share this scenario? In some cases it makes sense to correlate data from third-party SIEMs with the data sources available in Azure Sentinel; Azure Sentinel can also be used as a single pane of glass to centralize all incidents (generated by different SIEM solutions); and finally, you will probably have to run side-by-side for a while, until your security team is comfortable working in the new SIEM (Azure Sentinel).
The diagram below shows how Splunk data can be correlated into Azure Sentinel, providing a consolidated SIEM view.

Scenario description:
When you add data to Splunk, the Splunk indexer processes it and stores it in a designated index (by default the main index, or one that you identify). Searching in Splunk means using the indexed data to create metrics, dashboards and alerts.
Let's assume that your security team wants to collect data from the Splunk platform and use Azure Sentinel as its centralized SIEM. There are different options to implement this scenario, but the one described here relies on data stored in a Splunk index, with a scheduled custom alert pushing that data to the Azure Sentinel API.
To send data from Splunk to Azure Sentinel, my idea was to use the HTTP Data Collector API (more information can be found here). You can use the HTTP Data Collector API to send log data to a Log Analytics workspace from any client that can call a REST API.
All data in the Log Analytics workspace is stored as a record with a particular record type. You format your data to send to the HTTP Data Collector API as multiple records in JSON. When you submit the data, an individual record is created in the repository for each record in the request payload.
Based on the Splunk Add-on Builder (here), I created an add-on that triggers an action based on an alert in Splunk. You can use alert actions to define third-party integrations (like Azure Sentinel) or to add custom functionality. Splunk Add-on Builder uses Python code to create the alert action; here is the code I used within the add-on: https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api#python-3-sample
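For illustration, below is a condensed sketch of what that sample does: it builds the SharedKey signature and posts a JSON payload to the Data Collector endpoint. The workspace ID, key and record fields are placeholders; refer to the linked sample for the authoritative version.

```python
import base64
import datetime
import hashlib
import hmac
import json

import requests

# Placeholder values -- use your own workspace ID and primary key.
customer_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # Log Analytics Workspace ID
shared_key = "<primary key>"                            # Log Analytics Primary Key
log_type = "Splunk_Audit_Events"                        # table becomes Splunk_Audit_Events_CL

def build_signature(date, content_length):
    """Build the HMAC-SHA256 SharedKey authorization header."""
    string_to_hash = (f"POST\n{content_length}\napplication/json\n"
                      f"x-ms-date:{date}\n/api/logs")
    decoded_key = base64.b64decode(shared_key)
    encoded_hash = base64.b64encode(
        hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                 digestmod=hashlib.sha256).digest()).decode()
    return f"SharedKey {customer_id}:{encoded_hash}"

def post_data(records):
    """POST a list of dicts as JSON records to the HTTP Data Collector API."""
    body = json.dumps(records)
    rfc1123date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
    uri = (f"https://{customer_id}.ods.opinsights.azure.com"
           "/api/logs?api-version=2016-04-01")
    headers = {
        "Content-Type": "application/json",
        "Authorization": build_signature(rfc1123date, len(body)),
        "Log-Type": log_type,        # custom log name; Azure appends _CL
        "x-ms-date": rfc1123date,
    }
    response = requests.post(uri, data=body, headers=headers)
    response.raise_for_status()      # HTTP 200 means the records were accepted

# Each dict becomes one record in the custom table.
post_data([{"user": "admin", "action": "search", "info": "granted"}])
```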
To make this simple, I have created an add-on for you to use; you just need to install it on your Splunk platform.
Let’s start the configuration!
Preparation & Use
The following tasks describe the necessary preparation and configuration steps.
- Onboard a Splunk instance (latest release); guidance can be found here
- Get the Log Analytics workspace parameters: Workspace ID and Primary Key (from here)
- Install the Azure Sentinel Add-On for Splunk: it can be found here
Onboard Azure Sentinel
Onboarding of Azure Sentinel is not part of this blog post, however required guidance can be found here.
Add-on Installation in Splunk Enterprise
In the Splunk home screen, in the left sidebar, click "+ Find More Apps" in the apps list, or click the gear icon next to Apps and select Browse more apps.

Search for Azure Sentinel in the text box, find the Azure Sentinel Add-On for Splunk and click Install.
After the add-on is installed, a restart of Splunk is required; click Restart Now.
Configure the Azure Sentinel add-on for Splunk
Refer to the Define Real-Time Alerts documentation to set up Splunk alerts that send logs to Azure Sentinel. To validate the integration, the "_audit" index is used as an example; this repository stores events from the file system change monitor, auditing, and all user search history. You can query the data by using index="_audit" in the search field, as illustrated below.

Then use a scheduled or real-time alert to monitor events or event patterns as they happen. You can create real-time alerts with per-result triggering or rolling time window triggering. Real-time alerts can be costly in terms of computing resources, so consider using a scheduled alert, when possible.
Set up alert actions, which can help you respond to triggered alerts. You can enable one or more alert actions. Select the "Send to Azure Sentinel" action, which appears after you install the Azure Sentinel add-on, as shown in the diagram below.

Fill in the required parameters as shown in the diagram below:
- Customer_id: Azure Sentinel Log Analytics Workspace ID
- Shared_key: Azure Sentinel Log Analytics Primary Key
- Log_Type: Azure Sentinel custom log name
Note: These parameters are required and will be used by the application to send data to Azure Sentinel through the HTTP Data Collector API.
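For reference, here is roughly what the add-on's alert action does with these parameters. This is a simplified sketch based on the Add-on Builder's generated modular alert skeleton (process_event and the helper methods come from that scaffolding); send_to_sentinel is a hypothetical wrapper around the Data Collector POST shown earlier.

```python
# Simplified sketch of the Add-on Builder generated alert action.
# helper.get_param() reads the values entered in the alert action UI.

def process_event(helper, *args, **kwargs):
    customer_id = helper.get_param("customer_id")   # Workspace ID
    shared_key = helper.get_param("shared_key")     # Primary Key
    log_type = helper.get_param("log_type")         # custom log name

    # Each result of the triggering search becomes one JSON record.
    events = [dict(event) for event in helper.get_events()]

    # Hypothetical wrapper around the HTTP Data Collector call above.
    send_to_sentinel(customer_id, shared_key, log_type, events)
    return 0
```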


View Splunk Data in Azure Sentinel
The logs will go to a custom Azure Sentinel table called 'Splunk_Audit_Events_CL', as shown below. The table name aligns with the log name provided in Figure 4 above. It can take a few minutes for events to become available.

You can query the data in Azure Sentinel using Kusto Query Language (KQL) as shown below.
Splunk_Audit_Events_CL | summarize count() by user_s, action_s | render barchart

As mentioned at the beginning of this blog, Azure Sentinel can be used as single pane of glass to centralize all incidents (generated by different SIEM solutions).
When a correlation search, included in Splunk Enterprise Security or added by a user, identifies an event or pattern of events, it creates an incident called a notable event. Correlation searches filter the IT security data and correlate across events to identify a particular type of incident (or pattern of events), and then create notable events.
Correlation searches run at regular intervals (for example, every hour) or continuously in real time, and search events for a particular pattern or type of activity. The notable events are stored in a dedicated notable index. You can import all notable events into Azure Sentinel using the same procedure described above.

The results will be added to a custom Azure Sentinel table called ‘Splunk_Notable_Events_CL’ as shown below.

You can easily query Splunk incidents in Azure Sentinel:
Splunk_Notable_Events_CL
| extend Origin_Time = extract("([0-9]{2}/[0-9]{2}/[0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2})", 0, orig_raw_s)
| project TimeGenerated = todatetime(Origin_Time), search_name_s, threat_description_s, risk_object_type_s

Splunk SPL to KQL
As mentioned above, you will probably have to run side-by-side for a while, until your security team is comfortable working in the new SIEM (Azure Sentinel). One challenge is how to migrate rules and searches from Splunk to Azure Sentinel.
Azure Sentinel uses the Kusto Query Language (KQL) to query data; Splunk uses the Search Processing Language (SPL). For example, the SPL search index="_audit" | stats count by user, action roughly corresponds to the KQL query Splunk_Audit_Events_CL | summarize count() by user_s, action_s used earlier.
You can consider this project to transform the queries: https://uncoder.io

Uncoder.io is SOC Prime's free tool for SIEM search language conversion, enabling event schema resolution across platforms.
Uncoder.IO is the online translator for SIEM saved searches, filters, queries, API requests, correlation and Sigma rules, built to help SOC analysts, threat hunters and SIEM engineers. Serving as one common language for cyber security, it allows blue teams to break the limits of being dependent on a single tool for hunting and detecting threats.
Also consider this nice initiative from Alex Teixeira:
https://github.com/inodee/spl-to-kql
Summary
We just walked through the process of standing up Azure Sentinel Side-by-Side with Splunk. Stay tuned for more Side-by-Side details in our blog channel.
by Contributed | Nov 17, 2020 | Azure, Microsoft, Technology
Abstract:
With DevOps, we now deploy production code far more frequently, so our security practices need to evolve too.
In this session we will be covering DevSecOps, which is about making security an important part of the DevOps process, and see how to implement a secure and compliant development process in our Azure DevOps pipelines so we can run security tests during Continuous Integration (CI) builds and Continuous Deployment (CD) releases.
Webinar Date & Time : December 11, 2020. Time 4.00 PM IST (10.30 AM GMT)
Invite : Download the Calendar Invite
Speaker Bio:

Ahetejazahmad Khan is working as a Support Engineer on the Azure DevOps team, India. His day-to-day role is to enable DevOps engineers around the world to achieve more, which provides him a unique view of technology and customers. He partners with field teams and product groups to help our customers and developers. He currently focuses on Azure DevOps technologies. He is Microsoft Certified as a DevOps Support Specialist, Azure Fundamentals and Azure Administrator Associate.

Avina Jain is a Support Engineer with the Azure DevOps team in India. Her role gives her the opportunity to work with engineers and help them address their business goals and strategic solution requirements. She is always focused on providing a positive experience to customers.
Devinar 2020
by Contributed | Nov 17, 2020 | Azure, Microsoft, Technology
Abstract:
In the present tech world, we are always on the lookout for better options to get an environment up and running quickly while minimizing costs, along with various other features.
Get introduced to the world of Azure DevTest Labs and Azure Classroom Labs and the various features they have to offer, both for testing/development environments and for students and universities.
From cost management to the quick implementation of a course lab, this session is the perfect way to onboard to Azure Lab Services, learn how to tailor them to suit your scenario, and get any doubts you may have clarified.
Webinar Date & Time : December 10, 2020. Time 4.00 PM IST (10.30 AM GMT)
Invite : Download the Calendar Invite
Speaker Bio:

Siva is an Escalation Engineer with the Azure DevOps and Azure Lab Services team at Microsoft. In Azure Lab Services, he helps customers from initial setup through troubleshooting complex issues, and works closely with the product team to improve the customer experience. In Azure DevOps, he specializes in moving on-premises instances to the cloud, CI/CD, and open-source integrations. His areas of interest include staying up to date on DevOps technologies, tools and concepts, and evangelizing Azure Lab Services.

Nitesh is an ardent engineer with 8 years of experience across different Microsoft technologies such as Azure DevOps, Azure IaaS and PaaS, Microsoft Intune, SCCM and ADDS. At Microsoft he works extensively with different DevOps tools to design and implement infrastructure for premier customers, supports them, and resolves issues within SLAs. He also helps customers with Azure Lab Services. Outside of work he loves playing snooker, cricket and badminton.
Devinar 2020
by Contributed | Nov 17, 2020 | Azure, Microsoft, Technology
Abstract:
Containers run the world today: many applications now run on containers, so seamless deployment of containers has become very important.
This session offers you an introduction to building and deploying a Docker image of an ASP.NET web application using an Azure DevOps pipeline.
The Docker images will be pushed to the Azure Container Registry and run using Azure Web App for Containers.
Webinar Date & Time : December 9, 2020. Time 2.00 PM IST (8.30 AM GMT)
Invite : Download the Calendar Invite
Speaker Bio:

Ramprasath works as a Support Engineer on the Azure DevOps team at Microsoft; his role gives him opportunities to interact with Azure DevOps customers across the world and help them with their issues. He is interested in web application development and IoT, and in his free time works on hobby projects on the Raspberry Pi.

Krishna is currently working as a Support Engineer for the Microsoft Azure DevOps team. In his daily activities, he interacts with customers worldwide and enables them to use Azure DevOps effectively to build scalable and robust solutions. He is passionate about cloud technologies, specifically the Azure stack, and likes to work on infrastructure as code along with containers.
Devinar 2020
by Contributed | Nov 17, 2020 | Azure, Microsoft, Technology
Abstract:
Azure virtual machine scale set agents are a new form of self-hosted agents that can be auto scaled to meet customer demands.
This elasticity reduces the need to run dedicated agents all the time. Unlike Microsoft-hosted agents, customers have flexibility over the size and the image of machines on which agents run.
In this session we will introduce you to Azure DevOps VMSS agents and offer insights into their setup, operation, and basic troubleshooting.
Webinar Date & Time : November 30, 2020. Time 4.00 PM IST (10.30 AM GMT)
Invite : Download the Calendar Invite
Speaker Bio :

Muni Karthik currently works as a Support Engineer at Microsoft, India. His day-to-day responsibilities include helping customers overcome the challenges they face with the different Team Foundation Server and Azure DevOps services. His interests are exploring the latest DevOps concepts and providing a good support experience to customers. He partners with field teams and product engineering groups to help our customers and developers.

Kirthish Kotekar Tharanatha works as a Support Escalation Engineer on the Azure DevOps team at Microsoft, helping customers resolve the most complex Azure DevOps issues day to day. His areas of interest include the CI and CD parts of Azure DevOps and the integration of various other open CI/CD tools with Azure DevOps.
Devinar 2020
by Contributed | Nov 16, 2020 | Azure, Microsoft, Technology
What is covered in this blog post?
Recently I went through a cost optimization exercise with my customer, evaluating Azure Synapse Reserved Instance pricing, and this blog post documents the learnings from that exercise, as I think they will be beneficial for others as well. The blog post covers the following main points:
- Shares additional perspective, with examples, on cost optimization for Azure Synapse using Reserved Instance pricing when you plan to run an Azure Synapse instance at variable DWU levels.
- Shares an example Excel spreadsheet (attached to this post) which can be used to play around with usage patterns (variable DWU levels) and cost estimates (please refer to the Pricing page for the most up-to-date pricing information; the spreadsheet uses Central US as an example).
- Summarizes the main aspects of how Azure Reserved Instance pricing works (along with links to public documentation) in case you are not familiar with it.
RI is the abbreviation I will use at times to refer to Reserved Instance pricing.
Background
The main benefit of the cloud environment is its elastic scale: you can scale up and down as per your needs to save costs. My customer wanted to run their Synapse instance at DWU level 7500 a third of the time, DWU level 6000 a third of the time and DWU level 3000 a third of the time. So the questions to address were: does it make sense to purchase Synapse Reserved Instance pricing, and how much should be purchased: 3000, 6000, 7500 or something in the middle? Such requirements are not uncommon; in the example scenario:
- Higher compute (DWU 7500) may be needed during data loads
- DWU 6000 is needed during peak usage hours
- DWU 3000 is sufficient during off-peak usage hours
Before going into the cost analysis examples from the customer scenario, I will first summarize how Azure Synapse Reserved Instance pricing works.
Summary of how Reserved Instance pricing works
Azure Synapse RI pricing is very flexible and a very good cost-saving measure. I will be bold enough to state that if you are running a production workload which you don't plan to sunset in the near term, Synapse Reserved Instance pricing will most likely make sense for you.
I am summarizing a few important points about how Azure Synapse Reserved pricing works, but you can read more in the official documentation pages: Save costs for Azure Synapse Analytics charges with reserved capacity and How reservation discounts apply to Azure Synapse Analytics.
- 1-year Reserved Instance pricing discount is approximately 37%
- 3-year Reserved Instance pricing discount is approximately 65%
- Synapse charges are a combination of compute and storage; Reserved Instance pricing is only applicable to compute, not storage
- Compute charges are calculated as multiples of DWU 100, i.e. DWU 1000 is 10 units of DWU 100, DWU 2000 is 20 units of DWU 100, etc.
- When an Azure Synapse Reserved Instance is purchased, you are basically purchasing a discounted rate for N units of DWU 100 under a 1- or 3-year commitment
- The Azure Synapse Analytics reserved capacity discount is applied to running warehouses on an hourly basis. If you don't have a warehouse deployed for an hour, the reserved capacity is wasted for that hour; it doesn't carry over. But I will share some examples here to put you at ease: the discount is so big that even with some wastage (an instance running at a lower SKU than the RI purchased) there is a good chance you will still come out ahead.
- The best part about the N units of RI purchased is that they can be shared between multiple instances within the same subscription or across subscriptions (customers need to decide on the scope of the RI). As an example, if you purchase 10 units, the discount can get applied to both your Dev and Prod instances.
- Simple example for an RI purchased for DWU 1000:
- Say Prod runs at DWU 1000 during peak hours but is brought down to DWU 500 in off-peak hours.
- Say you have another instance which runs at DWU 500, but its future is not clear enough for you to commit to Reserved Instance pricing, or maybe it is a Dev instance which moves between DWU 500 and 200, or is even paused at times.
- When Prod runs at DWU 1000, you pay the discounted rate for Prod and the pay-as-you-go rate for the Dev instance; but when Prod is at DWU 500, you pay the discounted rate for both of your Synapse instances.
Costs Analysis Discovery/Learnings
The main objective of the cost analysis exercise was to determine whether purchasing Reserved Instance pricing makes sense when Azure Synapse will be running at variable DWU levels.
In scenarios where you will be using Azure Synapse at different DWU levels, the bare-minimum goal (or success criterion) is that the Reserved Instance should not cost more than what you would pay under the pay-as-you-go cost model; in other words, the weighted-average pay-as-you-go cost of your usage mix should be at least the hourly cost of the reserved units.
Example 1
1 Year RI, 10 Units
Low SKU – DWU 500
High SKU – DWU 1000
With a 1-year RI you will still come out ahead if you run at the purchased RI SKU (DWU 1000) at least 30% of the time and at 50% of the purchased RI SKU (DWU 500) the remaining 70% of the time.
Example 2
1 Year RI, 30 Units
Low SKU – DWU 500
High SKU – DWU 3000

With a 1-year RI you will still come out ahead if you run at the purchased RI SKU (DWU 3000) at least 56% of the time and at 17% of the purchased RI SKU (DWU 500) the remaining 44% of the time.
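To sanity-check the two examples above, here is a small back-of-the-envelope calculation. It is a minimal sketch assuming a normalized pay-as-you-go rate of 1.0 per DWU 100 unit per hour and the approximate 37%/65% discounts quoted earlier; plug in the actual rates from the pricing page for your region.

```python
# Back-of-the-envelope RI vs pay-as-you-go comparison.
# Prices are normalized: pay-as-you-go = 1.0 per DWU 100 unit per hour.
PAYG_RATE = 1.0
RI_1Y_RATE = 1.0 * (1 - 0.37)   # approx 37% discount
RI_3Y_RATE = 1.0 * (1 - 0.65)   # approx 65% discount

def hourly_cost(usage_mix, ri_units=0, ri_rate=RI_1Y_RATE):
    """usage_mix: list of (dwu_level, fraction_of_time) tuples.
    Reserved units are billed every hour; any usage above the
    reservation is charged at the pay-as-you-go rate."""
    cost = ri_units * ri_rate
    for dwu, fraction in usage_mix:
        units = dwu / 100
        overage = max(units - ri_units, 0)   # units not covered by the RI
        cost += fraction * overage * PAYG_RATE
    return cost

# Example 1: DWU 1000 30% of the time, DWU 500 the other 70%.
mix1 = [(1000, 0.30), (500, 0.70)]
print(hourly_cost(mix1))                # pay-as-you-go: 6.5
print(hourly_cost(mix1, ri_units=10))   # 10-unit 1-year RI: 6.3

# Example 2: DWU 3000 56% of the time, DWU 500 the other 44%.
mix2 = [(3000, 0.56), (500, 0.44)]
print(hourly_cost(mix2))                # pay-as-you-go: 19.0
print(hourly_cost(mix2, ri_units=30))   # 30-unit 1-year RI: 18.9
```

In both cases the reserved hourly cost is just below the weighted pay-as-you-go cost, which is exactly the break-even behavior the examples describe.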
Example 3
3 Year RI, 75 Units
DWU 3000 34% of the time
DWU 6000 33% of the time
DWU 7500 33% of the time

- When 1- or 3-year Reserved Instance pricing is purchased for DWU 7500 (75 units), even when you run Synapse at a lower scale than the RI purchased, you will still come out ahead because the discount is so big
- The 3-year RI is so much cheaper that maybe you don't want to do this scaling up and down at all, and instead run the Synapse instance at DWU 7500 all the time
Conclusions
- The bottom line is that even if you are losing the hourly discounted rate for partial hours, you can still come out ahead because the discount is so big (the 3-year RI discount is much bigger than the 1-year one).
- Lastly, it is important to reiterate that, thanks to the flexible nature of Azure Reserved Instance pricing, you can have other Azure Synapse instances running in the same or a different Azure subscription make use of the discounted price whenever there is unused reserved capacity on your main Azure Synapse instance for which the RI was purchased.
Azure Synapse Scaling Sample Script
In case you are planning to scale your Synapse instances, I wanted to add a link to this sample script for completeness: https://github.com/microsoft/sql-data-warehouse-samples/blob/master/samples/automation/ScaleAzureSQLDataWarehouse/ScaleAzureSQLDataWarehouse.ps1. When scaling Azure Synapse, the connection is dropped momentarily, so any running queries will fail. This sample scaling script accounts for active queries before performing the scaling operation. The sample is a little old and I have not verified it, but I plan to validate it in a few days and update this post with the results; it should give a good starting point regardless.
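If you would rather do the same check from Python, here is a minimal sketch of the idea, assuming pyodbc, placeholder server/pool/credential values and SQL authentication; like the sample above, it looks for active queries on the pool before issuing the scale command. Treat it as a starting point, not a hardened script.

```python
import pyodbc

# Placeholder connection details -- replace with your own values.
SERVER = "yourserver.database.windows.net"
POOL = "yourdedicatedsqlpool"   # the Synapse (SQL DW) database to scale
CONN = ("Driver={ODBC Driver 17 for SQL Server};"
        f"Server={SERVER};Database={{db}};UID=youruser;PWD=yourpassword;")

# 1) Check for active queries on the pool: scaling drops connections
#    momentarily, so any running queries would fail.
with pyodbc.connect(CONN.format(db=POOL)) as cn:
    active = cn.execute(
        "SELECT COUNT(*) FROM sys.dm_pdw_exec_requests "
        "WHERE status IN ('Running', 'Suspended') "
        "AND session_id <> SESSION_ID()"   # exclude this very query
    ).fetchval()

# 2) If the pool is quiet, change the service objective (run against master).
if active == 0:
    with pyodbc.connect(CONN.format(db="master"), autocommit=True) as cn:
        cn.execute(f"ALTER DATABASE [{POOL}] "
                   "MODIFY (SERVICE_OBJECTIVE = 'DW3000c')")
else:
    print(f"{active} active request(s); postponing the scale operation.")
```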