Deploying and Managing Azure Sentinel – Ninja style

Deploying and Managing Azure Sentinel – Ninja style

This article is contributed. See the original author and article here.

Back in January 2020, Javier and Philippe wrote a great blog on how to deploy, configure and maintain Azure Sentinel through Azure DevOps with IaC, using the Sentinel API, AzSentinel and ARM templates. We are now several months further on, and more and more functionality has been integrated into AzSentinel. So I decided to create a new Azure DevOps pipeline that covers more than just the "deployment" part. I want to show that pipelines are more than deployment 'tools' and that they need to be implemented the right way, with the right DevOps mindset, for the best result. Or, as I call it in this blog post: Ninja style.


 


If you prefer to skip the reading and get started right away, you can find all the code examples in my GitHub repository and all the steps at the end of this blog post.


 


The story behind DevOps and Pipelines


Before we go deeper into the technical side, I'd first like to explain the idea behind it all: the reason I've invested the time in building AzSentinel and DevOps pipelines. The main reason was to implement the "shift left" way of working (WoW). The term 'shift left' refers to a practice in software development in which teams focus on quality, work on problem prevention instead of detection, and begin testing earlier than ever before. The goal is to increase quality, shorten long test cycles and reduce the possibility of unpleasant surprises at the end of the development cycle, or, worse, in production.


 


The Azure Portal is great, but if you log in and accidentally remove or change, for example, an analytic rule without any testing, approval or four-eyes principle, then you really have a challenge. You will probably only find out something went wrong when you are troubleshooting why nothing happened in the first place. And don't we all know that's way too late…


 


Shifting left requires two key DevOps practices: continuous testing and continuous deployment. Continuous testing involves automating tests and running those tests as early and often as possible. Continuous deployment automates the provisioning and deployment of new builds, enabling continuous testing to happen quickly and efficiently.


 


Azure Sentinel deployment Ninja style


Based on shift left and the DevOps way of working, I made the design below of how I think the process should look. I will explain the design in different parts. But first, let's start with the underlying requirements.


AzureSentinel-Architecture.png


 


Infrastructure as Code


Before we can start implementing shift left for Azure Sentinel, we need to implement an Infrastructure as Code deployment model. Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and even your Azure Sentinel rules and settings) in a descriptive model, using the same versioning that DevOps teams use for source code. Just as the same source code generates the same binary, an IaC model generates the same environment every time it is applied.


 


To be able to manage Azure Sentinel through an IaC model, I built AzSentinel, which functions as a translator. The JSON or YAML files in which you store your analytic rules or hunting rules are translated into the rules you see in the Azure Portal.


 


This is the first step in our journey to shift left. Having our configuration in a descriptive model also gives us the opportunity to analyze and test it, to check whether it is compliant with what we want and how we want it. For example, do you have a naming convention for your rules? Then you can now easily test whether the rules comply with it. Or simply deploy the rules to a dev environment to verify that all the properties are set correctly.
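If you want to try this locally before wiring up any pipeline, a minimal sketch could look like the following. It assumes you have the AzSentinel module, an existing Log Analytics workspace and a rules file such as SettingFiles/AlertRules.json; the subscription id and workspace name are placeholders. The same Import-AzSentinelAlertRule cmdlet is used by the deployment pipeline later in this post.

 

# Minimal local sketch: validate the descriptive model and push it to a dev workspace.
# The subscription id, workspace name and file path below are placeholders.
Install-Module AzSentinel -Scope CurrentUser -Force
Import-Module AzSentinel

# Sign in to Azure (Az.Accounts)
Connect-AzAccount

# Quick syntax check: does the file convert from JSON at all?
Get-Content -Path ./SettingFiles/AlertRules.json -Raw | ConvertFrom-Json | Out-Null

# Translate the descriptive model into Azure Sentinel analytic rules
Import-AzSentinelAlertRule -SubscriptionId '<subscription-id>' `
    -WorkspaceName '<dev-workspace-name>' `
    -SettingsFile ./SettingFiles/AlertRules.json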


 


Repository


Now that we know why and how we store our changes in a descriptive model, it's time to see where and how we store our configuration. For this we use Git. Git is a free and open source distributed version control system, designed to handle everything from small to very large projects with speed and efficiency. For the repository design, I have chosen to implement a four-branch strategy, which I will explain below. A branch represents an independent line of development. Branches serve as an abstraction for the edit/stage/commit process. You can think of them as a way to request a brand new working directory, staging area, and project history. You can read more about this if you like.


repo.png


 


Below is a short description of the three main branches:



  • Master – The master (also called main) branch contains all the configuration that is deployed to our production environment. The code here has passed all the checks in the PR process and has already been deployed to the dev and staging environments. This is also our single source of truth, meaning that what is configured here is equal to what is deployed in Azure.

  • Release – The release branch contains all the configuration that is ready to be deployed to our Sentinel staging environment. The changes have already been tested through our PR pipelines and deployed to the dev environment before being merged here. The Sentinel staging environment usually contains production or pre-production data, so that the changes can be tested against real-world data.

  • Development – The development branch contains all the small changes that are proposed by the engineers. These changes are tested by PR build validation and deployed to the Sentinel dev environment. The changes are only tested to verify that there are no breaking changes or configurations; they are not tested against real-world data.


The three branches above are our 'standard' branches and are used for automation purposes. The fourth branch type is the 'user branch'. A user branch is mostly a copy of the configuration in the development branch and only contains the changes an engineer is working on. For example, if an engineer is working on a specific playbook or analytic rule, the branch they are working in only contains changes related to that work. If they want to work on something new, they create a new branch, as sketched below.
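For illustration, creating such a user branch with plain Git could look like this; the branch name, file name and commit message are only examples:

 

# Start from the latest development branch
git checkout development
git pull origin development

# Create a user branch for one piece of work, e.g. a new analytic rule
git checkout -b users/yourname/new-analytic-rule

# Commit the change and push the branch, ready for a pull request back to development
git add SettingFiles/AlertRules.json
git commit -m "Add analytic rule for suspicious sign-ins"
git push -u origin users/yourname/new-analytic-rule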


branches.png


 


Working with multiple branches means that changes move from one branch to another only through a pull request (PR). A PR is a request to merge your code changes into the next branch, for example from development to release. When you file a PR, you are requesting that another developer (e.g., the project maintainer) pulls a branch from your repository into their repository. Click here to read more about this.
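You can raise the PR from the Azure DevOps portal, or from the command line. The sketch below assumes the Azure CLI with the azure-devops extension and reuses the example branch from the previous snippet; the repository name, title and description are placeholders:

 

# Create a pull request from the user branch into development
az repos pr create `
    --repository "azure-sentinel-iac" `
    --source-branch "users/yourname/new-analytic-rule" `
    --target-branch "development" `
    --title "Add analytic rule for suspicious sign-ins" `
    --description "New scheduled rule, validated locally with Pester"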


 


Branch Policies


The great advantage is that a PR gives us the opportunity to configure branch policies. Branch policies help teams protect their important branches of development. Policies enforce your team's code quality and change management standards. Click here to read more about this.


 


Keep in mind that each branch needs its own policy set, and the policy can be different for each branch. For example, you can require one reviewer when something is merged into the development branch, but two reviewers when you merge your changes into the release branch. The idea is to come up with a policy that works for the team and doesn't slow down the team's efficiency. Requiring a minimum of two reviewers in a three-member team is often overkill, because you will normally have to wait longer before your PR gets approved.


 


Below are a couple of policies that, in my opinion, are a good starting point:



  • Require a minimum number of reviewers – Require approval from a specified number of reviewers on pull requests.
Picture4.png

  • Check for linked work items – Encourage traceability by checking for linked work items on pull requests.
Picture5.png

  • Build Validation – Validate code by pre-merging and building pull request changes (more about this later in this post).
Picture6.png


 


Pipeline


Now that we have our configuration stored in a descriptive model and the branches configured correctly, it's time to implement our automated testing and deployment through pipelines. Azure Pipelines is a cloud service that you can use to automatically build and test your code project and make it available to other users. It works with just about any language or project type.
Azure Pipelines combines Continuous Integration (CI) and Continuous Delivery (CD) to constantly and consistently test and build your code and ship it to any target. Click here to read more about this.


 


For this post I will be using Azure DevOps pipelines, but you can achieve the same results with GitHub Actions.


 


Build Validation pipeline


As mentioned earlier, shifting left requires two key DevOps practices: continuous testing and continuous deployment. Continuous testing involves automating tests and running those tests as early and often as possible. Build validation is part of CI, where we test our changes very early and as often as possible.


 


As I described in the Branch Policies section, one of the options when configuring a branch policy is build validation. Here you can set a policy requiring changes in a pull request to build successfully against the protected branch before the pull request can be completed. If a build validation policy is enabled, a new build is queued when a new pull request is created or when changes are pushed to an existing pull request targeting the branch. The build policy then evaluates the results of the build to determine whether the pull request can be completed.


 


For this post I have created some example tests to show you what the possibilities are. For this I am using Pester, a testing and mocking framework for PowerShell.


Pester provides a framework for writing and running tests. Pester is most commonly used for writing unit and integration tests, but it is not limited to just that. It is also a base for tools that validate whole environments, computer deployments, database configurations and so on. Click here to read more about it.


 


The Pester test below checks an analytic rules JSON file to see whether it converts from JSON; if this test fails, there is a JSON syntax error in the file. It then tests whether the configured rule types contain the minimum required properties. Please keep in mind this is just an example to demonstrate the possibilities. You can extend it by validating the values of certain properties, or even by deploying the rules to Azure Sentinel to confirm that they don't contain any errors.
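To make the tests easier to follow, here is a trimmed, illustrative sketch (not the complete AzSentinel schema, and the values are placeholders) of the shape such an AlertRules.json file can take: one top-level property per rule kind, each holding an array of rule objects with the properties the tests check. It is wrapped in a PowerShell here-string so you can run the same ConvertFrom-Json conversion the first test performs.

 

# Illustrative only: a trimmed rules file with one scheduled rule and empty
# sections for the other rule kinds. Values are placeholders.
$exampleRules = @'
{
  "Scheduled": [
    {
      "displayName": "Suspicious sign-in burst",
      "description": "Detects an unusual number of failed sign-ins",
      "severity": "Medium",
      "enabled": true,
      "query": "SigninLogs | where ResultType != 0 | summarize count() by IPAddress",
      "queryFrequency": "PT1H",
      "queryPeriod": "PT1H",
      "triggerOperator": "GreaterThan",
      "triggerThreshold": 25,
      "suppressionDuration": "PT1H",
      "suppressionEnabled": false,
      "tactics": [ "InitialAccess" ],
      "playbookName": ""
    }
  ],
  "Fusion": [],
  "MLBehaviorAnalytics": [],
  "MicrosoftSecurityIncidentCreation": []
}
'@

# The same conversion the first Pester test performs; a syntax error would throw here
$exampleRules | ConvertFrom-Json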


 


Build validation pipeline:


 


 

# Build Validation pipeline
# This pipeline is used to trigger the Pester test files when a PR is created

trigger: none

pool:
  vmImage: 'ubuntu-latest'

steps:

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'Invoke-Pester *.tests.ps1 -OutputFile ./test-results.xml -OutputFormat NUnitXml'
    errorActionPreference: 'continue'
    pwsh: true

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'NUnit'
    testResultsFiles: '**/test-results.xml'
    failTaskOnFailedTests: true

 


 


PowerShell Pester test:


 


 

Describe "Azure Sentinel AlertRules Tests" {

    $TestFiles = Get-ChildItem -Path ./SettingFiles/AlertRules.json -File -Recurse | ForEach-Object -Process {
        @{
            File          = $_.FullName
            ConvertedJson = (Get-Content -Path $_.FullName | ConvertFrom-Json)
            Path          = $_.DirectoryName
            Name          = $_.Name
        }
    }

    It 'Converts from JSON | <File>' -TestCases $TestFiles {
        param (
            $File,
            $ConvertedJson
        )
        $ConvertedJson | Should -Not -Be $null
    }

    It 'Scheduled rules have the minimum elements' -TestCases $TestFiles {
        param (
            $File,
            $ConvertedJson
        )
        $expected_elements = @(
            'displayName',
            'description',
            'severity',
            'enabled',
            'query',
            'queryFrequency',
            'queryPeriod',
            'triggerOperator',
            'triggerThreshold',
            'suppressionDuration',
            'suppressionEnabled',
            'tactics',
            'playbookName'
        )

        $rules = $ConvertedJson.Scheduled

        $rules.ForEach{
            $expected_elements | Should -BeIn $_.psobject.Properties.Name
        }
    }

    It 'Fusion rules have the minimum elements' -TestCases $TestFiles {
        param (
            $File,
            $ConvertedJson
        )
        $expected_elements = @(
            'displayName',
            'enabled',
            'alertRuleTemplateName'
        )

        $rules = $ConvertedJson.Fusion

        $rules.ForEach{
            $expected_elements | Should -BeIn $_.psobject.Properties.Name
        }
    }

    It 'MLBehaviorAnalytics rules have the minimum elements' -TestCases $TestFiles {
        param (
            $File,
            $ConvertedJson
        )
        $expected_elements = @(
            'displayName',
            'enabled',
            'alertRuleTemplateName'
        )

        $rules = $ConvertedJson.MLBehaviorAnalytics

        $rules.ForEach{
            $expected_elements | Should -BeIn $_.psobject.Properties.Name
        }
    }

    It 'MicrosoftSecurityIncidentCreation rules have the minimum elements' -TestCases $TestFiles {
        param (
            $File,
            $ConvertedJson
        )
        $expected_elements = @(
            'displayName',
            'enabled',
            'description',
            'productFilter',
            'severitiesFilter',
            'displayNamesFilter'
        )

        $rules = $ConvertedJson.MicrosoftSecurityIncidentCreation

        $rules.ForEach{
            $expected_elements | Should -BeIn $_.psobject.Properties.Name
        }
    }
}
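
 

You can run the same test file locally before opening a PR. A small sketch, assuming the test above is saved as AlertRules.tests.ps1 in the repository root and that your Pester version accepts the same output parameters the pipeline uses (newer Pester releases use a different configuration syntax):

# Install Pester for the current user if it is not already available
Install-Module Pester -Scope CurrentUser -Force

# Run the test file the same way the build validation pipeline does
Invoke-Pester ./AlertRules.tests.ps1 -OutputFile ./test-results.xml -OutputFormat NUnitXml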

 


 


Build validation test Results


Below you can see an example where the validation failed and blocked our PR from merging into the development branch, because the Pester tests did not all pass.


Picture7.png


 


When you click on the test results, you see that two errors were found in our AlertRules.json file.


Picture8.png


 


As you can see below, the test expects the property "displayName" but instead found "DisplayNameeee".


Picture9.png


 


Deployment pipeline


As mentioned earlier, shifting left requires two key DevOps practices: continuous testing and continuous deployment. Continuous deployment automates the provisioning and deployment of new builds, enabling continuous testing to happen quickly and efficiently.


 


For the deployment I have chosen to use a multi-stage pipeline. Azure DevOps multi-stage pipelines are an exciting feature! It was already possible to define CI pipelines in Azure DevOps using YAML-formatted files. With multi-stage pipelines, it is also possible to define CI and CD pipelines as code and version them the same way code is versioned. With this we can author a single pipeline template that can be used across environments. Another really nice feature of multi-stage pipelines is that you can configure environment policies for each stage. Environments are the way multi-stage YAML pipelines handle approvals. I will explain environments in more detail below.


Picture10.png


 


Pipeline Environment


An environment is a collection of resources, such as Kubernetes clusters and virtual machines, that can be targeted by deployments from a pipeline. Typical examples of environment names are Dev, Test, QA, Staging, and Production.


 


Environments are also the way multi-stage YAML pipelines handle approvals. If you are familiar with the classic pipelines, you know that you can set up pre- and post-deployment approvals directly from the designer.


 


For this pipeline I have created three environments in Azure DevOps, each representing an Azure Sentinel environment. You can of course have more environments, for example one per customer.


 


With environments we can, for example, configure that deployments to the dev environment don't require any approval, while a deployment to staging requires one person to approve and a deployment to production requires two.


 


Approvals and other checks are not defined in the YAML file, so that users who can modify the pipeline YAML file cannot also modify the checks and approvals.


 


How to create a new environment


Go to your Azure DevOps project and click on Environments under the Pipelines tab, then click to create a new environment. Enter the name that you want to use and select None under Resource.


 


After creating the environment, click on it and then click on the three dots in the upper-right corner. Here you can select "Approvals and checks" to add the users or groups that need to approve the deployment.


Picture11.png


 


Picture12.png


Click on "See all" for an overview of all the other checks that you can configure.


 


Picture13.png


 


Multi-stage pipeline


Below is our main pipeline, which contains all the stages of the deployment. As you can see, I have configured three stages, each stage representing an Azure Sentinel environment in this case. Stages are the major divisions in a pipeline: 'build this app', 'run these tests', and 'deploy to pre-production' are good examples. They are a logical boundary in your pipeline at which you can pause the pipeline and perform various checks.


 


Every pipeline has at least one stage, even if you do not explicitly define it. Stages may be arranged into a dependency graph: 'run this stage before that one'. In this case they run in sequence, but you can manage the order and dependencies through conditions. The main pipeline is here only used to define our stages/environments and to store the specific parameters for each stage. You don't see the actual tasks in this pipeline, because I'm making use of a pipeline template. You can read more about this great functionality below.


 


In the example below you also see that I have configured conditions on the Staging and Production stages. These conditions check whether the pipeline was triggered from the release or master branch; if not, those stages are automatically skipped.


 


Tip: if you name your main template 'Azure-Pipeline.yml' and put it in the root of your branch, Azure DevOps will automatically create the pipeline for you.


 


 

# This is the main pipeline which covers all the stages
# The tasks are stored in pipelines/steps.yml

stages:
  - stage: Dev
    displayName: 'Deploying to Development environment'
    jobs:
      - template: pipelines/steps.yml
        parameters:
          environment: Dev
          azureSubscription: ''
          WorkspaceName: '' # Enter the Azure Sentinel Workspace name
          SubscriptionId: 'cd466daa-3528-481e-83f1-7a7148706287'
          ResourceGroupName: ''
          ResourceGroupLocation: 'westeurope'
          EnableSentinel: true
          analyticsRulesFile: SettingFiles/AlertRules.json # leave empty if you don't want to configure Analytic rules
          huntingRulesFile: SettingFiles/HuntingRules.json # leave empty if you don't want to configure Hunting rules
          PlaybooksFolder: Playbooks/ # leave empty if you don't want to configure Playbooks
          ConnectorsFile: SettingFiles/DataConnectors.json # leave empty if you don't want to configure Connectors
          WorkbooksFolder: Workbooks/
          WorkbookSourceId: '' # leave empty if you don't want to configure Workbooks

  - stage: Staging
    displayName: 'Deploying to Acceptance environment'
    condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/release'))
    dependsOn: Dev    # this stage runs after Dev
    jobs:
      - template: pipelines/steps.yml
        parameters:
          environment: Staging
          azureSubscription: ''
          WorkspaceName: '' # Enter the Azure Sentinel Workspace name
          SubscriptionId: 'cd466daa-3528-481e-83f1-7a7148706287'
          ResourceGroupName: ''
          ResourceGroupLocation: 'westeurope'
          EnableSentinel: true
          analyticsRulesFile: SettingFiles/AlertRules.json # leave empty if you don't want to configure Analytic rules
          huntingRulesFile: SettingFiles/HuntingRules.json # leave empty if you don't want to configure Hunting rules
          PlaybooksFolder: Playbooks/ # leave empty if you don't want to configure Playbooks
          ConnectorsFile: SettingFiles/DataConnectors.json # leave empty if you don't want to configure Connectors
          WorkbooksFolder: Workbooks/
          WorkbookSourceId: '' # leave empty if you don't want to configure Workbooks

  - stage: Production
    displayName: 'Deploying to Production environment'
    condition: and(succeeded(), eq(variables['build.sourceBranch'], 'refs/heads/master'))
    dependsOn: Dev    # this stage runs after Dev
    jobs:
      - template: pipelines/steps.yml
        parameters:
          environment: Production
          azureSubscription: ''
          WorkspaceName: ''  # Enter the Azure Sentinel Workspace name
          SubscriptionId: 'cd466daa-3528-481e-83f1-7a7148706287'
          ResourceGroupName: ''
          ResourceGroupLocation: 'westeurope'
          EnableSentinel: true
          analyticsRulesFile: SettingFiles/AlertRules.json # leave empty if you don't want to configure Analytic rules
          huntingRulesFile: SettingFiles/HuntingRules.json # leave empty if you don't want to configure Hunting rules
          PlaybooksFolder: Playbooks/ # leave empty if you don't want to configure Playbooks
          ConnectorsFile: SettingFiles/DataConnectors.json # leave empty if you don't want to configure Connectors
          WorkbooksFolder: Workbooks/
          WorkbookSourceId: '' # leave empty if you don't want to configure Workbooks

 


 


Below you see an example where the pipeline is triggered from the master branch: it first deploys to the dev environment and then to the production environment, but skips the staging environment.


Picture14.png


 


Below you see another example, where the pipeline is triggered from a branch other than release or master. In this case the changes are only deployed to the development environment and the other environments are skipped automatically:


Picture15.png


 


Pipeline Template


Pipeline Templates let us define reusable content, logic, and parameters. Templates function in two ways. You can insert reusable content with a template, or you can use a template to control what is allowed in a pipeline.


 


If a template is used to include content, it functions like an include directive in many programming languages. Content from one file is inserted into another file. When a template controls what is allowed in a pipeline, the template defines logic that another file must follow. Click here to read more about this.


 


In our case the stages define our Azure Sentinel environments, so the biggest differences are things like subscription name, Sentinel workspace name, etc.


 


Because most of the steps are the same for all environments, we can keep all the steps standardized and simple to use. This also makes it easier to update our pipelines. For example, if you decide to add an additional step that needs to be implemented for all your environments/customers, you only need to update the steps.yml file instead of all your pipelines. This reduces code duplication, which makes your pipelines more resilient to mistakes. You can also be sure that a new step is first tested in the dev stage before it reaches production. So, to sum it up, you now have a CI/CD process for your pipeline too.


 


What you also see here is that I have configured conditions for all the steps, which makes the template much more dynamic to use. For example, if you haven't provided a settings file for hunting rules, it means you don't want to configure hunting rules, so the step that imports hunting rules is automatically skipped. This way we can onboard customers with the same template file but with different input.


 


 

# This is the Template that is used from the Main pipeline
# This template contains all the required steps

parameters:
  - name: environment
    displayName: environment name
    type: string

  - name: azureSubscription
    displayName: Enter the Azure service connection
    type: string

  - name: SubscriptionId
    displayName: Enter the subscription id where the Azure Sentinel workspace is deployed
    type: string

  - name: WorkspaceName
    displayName: Enter the Azure Sentinel Workspace name
    type: string

  - name: EnableSentinel
    displayName: Enable Azure Sentinel if not enabled
    type: boolean

  - name: analyticsRulesFile
    displayName: Path to the Azure Sentinel analytics rule file
    type: string

  - name: huntingRulesFile
    displayName: Path to the Azure Sentinel hunting rule file
    type: string

  - name: PlaybooksFolder
    displayName: The path to the folder with the playbook JSON files
    type: string

  - name: ConnectorsFile
    displayName: The path to DataConnector json file
    type: string

  - name: WorkbooksFolder
    displayName: The path to the folder which contains the Workbooks JSON files
    type: string

  - name: WorkbookSourceId
    displayName: The id of resource instance to which the workbook will be associated
    type: string

  - name: ResourceGroupName
    displayName: Enter the Resource group name for Playbooks and Workbooks
    type: string

  - name: ResourceGroupLocation
    displayName: Enter the Resource group location for Playbooks and Workbooks
    type: string

jobs:
  - deployment: 'Sentinel'
    displayName: DeploySentinelSolution
    pool:
      vmImage: 'ubuntu-latest'
    environment: ${{ parameters.environment }}
    strategy:
      runOnce:
        deploy:
          steps:
            - checkout: self
            - task: PowerShell@2
              displayName: 'Prepare environment'
              inputs:
                targetType: 'Inline'
                script: |
                  Install-Module AzSentinel -Scope CurrentUser -Force
                  Import-Module AzSentinel
                pwsh: true

            - ${{ if eq(parameters.EnableSentinel, true) }}:
              - task: AzurePowerShell@5
                displayName: 'Enable and configure Azure Sentinel'
                inputs:
                  azureSubscription: ${{ parameters.azureSubscription }}
                  ScriptType: 'InlineScript'
                  Inline: |
                    Set-AzSentinel -SubscriptionId ${{ parameters.SubscriptionId }} -WorkspaceName ${{ parameters.WorkspaceName }} -Confirm:$false
                  azurePowerShellVersion: 'LatestVersion'
                  pwsh: true

            - ${{ if ne(parameters.PlaybooksFolder, '') }}:
              - task: AzurePowerShell@4
                displayName: 'Create and Update Playbooks'
                inputs:
                  azureSubscription: ${{ parameters.azureSubscription }}
                  ScriptType: 'InlineScript'
                  Inline: |
                    $armTemplateFiles = Get-ChildItem -Path ${{ parameters.PlaybooksFolder }} -Filter *.json

                    $rg = Get-AzResourceGroup -ResourceGroupName ${{ parameters.ResourceGroupName }} -ErrorAction SilentlyContinue
                    if ($null -eq $rg) {
                      New-AzResourceGroup -ResourceGroupName ${{ parameters.ResourceGroupName }} -Location ${{ parameters.ResourceGroupLocation }}
                    }

                    foreach ($armTemplate in $armTemplateFiles) {
                      New-AzResourceGroupDeployment -ResourceGroupName ${{ parameters.ResourceGroupName }} -TemplateFile $armTemplate
                    }
                  azurePowerShellVersion: LatestVersion
                  pwsh: true

            - ${{ if ne(parameters.analyticsRulesFile, '') }}:
              - task: AzurePowerShell@5
                displayName: 'Create and Update Alert Rules'
                inputs:
                  azureSubscription: ${{ parameters.azureSubscription }}
                  ScriptType: 'InlineScript'
                  Inline: |
                    Import-AzSentinelAlertRule -SubscriptionId ${{ parameters.SubscriptionId }} -WorkspaceName ${{ parameters.WorkspaceName }} -SettingsFile ${{ parameters.analyticsRulesFile }}
                  azurePowerShellVersion: 'LatestVersion'
                  pwsh: true

            - ${{ if ne(parameters.huntingRulesFile, '') }}:
              - task: AzurePowerShell@5
                displayName: 'Create and Update Hunting Rules'
                inputs:
                  azureSubscription: ${{ parameters.azureSubscription }}
                  ScriptType: 'InlineScript'
                  Inline: |
                    Import-AzSentinelHuntingRule -SubscriptionId ${{ parameters.SubscriptionId }} -WorkspaceName ${{ parameters.WorkspaceName }} -SettingsFile ${{ parameters.huntingRulesFile }}
                  azurePowerShellVersion: 'LatestVersion'
                  pwsh: true

            - ${{ if ne(parameters.ConnectorsFile, '') }}:
              - task: AzurePowerShell@5
                displayName: 'Create and Update Connectors'
                inputs:
                  azureSubscription: ${{ parameters.azureSubscription }}
                  ScriptType: 'InlineScript'
                  Inline: |
                    Import-AzSentinelDataConnector -SubscriptionId ${{ parameters.SubscriptionId }} -Workspace ${{ parameters.WorkspaceName }} -SettingsFile ${{ parameters.ConnectorsFile }}
                  azurePowerShellVersion: LatestVersion
                  pwsh: true

            - ${{ if ne(parameters.WorkbooksFolder, '')}}:
              - task: AzurePowerShell@4
                displayName: 'Create and Update Workbooks'
                inputs:
                  azureSubscription: ${{ parameters.azureSubscription }}
                  ScriptType: 'InlineScript'
                  Inline: |
                    $armTemplateFiles = Get-ChildItem -Path ${{ parameters.WorkbooksFolder }} -Filter *.json

                    $rg = Get-AzResourceGroup -ResourceGroupName ${{ parameters.ResourceGroupName }} -ErrorAction SilentlyContinue
                    if ($null -eq $rg) {
                      New-AzResourceGroup -ResourceGroupName ${{ parameters.ResourceGroupName }} -Location ${{ parameters.ResourceGroupLocation }}
                    }

                    foreach ($armTemplate in $armTemplateFiles) {
                      New-AzResourceGroupDeployment -ResourceGroupName ${{ parameters.ResourceGroupName }} -TemplateFile $armTemplate -WorkbookSourceId ${{ parameters.WorkbookSourceId }}
                    }
                  azurePowerShellVersion: LatestVersion
                  pwsh: true

 


 


Getting started


So now that we have discussed all the important topics, we can start creating things in Azure DevOps. Below is a high-level list of the tasks we will perform; a few of them can also be scripted with the Azure DevOps CLI, as sketched after the list.


 


Steps:



  1. Create an Azure DevOps organization – link

  2. Create an Azure DevOps Project – link

  3. Create a service connection to your Azure environment/s – link

  4. Get the code into your repository by importing it from GitHub – link

  5. Update the “Azure-Pipeline.yml” file and fill in all the applicable parameters

  6. Create and configure the environments – link

  7. Create Build validation pipeline from YAML by importing Build.Validation.yml

  8. Create the 3 default branches (copy from master) – link

  9. Configure Git branch policies for all 3 branches – link

  10. Create Deployment pipeline from YAML by importing Azure-Pipeline.yml
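
 

If you prefer scripting parts of this setup, several of the steps above can also be done with the Azure DevOps CLI. The sketch below is a rough outline rather than a full bootstrap: the organization, project and repository names are placeholders, the GitHub URL must point at your own fork of the code, and the exact flags may differ per CLI version.

# Assumes the Azure CLI with the azure-devops extension (az extension add --name azure-devops)
az login
az devops configure --defaults organization=https://dev.azure.com/<your-org>

# Step 2: create the project (a default Git repository with the same name is created with it)
az devops project create --name "Sentinel-IaC"

# Step 4: import the example code from your GitHub fork into the default repository
az repos import create --git-source-url "https://github.com/<your-account>/<your-fork>.git" `
    --repository "Sentinel-IaC" --project "Sentinel-IaC"

# Step 8: create the development branch from master (repeat for release)
$masterId = az repos ref list --repository "Sentinel-IaC" --project "Sentinel-IaC" `
    --filter "heads/master" --query "[0].objectId" -o tsv
az repos ref create --name "refs/heads/development" --repository "Sentinel-IaC" `
    --project "Sentinel-IaC" --object-id $masterId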


 


The end..


Finally! We’ve reached the end of this blog. If you read all of it: thank you for your dedication! I know it took some time. Anyway, I hope that the blog has taught you something about DevOps and the ‘why’ behind this way of working with Azure Sentinel.

Do you still have any questions regarding this subject? Know that you can always hit me up ;).
All the code is published on my GitHub and is free for use. Do you have any ideas or contributions? Please add this on the GitHub project. This way we can spread our knowledge and make the community better!


 

Updating Microsoft Certifications: How we keep them relevant

This article is contributed. See the original author and article here.

Microsoft is committed to reviewing certifications regularly to help ensure that they remain relevant and technically accurate and that they're assessing the skills needed to thrive in a cloud-based world. Our reviews help ensure that we're evaluating the right skills for a given job role—and only what needs to be evaluated for that particular role. This is important to understand because we cannot assess everything that's required for success, so we have to prioritize (although that's not to say that Microsoft won't create training for those additional skills). In other words, the learning associated with a given job role is more comprehensive than what we can measure on the exam.

Here’s how it works…Microsoft reviews the objective domain every two months for our role-based and specialty exams to make sure they stay up to date and relevant. This review typically takes the form of revising, removing, and, occasionally, adding objectives. In addition, every year, we’re committed to reviewing the job task analysis (JTA)—the basis of the certification. (As you can see with all of these reviews, we’re serious about trying to maintain the relevance, integrity, and value of our certifications!)

As some of you might know, the JTA is the process through which we define the knowledge, skills, and abilities that are critical to success in a job role. The JTA is the foundation for each of our role-based and specialty certifications and for the learning content and hands-on experiences that we build so you can practice your skills.

Often the changes in these JTA reviews (we refer to them as refreshes) are small, reflecting what I like to call evolutions of the job role—small updates that accumulate over time to become something bigger—but these are baby steps that don’t affect someone’s ability to pass the exam. With small changes like these, we’ll update the exam through an in-service update. This means that the update is seamlessly integrated in one of our regularly scheduled exam updates that happen every two months. To support your exam preparation, we provide the details of these updates on our exam details pages, with a marked-up version of the objective domain. This allows you to clearly see what’s changing and when those changes go into effect. Because we cannot proactively notify test takers of impending changes (privacy rules affect our ability to contact you), you need to be proactive about regularly checking the exam details pages to understand when updates are being made. We post these updates at least 30 days in advance of the date when they’ll appear on the exam.

Occasionally, those changes are bigger, reflecting what I like to call a revolution in the job role. When this happens, there are several possible outcomes. Let’s talk about each of them.

New exams with new exam numbers

As long as the job role is still relevant and important in the industry, when the JTA reveals a significant change, we’ll create a new exam for that job role—and it will have a new exam number. This alerts the test taker that the exam has changed quite a bit and lets them know exactly what will be covered on the exam that they choose. New exams with new numbers are needed when more than one-third of the exam content has changed, meaning that the changes could affect someone’s ability to pass the exam if they didn’t realize that the exam had been updated. In other words, we change the exam number because it is the most effective way to communicate to you as a test taker that the exam content has changed significantly—so you can prepare accordingly.

Because it’s a new exam, it’s first available as a beta exam. This gives you the choice to continue preparing to take the “original” version of the exam or to take the new exam that aligns to the revised job role. The choice is yours, and we always keep the old exam in market for 90 days after the new exam is launched. This gives you time to transition to the new exam. If you’ve been preparing for the current version of the exam, you can still take it during this transition period if you want; however, that version will retire at the end of the 90-day window—no extensions and no exceptions, so plan accordingly. If you have to retake the exam, it might no longer be available.

Splitting or consolidating exams

Depending on the nature of the changes made during the JTA refresh, we might also decide to split an exam into two—or we might decide to consolidate two exams into one. We split an exam when the job role has expanded to such an extent that we can no longer cover the core skills and abilities in a single four-hour exam (our maximum seat time). We do everything we can to keep the certification paths straightforward, with a single exam. But, at times, it’s just not possible to do this and still provide a valid and reliable measurement of the required skills.

On the flip side, we might decide that requiring multiple exams to earn a certification is overly burdensome, doesn’t reflect market expectations, or is simply no longer needed because we can design the exam in such a way that if someone demonstrates competence in one area, we can assume competence in other areas (for example, measuring a more difficult or complex skill and assuming that the test taker has the foundational knowledge or skills needed to perform that more complex task). We rarely consolidate exams as a result of a contracted job role; it’s usually because we’ve learned something about the role that allows us to reimagine the assessment process with fewer exams.

Note that in both the consolidation or splitting of exams, new exams with new numbers will be created and will go through the beta exam process we described earlier.

Renaming certifications and/or exams

In rare cases, the JTA refresh might indicate that the name of the job role has changed or that we need to modify how we’re referring to it in some meaningful way. This means that we need to change the name of the certification and, usually, by extension, the name of the exam. The new name will reflect our vision for the job role and the required skills based on industry and subject matter expertise input. As an example, we renamed the Azure DevOps Engineer certification to DevOps Engineer to better reflect the future direction of that job role and to remove the confusion that this certification was based on Azure DevOps (the product) rather than on DevOps Engineer (the job role).

Name changes are generally transparent to candidates. If you’ve already earned the certification and its name changes, the new name will be reflected on your transcript. Rarely does this result in an exam number change.

Retiring certifications

With the rapid pace of change in cloud-based job roles, it shouldn’t be surprising that some job roles become less relevant over time or that this happens more quickly than it did in the past. Although these decisions sometimes result from the JTA refresh process, we often learn of the possible need to reimagine a certification in completely different way before the annual JTA refresh is due. In those cases, we’ll work with internal and external subject matter experts to decide what to do about the existing certification. If the job role as we originally imagined it no longer makes sense, given the direction the industry is headed, we’ll retire the certification. In most cases, we’ll replace it with a job role certification that’s more relevant and valuable to you, our partners, and organizations as they continue on their digital transformation journeys. This is what happened when we retired Teamwork Administrator and replaced it with Teams Administrator.

When we choose to retire a certification, we’ll provide at least 90 days’ notice so that you can complete the certification requirements if you so choose. But keep in mind that after the exams retire, you won’t be able to take them, so make sure you pass all the certification requirements before that date. In addition, you won’t be able to renew that certification—meaning that you won’t be able to extend the expiration date on your transcript beyond the date that it expires by default after earning it.

Answers to some of your questions

Based on my experience with certification and exam changes, I have an idea of some of the questions you’re likely to have. Here are some of the most commonly asked questions, along with their answers.

Does the name of the certification change when you change an exam number?

  1. Typically, the name of the associated certification doesn’t change when we release a new exam with a different number. Regardless of whether you earned it with the old or new exam, you’ll have the same certification, but the skills that appear on your badge will reflect those that were part of the exam(s) you took to earn it.

I’ve noticed that you released a new exam for the job role, but I’ve been studying for the current version of the exam for that job role. What should I do—continue on this path or start on the new one?

  1. Truly, this is up to you. You need to do what makes sense for your certification journey. You’ll eventually be tested on the new skills as part of our certification renewal requirements, even if you choose to take the old version now. The key consideration is whether you want to be validated on skills that Microsoft believes are more closely aligned to the job role today or to continue as you’ve planned and to update your skills through our recertification process and your own learning journey after you earn the certification with the version of the exam in market today. We offer sufficient notice of these changes so you can make the decision that best fits your journey.

If you choose to take the current version of the exam, check its retirement date and plan your exam accordingly. Note that after the exam is retired, you won’t be able to retake it if you fail your attempt.

When you make changes to an exam, will the self-paced learning content and instructor-led training be updated at the same time?

  1. Yes. We’re committed to having the self-paced learning content updated within seven days of the release of the updated exam, and the instructor-led training will be updated within 30 days. If you’re planning to take an instructor-led training course, please make sure that the provided learning is for the version of the exam that you want to take. Some partners will continue to offer the training associated with the old version of the exam until the exam retires, some learning partners will provide both versions, and some will transition immediately to the new version. Be sure to confirm with the partner that’s delivering the training, so you get the training you need.

What’s a beta exam?

  1. Exam questions are pilot tested in an exam-like situation known as a beta exam. This helps to ensure that only the best content is included in the live exam. Through this process, Microsoft gathers psychometric data on the quality of the questions. Based on this data and on candidate comments, we work with subject matter experts to finalize the items that will be scored and that will appear on the live exam and we set the cut score. For this reason, if candidates take the beta exam, they won’t receive a score immediately. However, they’ll have a meaningful voice in what’s included on the final exam. This is one of the few opportunities in which someone outside the exam development subject matter experts can have significant influence on the exam content.

When you update an exam number, why is the new exam released as a beta version?

  1. Because many of the skills measured by these exams are new (which is why we use a new exam number), we must develop many new questions to cover those skills. These new questions make up a substantial portion of the new exam and need to be pilot tested to help ensure that they are psychometrically sound, valid, reliable, and fair. The beta process allows us to validate the quality of the items that we include on the live exam and lets us know that we’re measuring the right skills.

Through the beta process, we give candidates an active voice into the certification and exam design. Their data and comments are used to evaluate the quality of the items, identify fixes needed to improve accuracy and clarity, and flag questions that must be removed. The data and information we gather through the beta process is, perhaps, the most important part of ensuring the validity and overall quality of the exam.

When will Microsoft update certification and exam pages with changes?

  1. We continuously update exam pages with upcoming and applied updates. Candidates should visit those pages frequently to stay up to date with changes to an exam and to learn when they’ll go into effect.

Settlement requires Zoom to better secure your personal information

This article was originally posted by the FTC. See the original article here.

Daily life has changed a lot since the pandemic started. Because face-to-face interactions aren’t possible for so many of us, we’ve turned to videoconferencing for work meetings, school, catching up with our friends, even seeing the doctor. When we rely on technology in these new ways, we share a lot of sensitive personal information. We may not think about it, but companies know they have an obligation to protect that information. The FTC just announced a case against videoconferencing service Zoom about the security of consumers’ information and videoconferences, also known as “Meetings.”

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Azure Advocates Weekly Round Up – .NET Learn Challenge, MS Exam Prep help and more!

This article is contributed. See the original author and article here.

It’s November, .NET November that is. Check out the .NET Learn Challenge and get over 35 hours of FREE learning. More exam prep posts from the team this week, amongst many other great code samples, blog posts and videos!


 


Datacenter Migration & Azure Migrate – Sarah Lean
Sarah Lean


In this Skylines Summer Session, Sarah Lean, #Microsoft #Cloud Advocate, is interviewed by Richard Hooper and Gregor Suttie and discusses


 


 


Shayne sits down with LayalCodesIt on Twitch
Shayne Boyer


Hang out with Shayne on Layla’s channel this week to hear more about the .NET Learn Challenge and working within Developer Relations at Microsoft.


 


How to Create a No Code AI App with Azure Cognitive Services and Power Apps
Aysegul Yonet


You might have an idea for an application using AI and not have anyone to build it. You might be a programmer and want to try out your ideas and Azure


 


Azure Stack Hub Partner Solutions Series – telkomtelstra
Thomas Maurer


telkomtelstra is a Service Provider working with Enterprise customers across Indonesia. Check out their journey with Azure Stack Hub!


 


VLC Energy Optimization With GPU | Sustainable Software
Asim Hussain


For the past few years, sustainable software engineering has arisen as one of the major topics in the daily discussions I have with software developers. Due to the advancements in technology, as well as the increasing awareness we share on climate change and the overall impact of tech on the environment,


 


microsoft/iot-curriculum
Jim Bennett


Hands on labs and content for students and educators to learn and teach the Internet of Things at schools, universities, coding clubs, community colleges and bootcamps – microsoft/iot-curriculum


 


What is Azure Defender?
Sonia Cuff


Learn about the improved experience for managing security across hybrid and multi-cloud environments with the Azure Defender XDR product.


 


All you need to know about Microsoft Exams
Sarah Lean


Taking any exam can be a stressful experience which requires a lot of preparation and planning.  Let’s look at the process for studying, booking and sitting a Microsoft exam. 


 


Introduction to #WebXR with Ayşegül Yönet
Aysegul Yonet


WebXR is the latest evolution in the exploration of virtual and augmented realities. Sounds interesting, right? Dive into the Basics of WebXR with Ayşegül.


 


Build a web API with Node.js and Express
Yohan Lasorsa


Learn how to use Node.js and Express to create a RESTful web API with this series of bite-sized videos for beginners. 


 


Microsoft 365 PnP Weekly – Episode 103 – Microsoft 365 Developer Blog
Waldek Mastykarz


Connect to the latest conferences, trainings, and blog posts for Microsoft 365, Office client, and SharePoint developers. Join the Microsoft 365 Developer Program.


 


.NET Learning Challenge!
Shayne Boyer


Compete, Learn, and Develop Skills https://aka.ms/dotnetskills November 1, 2020 – November 30th. 


 


How to get started learning Microsoft Azure and Cloud Computing
Thomas Maurer


Here is how to get started with learning Microsoft Azure and Cloud Computing. Check out this blog and learn how to get started with Azure!


 


Sarah Lean


Interview with Sarah Lean, Senior Cloud Advocate at Microsoft


 


Experts Live Switzerland 2020: Azure Hybrid – Learn about Hybrid Cloud Management
Thomas Maurer


Now the recording of my session: “Azure Hybrid – Learn about Hybrid Cloud Management” is available! With an overview on Microsoft Azure Arc!


 


AZ-220 IoT Developer Certification Study Guide
Paul DeCarlo


Microsoft offers an official IoT Developer Specialty Certification which requires passing the AZ-220 Certification Exam . 


 


Environment Monitor Lab – Using a Raspberry Pi
Jim Bennett


A walkthrough of the Environment Monitor IoT lab available at https://aka.ms/iot-curriculum/env-monitor using a Raspberry Pi and a Grove Pi+ kit.


 


Deploying Azure Functions via GitHub Actions without Publish Profile
Justin Yoo


This post shows how to deploy Azure Functions app via GitHub Actions, without having to know the publish profile.


 


Building a Video Chat App, Part 3 – Displaying Video | LINQ to Fail
Aaron Powell


We’ve got access to the camera, now to display the feed


 


Azure Thursday – 5 November 2020
Henk Boelman


View the schedule on: https://www.azurethursday.com


 


Quick tip: download user or group profile picture using Microsoft Graph JS SDK
Waldek Mastykarz


When building your application on Microsoft 365 you might need to download the profile picture of a user or group. Here is how to do it using the Microsoft Graph SDK.


 


Evénements en ligne : Intelligence Artificielle, Machine Learning et Azure
Maud Levy


For the month of November, Microsoft is offering several online conferences in French about Mach…


 


Dataminds Connect – Taking Models To The Next Level With Azure Machine Learning
Henk Boelman


 


 



 


C# Corner Azure Learning and Microsoft Certification – AMA ft. Thomas Maurer
Thomas Maurer


Last week I had the honor to be a guest in the C# Corner Live AMA (Ask Me Anything) about Azure Learning and Microsoft Certification.


 

Small Basic Forum Migrated to Small Basic on Q&A

Small Basic Forum Migrated to Small Basic on Q&A

This article is contributed. See the original author and article here.

The Small Basic Forum was archived on November 5, 2020. Prior to this, Small Basic on Q&A started on October 10, 2020.

 

Small Basic Forum

Small Basic Forum started on October 24, 2008 and has been a platform for our community for 12 years.

 

SmallBasicForumClosed.png

 

Small Basic on Q&A

Small Basic on Q&A is a new platform for our community. The following four tags have been prepared for the Small Basic community.

  • small-basic-general – Small Basic general questions.
  • small-basic-community-challenges – Post and reply with community challenges, to experiment with new projects and learn more in the process.
  • small-basic-featured-program – Post a program that you’d like featured on the SmallBasic.com site and blog.
  • small-basic-discussion – Post any general discussion topics or any questions that are meant to spark discussion (where you’re not trying to get an answer).

 

SmallBasicOnQnA.png

 

Getting Started

The first step to join Small Basic on Q&A is to create your account for Microsoft Q&A. Please check the following articles before you post.