Meet Jenny Lay-Flurrie, Chief Accessibility Officer at Microsoft

When it comes to assistive technologies, the person leading the way for Microsoft is its Chief Accessibility Officer, Jenny Lay-Flurrie. She is from Birmingham, England, is profoundly deaf, works in Seattle, Washington, USA, and is passionate about putting inclusion at the heart of corporate culture. This is no small undertaking, as it requires a paradigm shift in corporate thinking, but Jenny has never shied away from a fight. This October 1, 2020 interview may be your first look at this remarkable woman; for a more in-depth profile, see https://news.microsoft.com/stories/people/jenny-lay-flurrie.html.

Large-scale Data Analytics with Azure Synapse – Workspaces with CLI

This article is contributed. See the original author and article here.

One of the challenges of large-scale data analysis is extracting value from data with the least effort. Doing so often involves multiple stages: provisioning infrastructure, accessing or moving data, transforming or filtering data, analyzing and learning from data, automating data pipelines, connecting with other services that provide input or consume output data, and more. Quite a few tools are available for these tasks, but it's usually difficult to have them all in one place and easily connected.

 

If this article was helpful or interesting to you, follow @lenadroid on Twitter.

 

Introduction

This is the first article in a series covering what Azure Synapse is and how to start using it with the Azure CLI. Make sure your Azure CLI is installed and up to date, and add the synapse extension if necessary:

$ az extension add --name synapse

 

What is Azure Synapse?
In Azure, we have the Synapse Analytics service, which aims to provide managed support for distributed data-analysis workloads with less friction. If you're coming from a GCP or AWS background, Azure Synapse's counterparts in other clouds are products like BigQuery or Redshift. Azure Synapse is currently in public preview.

 

Serverless and provisioned capacity
In the world of large-scale data processing and analytics, features like autoscaling clusters and pay-for-what-you-use have become must-haves. In Azure Synapse, you can choose between serverless and provisioned capacity, depending on whether you need to flexibly adjust to bursts or have a predictable resource load.

 

Native Apache Spark support
Apache Spark has demonstrated its power in data processing for both batch and real-time streaming models. It offers great Python and Scala/Java support for data operations at large scale. Azure Synapse provides built-in support for data analytics using Apache Spark. It's possible to create an Apache Spark pool, upload Spark jobs, or create Spark notebooks for experimenting with the data.
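As a quick sketch of what that looks like from the CLI, the following creates a small Spark pool (the variables match the walkthrough later in this article; the Spark version, node size, and node count are example values you should adjust):

```shell
# Sketch: create a small Spark pool in the Synapse workspace.
# $SynapseWorkspaceName and $ResourceGroup come from the walkthrough below;
# the pool name, version, node size, and count are example values.
SparkPoolName='sparkpool01'
az synapse spark pool create \
  --name $SparkPoolName \
  --workspace-name $SynapseWorkspaceName \
  --resource-group $ResourceGroup \
  --spark-version 2.4 \
  --node-count 3 \
  --node-size Small
```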

 

SQL support
In addition to Apache Spark support, Azure Synapse has excellent support for data analytics with SQL.
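For example, a provisioned SQL pool can be created with a command along these lines (a sketch; the pool name and performance level are example values, and the variables match the walkthrough below):

```shell
# Sketch: create a provisioned (dedicated) SQL pool in the workspace.
# $SynapseWorkspaceName and $ResourceGroup come from the walkthrough below;
# DW100c is the smallest performance level and just an example.
SqlPoolName='sqlpool01'
az synapse sql pool create \
  --name $SqlPoolName \
  --workspace-name $SynapseWorkspaceName \
  --resource-group $ResourceGroup \
  --performance-level DW100c
```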

 

Other features
Azure Synapse provides smooth integration with Azure Machine Learning and Spark ML. It enables convenient data ingestion and export using Azure Data Factory, which connects to many Azure and third-party data sources and sinks. Data can be effectively visualized with Power BI.

At Microsoft Build 2020, Satya Nadella announced Synapse Link functionality that will help get insights from real-time transactional data stored in operational databases (e.g. Cosmos DB) with a single click, without the need to manage data movement.

 

Get started with Azure Synapse Workspaces using Azure CLI

Prepare the necessary environment variables:

$ StorageAccountName='<come up with a name for your storage account>'
$ ResourceGroup='<come up with a name for your resource group>'
$ Region='<come up with a name of the region, e.g. eastus>'
$ FileShareName='<come up with a name of the storage file share>'
$ SynapseWorkspaceName='<come up with a name for Synapse Workspace>'
$ SqlUser='<come up with a username>'
$ SqlPassword='<come up with a secure password>'

Create a resource group as a container for your resources:

$ az group create --name $ResourceGroup --location $Region

Create a Data Lake storage account:

$ az storage account create \
  --name $StorageAccountName \
  --resource-group $ResourceGroup \
  --location $Region \
  --sku Standard_GRS \
  --kind StorageV2

The output of this command will be similar to:

{
  "accessTier": "Hot",
  "creationTime": "2020-05-19T01:32:42.434045+00:00",
  "customDomain": null,
  "enableAzureFilesAadIntegration": null,
  "enableHttpsTrafficOnly": false,
  "encryption": {
    "keySource": "Microsoft.Storage",
    "keyVaultProperties": null,
    "services": {
      "blob": {
        "enabled": true,
        "lastEnabledTime": "2020-05-19T01:32:42.496550+00:00"
      },
      "file": {
        "enabled": true,
        "lastEnabledTime": "2020-05-19T01:32:42.496550+00:00"
      },
      "queue": null,
      "table": null
    }
  },
  "failoverInProgress": null,
  "geoReplicationStats": null,
  "id": "/subscriptions/<subscription-id>/resourceGroups/Synapse-test/providers/Microsoft.Storage/storageAccounts/<storage-account-name>",
  "identity": null,
  "isHnsEnabled": null,
  "kind": "StorageV2",
  "lastGeoFailoverTime": null,
  "location": "eastus",
  "name": "<storage-account-name>",
  "networkRuleSet": {
    "bypass": "AzureServices",
    "defaultAction": "Allow",
    "ipRules": [],
    "virtualNetworkRules": []
  },
  "primaryEndpoints": {
    "blob": "https://<storage-account-name>.blob.core.windows.net/",
    "dfs": "https://<storage-account-name>.dfs.core.windows.net/",
    "file": "https://<storage-account-name>.file.core.windows.net/",
    "queue": "https://<storage-account-name>.queue.core.windows.net/",
    "table": "https://<storage-account-name>.table.core.windows.net/",
    "web": "https://<storage-account-name>.z13.web.core.windows.net/"
  },
  "primaryLocation": "eastus",
  "provisioningState": "Succeeded",
  "resourceGroup": "<resource-group-name>",
  "secondaryEndpoints": null,
  "secondaryLocation": "westus",
  "sku": {
    "capabilities": null,
    "kind": null,
    "locations": null,
    "name": "Standard_GRS",
    "resourceType": null,
    "restrictions": null,
    "tier": "Standard"
  },
  "statusOfPrimary": "available",
  "statusOfSecondary": "available",
  "tags": {},
  "type": "Microsoft.Storage/storageAccounts"
}

Retrieve the storage account key:

$ StorageAccountKey=$(az storage account keys list \
  --account-name $StorageAccountName \
  | jq -r '.[0] | .value')

Retrieve Storage Endpoint URL:

$ StorageEndpointUrl=$(az storage account show \
  --name $StorageAccountName \
  --resource-group $ResourceGroup \
  | jq -r '.primaryEndpoints | .dfs')

You can verify the storage account key and endpoint at any time:

$ echo "Storage Account Key: $StorageAccountKey"
$ echo "Storage Endpoint URL: $StorageEndpointUrl"
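If you'd rather not depend on jq, the same values can be extracted with the CLI's built-in --query (JMESPath) option:

```shell
# jq-free alternative: use the Azure CLI's built-in JMESPath --query option
KeyQuery='[0].value'
StorageAccountKey=$(az storage account keys list \
  --account-name $StorageAccountName \
  --query "$KeyQuery" --output tsv)
StorageEndpointUrl=$(az storage account show \
  --name $StorageAccountName \
  --resource-group $ResourceGroup \
  --query 'primaryEndpoints.dfs' --output tsv)
```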

Create a fileshare:

$ az storage share create \
  --account-name $StorageAccountName \
  --account-key $StorageAccountKey \
  --name $FileShareName

Create a Synapse Workspace:

$ az synapse workspace create \
  --name $SynapseWorkspaceName \
  --resource-group $ResourceGroup \
  --storage-account $StorageAccountName \
  --file-system $FileShareName \
  --sql-admin-login-user $SqlUser \
  --sql-admin-login-password $SqlPassword \
  --location $Region

The output of the command should show the successful creation:

{
  "connectivityEndpoints": {
    "dev": "https://<synapse-workspace-name>.dev.azuresynapse.net",
    "sql": "<synapse-workspace-name>.sql.azuresynapse.net",
    "sqlOnDemand": "<synapse-workspace-name>-ondemand.sql.azuresynapse.net",
    "web": "https://web.azuresynapse.net?workspace=%2fsubscriptions%<subscription-id>%2fresourceGroups%2fS<resource-group-name>%2fproviders%2fMicrosoft.Synapse%2fworkspaces%<synapse-workspace-name>"
  },
  "defaultDataLakeStorage": {
    "accountUrl": "https://<storage-account-name>.dfs.core.windows.net",
    "filesystem": "<file-share-name>"
  },
  "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Synapse/workspaces/<synapse-workspace-name>",
  "identity": {
    "principalId": "<principal-id>",
    "tenantId": "<tenant-id>",
    "type": "SystemAssigned"
  },
  "location": "eastus",
  "managedResourceGroupName": "<managed-resource-group-name>",
  "name": "<synapse-workspace-name>",
  "provisioningState": "Succeeded",
  "resourceGroup": "<resource-group-name>",
  "sqlAdministratorLogin": "<admin-login>",
  "sqlAdministratorLoginPassword": "<admin-password>",
  "tags": null,
  "type": "Microsoft.Synapse/workspaces",
  "virtualNetworkProfile": null
}

After you have successfully created these resources, you should be able to go to the Azure Portal and navigate to the resource called $SynapseWorkspaceName within the $ResourceGroup resource group.

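If you prefer to verify from the CLI rather than the portal, a quick check along these lines should report the provisioning state (expected to read Succeeded once creation finishes):

```shell
# Optional: confirm the workspace from the CLI instead of the portal
State=$(az synapse workspace show \
  --name $SynapseWorkspaceName \
  --resource-group $ResourceGroup \
  --query 'provisioningState' --output tsv)
echo "Provisioning state: $State"
```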

 

What’s next?

You can now load data and experiment with it in Synapse Studio, create Spark or SQL pools and run analytics queries, connect to Power BI and visualize your data, and much more.

 

Stay tuned for the next articles in this series. Thanks for reading!

 

If this article was interesting to you, follow @lenadroid on Twitter.

Secure isolation guidance for Azure and Azure Government

This article is contributed. See the original author and article here.

One of the most common concerns for public sector cloud adoption is secure isolation among tenants when multiple customers' applications and data are stored on the same physical hardware, as described in our recent blog post on secure isolation. To provide customers with more detailed information about isolation in a multi-tenant cloud, Microsoft has published Azure guidance for secure isolation, which addresses common security and isolation concerns pertinent to cloud adoption. It also explores design principles and technologies available in Azure and Azure Government to help customers achieve their secure isolation objectives. The approach relies on isolation enforcement across compute, storage, and networking, as well as built-in user access control via Azure Active Directory and Microsoft's internal security assurance processes and practices for correctly developing logically isolated cloud services. Read more on our Azure Gov blog.

 

 

About the Author 

 


 

As Principal Program Manager with Azure Government Engineering, @StevanVidich is focused on Azure security and compliance. He publishes and maintains Azure Government documentation and works on expanding Azure compliance coverage.

 

Experiencing Data Ingestion Latency Issue in Azure portal for Log Analytics – 09/02 – Investigating

This article is contributed. See the original author and article here.

Initial Update: Wednesday, 02 September 2020 16:36 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers may experience intermittent data latency and incorrect alert activation for Heartbeat, Perf, and SecurityEvent in the East US region.

  • Work Around: None
  • Next Update: Before 09/02 20:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Saika


How To: Create a Windows Server 2019 CORE image for Microsoft Azure

This article is contributed. See the original author and article here.

I was asked to put together a short demo video about creating a custom image for deployment in Microsoft Azure the other week, to support some new content going into Microsoft Learn. Since this involves making an on-prem virtual machine first and then preparing it to upload into Azure, I figured I would make a new Windows Server 2019 Core image (the default deployment option) instead of a full desktop. I always found it strange that you don't have an option to deploy a Core installation of Windows Server from the Azure Marketplace.

 

Time to fix that.

 

Why Windows Server Core? Windows Server Core implementations are:

  • Smaller in disk footprint, so potentially cheaper to run in an Azure VM
  • Smaller in attack surface, since fewer binaries and fewer services running inside the VM
  • Less demand on resources to run workloads, so potentially cheaper to run in an Azure VM
  • More “remote friendly” than earlier implementations, with management tools, remote PowerShell, and remote RDP
  • Runs most workloads you might want to run on-prem or in Azure.

I thought this was going to be a simple process documented in a single doc. Little did I know that the info I needed was spread across three different official docs, plus some good old trial and error. To save you time, I've pulled everything together and laid out the main steps here, with links back to the source documents in case you want more detailed information.

 

The TL;DR of this process is the following:

  1. Build a Hyper-V VM image of Windows Server 2019 with a Core interface.
  2. Configure specific settings unique to uploading a Custom VM Image for Azure.
  3. Generalize your local VM image and shut it down.
  4. Upload the VHD into a new Azure Managed Disk in your Azure Subscription.
  5. Create a VM Image for deployment using the Azure Managed Disk.
  6. Deploy a new VM using the uploaded image.

From there, you can make a new VM from that custom uploaded image. Management-wise, the deployed image is compatible with Azure Bastion, just-in-time remote access, remote PowerShell, and PowerShell commands via the Azure Portal.

 

Let's get started!

 

Build a Hyper-V VM image of Windows Server 2019 with a Core interface.

This should be self-explanatory. You have a server running Hyper-V, and you can make a new Windows Server 2019 VM using the install media (ISO file) for Windows Server 2019. The default install experience gives you a Core install (i.e., no desktop experience), and you create a new password for the local administrator account to further configure the system. To keep things simple, I initially created a Generation 1 VM to do the install and, for the most part, kept the defaults for the base creation process.


 

I don’t know what it is, but I really like the simple logon screen of a Windows Server Core box – if I have to log on to the console at all. I need to do some tasks from the Hyper-V host before customizing the local VM, so I’ll shut it down for now.

 

Configure specific settings unique to uploading a Custom VM Image for Azure

For this example, I am taking this base image as is and making the recommended configuration changes per “Prepare a Windows VHD or VHDX to upload to Azure”. These include:

  • If you made your VM from the Hyper-V Create VM wizard, you probably have a Generation 1 VM with a dynamically expanding VHDX file. You NEED to convert this to a VHD file and change it from a dynamically expanding file to a FIXED hard drive size. Keep things simple and use the GUI console to do this – or follow the instructions in the document referenced above to go the PowerShell route.
    • With the VM shutdown, edit the VM settings and select the Hard Disk. Choose the EDIT button to manage the disk.
    • Select Convert to convert the disk. Select VHD for a max size of 2 TB, but we’re going to go smaller here.
    • Select Fixed Size and choose the appropriate size (I went with 126 GB)
    • Create a new name for the VHD as it makes a COPY of the disk.
  • Because you changed the disk from Dynamic to Fixed and it’s a new disk, you need to edit the settings of the VM to reference THIS new fixed-size disk in order to proceed. Once this is updated, boot the machine and log on as the local administrator account
  • From the command prompt – start up a PowerShell prompt to continue to prep this VM
  • Run the System File Checker utility

Sfc.exe /scannow

  • Run and install all Windows updates. I find it’s easiest to use SCONFIG to set Windows Update to run automatically and check for updates.


 

  • I can force an update check with option 6. In this case, I had three downloads/updates to process, which included a reboot.


 

At this point, the document goes through an extensive list of checks and settings you should review and implement in your base image to ensure a smooth deployment. I am not going to list them all off here, but refer you to the document linked above.

Note: You will get some errors depending on whether your image is domain joined or group policies are in place. I got a number of red error dumps from PowerShell commands, but they were expected since my VM is not domain joined.

 

OK – we’re ready to go, no turning back now.

 

Generalize your local VM image and shut it down

You have prepared your machine, set it up for optimal Azure compatibility, and tested it for remote connectivity. Time to generalize it with good old sysprep.exe. Log on to the box and change to the C:\Windows folder. You can save a bit of space (or a lot of space, if this image was an upgrade) by deleting the C:\Windows\Panther directory. Once that’s done, change into the C:\Windows\System32\Sysprep folder and run sysprep.exe.

Make sure you check the Generalize checkbox and choose to Shutdown instead of Reboot.


 

OK – you are all set for an UPLOAD to Azure now.

 

Upload VHD into a new Azure Managed Disk in your Azure Subscription

NOTE: I only ever use managed disks for my virtual machines now, since it saves me from having to architect a strategy around how many VM disks can be in each storage account before maxing out my throughput, or having issues with storage cluster failures. Just keep it simple and promise me you will always use Azure Managed Disks for your VMs.

 

You will need to have a resource group in Azure to store these VM images in, and you will want its location to be the same region where you will be using the image. I assume you are working on the same system where the VHD is located, or that you have copied it locally to your admin workstation before uploading it.

 

On this system, you will need the latest version of AzCopy v10 and the Azure PowerShell modules installed. We’re following the procedures outlined in the “Upload a VHD to Azure” document.

 

To upload the image, you first have to create an empty standard HDD managed disk, in your pre-created resource group, that is the same size as your soon-to-be-uploaded VHD. These example commands get your VHD’s size and set the configuration parameters required for creating the disk. For this to work, replace <fullVHDFilePath>, <yourdiskname>, <yourresourcegroupname>, and <yourregion> in the example below with your information.

 

$vhdSizeBytes = (Get-Item "<fullVHDFilePath>").length

$diskconfig = New-AzDiskConfig -SkuName 'Standard_LRS' -OsType 'Windows' -UploadSizeInBytes $vhdSizeBytes -Location '<yourregion>' -CreateOption 'Upload'

New-AzDisk -ResourceGroupName '<yourresourcegroupname>' -DiskName '<yourdiskname>' -Disk $diskconfig

 

 

In my example, the complete commands were:

 

$vhdSizeBytes = (Get-Item "C:\vms\ContosoVM2.vhd").length

$diskconfig = New-AzDiskConfig -SkuName 'Standard_LRS' -OsType 'Windows' -UploadSizeInBytes $vhdSizeBytes -Location 'eastus' -CreateOption 'Upload'

New-AzDisk -ResourceGroupName ContosoResourceGroup -DiskName ContosoVM2 -Disk $diskconfig

 

 

Next, you need to grant SAS access to the empty disk:

 

$diskSas = Grant-AzDiskAccess -ResourceGroupName ContosoResourceGroup -DiskName ContosoVM2 -DurationInSecond 86400 -Access 'Write'

$disk = Get-AzDisk -ResourceGroupName ContosoResourceGroup -DiskName ContosoVM2

 

 

Now upload the local VHD file to the Azure managed disk. Don’t forget to replace <fullVHDFilePath> with your local VHD file path:

 

AzCopy.exe copy "<fullVHDFilePath>" $diskSas.AccessSAS --blob-type PageBlob

 

 

Once the AzCopy command completes, you need to revoke the SAS access in order to change the state of the managed disk and enable it to function as a source image for deployment.

 

Revoke-AzDiskAccess -ResourceGroupName ContosoResourceGroup -DiskName ContosoVM2
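For reference, the same create/grant/upload/revoke sequence can also be sketched with the Azure CLI instead of Azure PowerShell. Treat this as an unverified sketch: the resource names follow the example above, the VHD path placeholder is yours to fill in, and `stat -c%s` assumes GNU coreutils:

```shell
# CLI-equivalent sketch of the managed-disk upload flow (names from the
# example above; <fullVHDFilePath> is a placeholder you must replace).
RG='ContosoResourceGroup'
DISK='ContosoVM2'
VHD='<fullVHDFilePath>'

# Create an empty managed disk sized for the upload
az disk create --resource-group $RG --name $DISK \
  --for-upload --upload-size-bytes $(stat -c%s "$VHD") \
  --sku standard_lrs --os-type Windows

# Grant a writable SAS, copy the VHD, then revoke access
SAS=$(az disk grant-access --resource-group $RG --name $DISK \
  --access-level Write --duration-in-seconds 86400 \
  --query 'accessSas' --output tsv)
azcopy copy "$VHD" "$SAS" --blob-type PageBlob
az disk revoke-access --resource-group $RG --name $DISK
```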

 

 

Create a VM Image for deployment using the Azure Managed Disk

OK – final stretch. You’ve made a Windows Server 2019 Core image locally, prepared it for use in Azure, generalized it, and uploaded it into your Azure subscription as a managed disk. Now you have to turn that managed disk into a VM image that can be deployed. We’re following our third document on this, called “Upload a generalized VHD and use it to create new VMs in Azure”.

  • You need to get the information about the managed disk you just created. In my case, it’s in the ContosoResourceGroup and has the name ContosoVM2. The command to run and populate the variable is:

 

$disk = Get-AzDisk -ResourceGroupName ContosoResourceGroup -DiskName ContosoVM2

 

 

  • Set some more variables: the location where you will be using the image, the image name, and the resource group it resides in. In my case I used the following:

 

$location = 'East US'
$imageName = 'ContosoVM2Image'
$rgName = 'ContosoResourceGroup'

 

 

  • Now create the image configuration

 

$imageConfig = New-AzImageConfig -Location $location
$imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsState Generalized -OsType Windows -ManagedDiskId $disk.Id

 

 

  • FINALLY – create the image object in your subscription for deployment from the portal, PowerShell, Azure CLI, or an Azure Resource Manager template.

 

$image = New-AzImage -ImageName $imageName -ResourceGroupName $rgName -Image $imageConfig

 

 

And with that – we are finally DONE.

 

If you open up the Azure portal and explore the resource group where you uploaded the VHD, you should see the uploaded managed disk and an image definition that you can use to deploy new VMs.

 


In this blog post, a custom local VM running a Windows Server 2019 Core install was customized, generalized, uploaded, and converted into an image for use in Azure. Because I took the time to build my own custom image and upload it into my Azure subscription, I can now deploy as many Windows Server 2019 Core boxes as I need for my projects.
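Deployment from the uploaded image can also be scripted. As a hedged sketch with the Azure CLI (resource and image names are from the example above; the VM name and admin credentials are hypothetical placeholders):

```shell
# Sketch: deploy a VM from the custom image created above.
# VM name and admin credentials are placeholders -- pick your own.
RG='ContosoResourceGroup'
IMAGE='ContosoVM2Image'
VMNAME='corevm01'

az vm create \
  --resource-group $RG \
  --name $VMNAME \
  --image $IMAGE \
  --admin-username azureadmin \
  --admin-password '<a strong password>'
```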

Change the Elastic Pool Storage Size using Azure Monitor and Azure Automation

This article is contributed. See the original author and article here.

Introduction to Elastic Pool:
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-pool

In this article, we will set up an Azure Monitor alert to scale up the storage limit of a SQL elastic pool on Azure. Please read more about elastic pools in the article above.

We will divide this into three parts

i.   Setting up an Automation Runbook and Webhook
ii.  Setting up an Alert Action Group
iii. Setting up an Alert under Azure Monitor.

We will not go deep into Azure Automation or Azure Monitor, as they are off topic; we will only cover the steps for setting up this autoscaling of storage. Here are some articles that should bring you up to speed.

Create an Azure Automation account [in this case, we need to use a Run As account]

https://docs.microsoft.com/en-us/azure/automation/automation-create-standalone-account

Using Az modules in Azure Automation Account

https://docs.microsoft.com/en-us/azure/automation/az-modules

Azure Monitor

https://docs.microsoft.com/en-us/azure/azure-monitor/

Azure Monitor Overview

https://docs.microsoft.com/en-us/azure/azure-monitor/overview

Monitoring Azure Service

https://docs.microsoft.com/en-us/azure/azure-monitor/insights/monitor-azure-resource

 

Setting Up Automation Modules

By default, you cannot run both Az and AzureRM modules in the same Automation account, as explained here:

https://docs.microsoft.com/en-us/azure/automation/az-modules

So, we will import the Az modules into the Automation account, not the AzureRM modules. When you create an Automation account, a number of modules are imported by default; we will not touch them, as we will use the Az modules.

Here is what you need to do.

 

  1. Go to Azure Automation Account.
  2. Click on Modules under Shared resource.


  3. Click on Browse gallery.

  4. Search for Az.Accounts and Click on Import.


  5. Likewise, search for Az.Sql and import it too [once the Az.Accounts import is complete; otherwise, it may fail].
  6. Let the modules get imported.
  7. Once the modules are imported, you will see their status as Available.

  8. Beyond this, you don’t need to add any modules, as we will use only SQL-related cmdlets – unless you are using this Automation account for other purposes.
  9. Next, we need to set up a runbook. Navigate to Runbooks under Process Automation in the Automation account.


  10. Click on Create a runbook, provide the details, and click OK.


  11. In the runbook edit pane, copy and paste the following script:

    #Author: Shashanka Haritsa
    #Date: 19th March 2020
    <#WARNING: The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind.
    Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no
    event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages. #>

    # Read webhook data
    param
    (
        [Parameter (Mandatory = $false)]
        [object] $WebhookData
    )

    # If the runbook was called from a webhook, $WebhookData will not be null.
    if ($WebhookData) {
        # Authenticate to Azure first, using the Run As account
        $getrunasaccount = Get-AutomationConnection -Name 'AzureRunAsConnection'
        Add-AzAccount -ServicePrincipal -ApplicationId $getrunasaccount.ApplicationId -CertificateThumbprint $getrunasaccount.CertificateThumbprint -Tenant $getrunasaccount.TenantId
        # Authentication complete

        $WebhookData.RequestBody
        $Converteddata = $WebhookData.RequestBody | ConvertFrom-Json
        $resourcegroupname = $Converteddata.data.context.resourceGroupName
        $resourceName = $Converteddata.data.context.resourceName
        $getservername = (($Converteddata.data.context.resourceId) -split ('/'))[8]

        # Read the elastic pool's current storage and double it
        $GetElasticPoolStorage = (Get-AzSqlElasticPool -ElasticPoolName $resourceName -ResourceGroupName $resourcegroupname -ServerName $getservername).StorageMB
        $GetElasticPoolStorage
        # I am increasing my storage by 100% for my Standard plan, so I multiply by 2;
        # change this factor according to your requirement
        $NewStorage = ($GetElasticPoolStorage * 2)

        # Set the new storage limit
        Set-AzSqlElasticPool -ElasticPoolName $resourceName -ResourceGroupName $resourcegroupname -StorageMB $NewStorage -ServerName $getservername
    }
    Else {
        Write-Output "No webhook data found. Exiting."
    }

  12. Click on Save, then click on Publish.
  13. Now we need to create a webhook. Under the same runbook, click on Webhooks.


  14. Click on Add Webhook.


  15. Under Create Webhook, give it a name and copy the URL to a safe place from which you can retrieve it later. [NOTE: This URL cannot be retrieved after creation, so please keep it safe.] Click OK, then click Create.


  16. Once the Webhook is created, you will see that under the Webhooks section.

 

This completes the first part, where we created the Automation runbook, set up the modules, and created a webhook.
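Before wiring up the alert, you can smoke-test the webhook by POSTing a minimal payload shaped like the fields the runbook reads (resourceGroupName, resourceName, and a resourceId whose ninth '/'-separated segment is the server name). The URL and all resource names below are placeholders:

```shell
# Smoke-test sketch for the runbook webhook; all names are placeholders.
WebhookUrl='<your webhook URL>'
cat > payload.json <<'EOF'
{
  "data": {
    "context": {
      "resourceGroupName": "myResourceGroup",
      "resourceName": "myElasticPool",
      "resourceId": "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/myServer/elasticPools/myElasticPool"
    }
  }
}
EOF
curl -s -X POST -H 'Content-Type: application/json' -d @payload.json "$WebhookUrl"
```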

 

Setting up an Alert Action Group

 

In this section, we will create an Action Group that we will use with an Alert.

Please follow the steps below to create an Action Group

  1. Log in to the Azure Portal [if you haven’t already]
  2. Navigate to Azure Monitor → Alerts and click on Manage actions


  3. Next, click on Add action group and fill in the information as needed.
  4. Under the Action Name, provide a name as desired and under Action Type, select Webhook


  5. A Webhook URI screen pops up on the right-hand side; paste the webhook URL we copied during webhook creation under the Automation account, and click OK.


  6. Click OK again on the Add action group screen. This will create an action group.

This completes the creation of Action Group.

 

Setting up an Alert under Azure Monitor

In this part, we will create an alert that triggers our runbook whenever the used space exceeds a threshold. Please follow the steps below.

 

  1. Navigate to Azure Monitor
  2. Click on Alerts and Click on New alert rule
  3. Under the resource, click on Select


     

  4. Filter by subscription, resource type (SQL elastic pools), and location, then select the elastic pool of interest. This should populate the resource.


     

  5. Now, click on Add under Condition. Select the signal type as Metrics and the monitor service as Platform.
  6. Select the signal name of interest; in this case, we will select Data space used percent.


     

  7. Once you select the metric, you need to add the alert logic. Let’s say you would like to trigger an alert when the average percentage of used space over the last hour reaches 70; we will set it up accordingly.


     

What does this mean? We are checking the average data space used percentage over the last hour, and we will evaluate this condition every 5 minutes as part of the alert.

  8. Click on Done, then click on Add under ACTION GROUPS and select the action group you created earlier.
  9. Now provide the alert details and a description, and select the severity of interest. Once you are happy with the details provided, click Create alert rule.

That covers all three configurations. Whenever the data space used percentage on the elastic pool rises above 70%, an alert will be triggered, and the runbook invoked through the webhook will resize the storage on the elastic pool.
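The same alert rule can also be sketched from the Azure CLI rather than the portal. Note this is an unverified sketch: all names are placeholders, and the metric name for "Data space used percent" on elastic pools is an assumption here; verify it with `az monitor metrics list-definitions` against your pool before relying on it:

```shell
# Sketch: metric alert on an elastic pool via the CLI.
# All names are placeholders; 'storage_percent' is an ASSUMED metric name --
# confirm with: az monitor metrics list-definitions --resource "$PoolId"
PoolId='/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server>/elasticPools/<pool>'
az monitor metrics alert create \
  --name 'ElasticPoolStorageAlert' \
  --resource-group '<rg>' \
  --scopes "$PoolId" \
  --condition "avg storage_percent > 70" \
  --window-size 1h \
  --evaluation-frequency 5m \
  --action '<action-group-resource-id>'
```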

 

IMPORTANT NOTE:

  • The above sample document is for reference purposes only and is provided AS IS without warranty of any kind.
  • The author is not responsible for any damage or impact on production; the entire risk arising out of the use or performance of this sample remains with you.
  • For the script in the Automation runbook setup, we have taken a Standard plan [elastic pool] into account and simply doubled the storage based on our requirement; if your requirement is different, evaluate the logic for increasing the storage and amend the script accordingly.