How to set up new password for the cluster certificate to connect to SFC in the VSTS Pipeline

This article is contributed. See the original author and article here.

How to set up new password for the cluster certificate to connect to Service Fabric Cluster in the VSTS Pipeline


 


This article helps you set up a new password for the cluster certificate, which you can then use in a release pipeline to deploy your application to a Service Fabric cluster.


 


Scenario: Supplying the base-64 encoding of a client certificate file that is NOT password protected when setting up the "New Service Fabric Connection" in the release pipeline will lead to a deployment failure.


 


Below is a sample of the error:


"2020-10-15T20:58:45.3232533Z ##[debug]System.Management.Automation.RuntimeException: An error occurred attempting to import the certificate. Ensure that your service endpoint is configured properly with a correct certificate value and, if the certificate is password-protected, a valid password. Error message: Exception calling 'Import' with '3' argument(s): 'The specified network password is not correct.'"


 


Steps to set a new password for the cluster certificate:



  1. Download the relevant cluster certificate from the Key Vault to your local machine. 


Azure Portal -> Key Vaults -> Certificates -> Select the cluster certificate.




 



  2. Install the certificate to the local machine store, marking the key as exportable. 




 



  3. To set a new password, use the following PowerShell script:

# a. Retrieve the certificate object from the certificate store
$SelfSignedCert = Get-ChildItem Cert:\LocalMachine\My -DnsName "<clustername>.<clusterregion>.cloudapp.azure.com"

# b. Export the certificate to a PFX file protected with the new password
$NewPassword = ConvertTo-SecureString -String "<new password>" -Force -AsPlainText
Export-PfxCertificate -Cert $SelfSignedCert -FilePath "C:\Temp\SelfSignedCert.pfx" -Password $NewPassword


Note: Now that the client/cluster certificate is password protected, you can convert it to its base-64 encoding (step 4) to use in the release pipeline.


 



  4. Convert the certificate into its base-64 encoded representation using PowerShell. 


[System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes("C:\Temp\SelfSignedCert.pfx"))
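If you want to double-check the conversion outside PowerShell, the same base-64 step can be sketched in Python; the sample bytes below are a stand-in for a real PFX file, not actual certificate content:

```python
import base64

def pfx_to_base64(pfx_bytes: bytes) -> str:
    # Base-64 encode the raw PFX bytes, as expected by the
    # certificate field of the "New Service Fabric Connection".
    return base64.b64encode(pfx_bytes).decode("ascii")

# Round-trip check with placeholder bytes standing in for a PFX file.
sample = b"\x30\x82\x01\x00dummy-pfx-content"
encoded = pfx_to_base64(sample)
assert base64.b64decode(encoded) == sample
```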


 


Please refer to the article below to deploy an application with CI/CD to a Service Fabric cluster:


Reference: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/service-fabric/service-fabric-tutorial-deploy-app-with-cicd-vsts.md#create-a-release-pipeline
 

Using Runbooks to set Azure Alerts Status


 


Background/Scenario


Azure Alerts can be used to proactively notify you when important conditions are found in your monitoring data.  After setting up either metric alerts or log alerts for your workloads, specifically IaaS workloads, there may be times when you need to disable those alerts during a maintenance window. 


 


Depending on the size of your environment and the number of alerts you’ve created, it might be quite a chore to go through each one to disable/enable.


The following demonstrates how to set up an Azure Automation Runbook to quickly set the status of your IaaS alerts to either Enabled or Disabled via a webhook.  The webhook allows you to execute the Azure Automation Runbook from anywhere, such as an on-premises workstation, to set the alert status.  The runbook also takes advantage of Azure Resource Graph to search for alerts across all of the available subscriptions.
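As a sketch of the Resource Graph idea, the runbook can build a Kusto query that finds metric alert rules whose names contain a given server name (which is why the naming convention later in this article matters). The exact query lives in the runbook script; the shape below is an illustrative assumption:

```python
def build_alert_query(server_name: str) -> str:
    # Kusto query for Azure Resource Graph: find metric alert rules
    # whose rule name contains the target server name.
    return (
        "resources "
        "| where type == 'microsoft.insights/metricalerts' "
        f"| where name contains '{server_name}'"
    )

query = build_alert_query("web01")
```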


 


Requirements



 


Configuration/Setup


 


Step 1: Create a metric alert(s) for your IaaS Server(s) based on CPU Usage


If you already have an alert(s) defined with the server name in the alert rule, skip this step.



  1. Navigate to Alerts

  2. New alert rule

  3. Select resource

    • Select a virtual machine






 



  4. Select a condition based on Percentage CPU




 



  • Set the threshold value and leave the other options at their default values




 



  5. Select or create an Action Group (required)

  6. Fill in the remaining Alert rule details and include the server name in the Alert rule name




 


 


(For Step 2, choose either 2a or 2b for creating/deploying an Automation Account)


Step 2a: Create an Automation Account – ARM Template Method



  1. Deploying this ARM template [GitHub] will include the following:

    1. Azure Automation Account

    2. Import of PowerShell Modules (Az.Accounts, Az.Monitor, Az.ResourceGraph)

    3. Runbook (SetAzAlertsStatus-Webhook)

    Note: Creation of the Automation Run As account is not supported when you’re using an ARM template.



  2. Create a Run As account in Azure portal

    1. Grant the run as account, at a minimum, the ability to manage Alerts. By default, the AAA run as account is granted contributor rights at the subscription it’s deployed into. In production, granting access to the AAA run as account at a Management Group is recommended.




 


Step 2b: Create an Automation Account – Manual Method



  1. Create an Azure Automation Account (AAA)

  2. Grant the AAA run as account, at a minimum, the ability to manage Alerts. By default, the AAA run as account is granted contributor rights at the subscription it’s deployed into. In production, granting access to the AAA run as account at a Management Group is recommended.

  3. Import PowerShell Gallery modules (Az.Accounts, Az.Monitor, Az.ResourceGraph) into the AAA

    1. Under Shared Resources, select Modules.

    2. Select Browse gallery, and then search the Gallery for a module.

    3. Select the module to import, and select Import.

    4. Select OK to start the import process.



  4. Create an Azure Automation runbook (PowerShell Runbook)

    1. In the Create an Azure Automation runbook article, at step #6, copy SetAzAlertsStatus-Webhook.ps1 from GitHub and paste it into the runbook.




 


Step 3: Create a Webhook for your Runbook



  1. Create a webhook for your Runbook.

    1. From the Runbooks page in the Azure portal, click the runbook that the webhook starts to view the runbook details. Ensure that the runbook Status field is set to Published.

    2. Click Webhook at the top of the page to open the Add Webhook page.

    3. Click Create new webhook to open the Create Webhook page.

    4. Fill in the Name and Expiration Date fields for the webhook and specify if it should be enabled. See Webhook properties for more information about these properties.

    5. Click the copy icon or press Ctrl+C to copy the URL of the webhook. Then record it in a safe place.

       



      Note: Save your webhook URL. Once you create the webhook, you cannot retrieve the URL again.



    6. Click Parameters, leave it blank, press OK.

       



    7. Click Create to create the webhook.




 


Step 4: Test your Automation Account Runbook via webhook



  1. Download the PowerShell script SetAzAlertsStatus-Webhook-Wrapper.ps1 and save it to your computer.

  2. Edit the script and update line 32 with your webhook URL:

    1. $uri = "<runbook webhook URL you saved earlier>"



  3. Execute the PowerShell script from your local computer.
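The wrapper script essentially POSTs a JSON body to the webhook URL; the exact body schema is defined by SetAzAlertsStatus-Webhook.ps1, so the field names below are hypothetical, for illustration only:

```python
import json
from urllib import request

def build_webhook_request(uri: str, servers: list, status: str) -> request.Request:
    # Assumed body shape: which servers' alerts to touch and the target status.
    body = json.dumps({"Servers": servers, "AlertStatus": status}).encode("utf-8")
    return request.Request(uri, data=body,
                           headers={"Content-Type": "application/json"},
                           method="POST")

req = build_webhook_request("https://example.invalid/webhook", ["web01"], "Disabled")
# request.urlopen(req) would trigger the runbook; omitted here.
```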




 


 


Conclusion


With an Alert naming convention that includes your server name, this method works very well for quickly enabling or disabling Azure alerts.


I hope you have found this article helpful and thank you for taking the time to read this post.


 


References



 


Disclaimer


The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

CloudEvents APIs and Azure Event Grid


[As of October 31, 2020]


The original publication is on Medium.


https://logico-jp.medium.com/use-cloudevents-apis-to-interact-with-azure-event-grid-32dc63518af3


 


The Japanese editions are listed below.


https://logico-jp.io/2020/09/06/use-cloudevents-schema-in-azure-event-grid/
https://logico-jp.io/2020/10/23/tips-for-using-event-grid-sdk-to-handle-cloudevents/
https://logico-jp.io/2020/10/30/using-cloudevents-apis-to-post-events-to-azure-event-grid/
https://logico-jp.io/2020/10/31/using-cloudevents-apis-to-create-an-application-which-subscribe-an-azure-event-grid-topic/


 


Introduction


Azure Event Grid supports CloudEvents 1.0, and the Azure Event Grid client library also supports sending and receiving events in the form of CloudEvents.


 


Use CloudEvents v1.0 schema with Event Grid



Introducing the new Azure Event Grid Client Libraries with CloudEvents v1.0 Support


https://devblogs.microsoft.com/azure-sdk/event-grid-client-libraries/


 


If Azure Event Grid is the only system that consumes and posts cloud events in your environment, the Azure Event Grid SDK should be chosen. However, if several systems that consume and post cloud events already exist in your environment and you plan to introduce Azure Event Grid, you may look for ways to interact with Azure Event Grid using industry-standard APIs. In this article, I describe how to interact with Azure Event Grid using the CloudEvents APIs.


 


Prerequisites and basic information


 


What is CloudEvents?


If you are not familiar with CloudEvents, please check the following URL.


 


CloudEvents


https://cloudevents.io/


 


What format does Azure Event Grid support?


As of now, Azure Event Grid supports Structured Content mode only (Binary Content mode is not supported). We have to follow the JSON Event Format specification when creating events.


 


JSON Event Format for CloudEvents – Version 1.0


https://github.com/cloudevents/spec/blob/v1.0/json-format.md
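As a sketch, a structured-mode event is a single JSON object carrying the required context attributes alongside the data; the attribute values below are illustrative:

```python
import json

# Minimal CloudEvents 1.0 structured-mode envelope (illustrative values).
event = {
    "specversion": "1.0",
    "id": "A234-1234-1234",
    "source": "io/logico-jp/source",
    "type": "io.logico-jp.ExampleEventType",
    "datacontenttype": "application/json",
    "data": {"message": "hello"},
}

# The spec makes specversion, id, source, and type mandatory.
REQUIRED = {"specversion", "id", "source", "type"}

def is_valid_structured_event(evt: dict) -> bool:
    return REQUIRED.issubset(evt)

serialized = json.dumps(event)  # sent with Content-Type: application/cloudevents+json
```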


 


 

What languages and SDKs are available?


CloudEvents SDKs are provided in several languages. In this article, the sample applications are created with the Java APIs for CloudEvents. The JSON EventFormat implementation with Jackson and the HTTP Protocol Binding APIs for Jakarta RESTful Web Services allow us to create applications more easily than using the core APIs alone.


 


Java SDK for CloudEvents API
https://github.com/cloudevents/sdk-java


 


How do we post events to Azure Event Grid via CloudEvents APIs?


When posting events to Azure Event Grid through CloudEvents APIs, the following URL is helpful.


 


Quickstart: Route custom events to web endpoint with the Azure portal and Event Grid
https://docs.microsoft.com/azure/event-grid/custom-event-quickstart-portal


 


According to this document, we can post events to an Azure Event Grid topic with the following URL (an access key or Shared Access Signature is required). We can get the access key in the Azure Portal or via the Azure CLI.


 


https://{topic-endpoint}?api-version=2018-01-01

 


A Shared Access Signature is similar to an access key, but it can be configured with an expiration time. It might be suitable if access restriction to a topic or domain is required. The following URL describes how to create a Shared Access Signature.


 


Creating a shared access signature


https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventgrid/azure-messaging-eventgrid/README.md#creating-a-shared-access-signature


 

Send CloudEvents to Azure Event Grid



 



In this part, a REST client application that uses the CloudEvents APIs is created in order to post events to an Azure Event Grid topic. The Azure Event Grid Viewer application verifies and shows these events. This viewer application is described in the following URL.


 


Quickstart: Route custom events to web endpoint with the Azure portal and Event Grid
https://docs.microsoft.com/azure/event-grid/custom-event-quickstart-portal


 


By following the document above, you can configure the Event Grid topic. No special configuration is required.


 


Dependencies


As this client application requires JAX-RS related modules, the following dependencies should be added to pom.xml.


 

<!-- for CloudEvents API -->
<dependency>
  <groupId>io.cloudevents</groupId>
  <artifactId>cloudevents-http-restful-ws</artifactId>
  <version>2.0.0-milestone3</version>
</dependency>
<dependency>
  <groupId>io.cloudevents</groupId>
  <artifactId>cloudevents-json-jackson</artifactId>
  <version>2.0.0-milestone3</version>
</dependency>

<!-- for JAX-RS -->
<dependency>
  <groupId>org.glassfish.jersey.core</groupId>
  <artifactId>jersey-client</artifactId>
  <version>3.0.0-M6</version>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.inject</groupId>
  <artifactId>jersey-hk2</artifactId>
  <version>3.0.0-M6</version>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.media</groupId>
  <artifactId>jersey-media-json-jackson</artifactId>
  <version>3.0.0-M6</version>
</dependency>
<dependency>
  <groupId>org.glassfish</groupId>
  <artifactId>jakarta.json</artifactId>
  <version>2.0.0-RC3</version>
</dependency>
<dependency>
  <groupId>jakarta.ws.rs</groupId>
  <artifactId>jakarta.ws.rs-api</artifactId>
  <version>3.0.0-M1</version>
</dependency>
<dependency>
  <groupId>jakarta.json</groupId>
  <artifactId>jakarta.json-api</artifactId>
  <version>2.0.0-RC3</version>
</dependency>

 


 


Create events using CloudEvents APIs


The CloudEventBuilder::v1() method allows us to create events. JSON is used as the format of the custom data, and we use the withDataContentType() method to specify application/json as the Content-Type.


 

JsonObject jsonObject = Json.createObjectBuilder()
                            .add("message", "Using CloudEvents.io API to send CloudEvents!!")
                            .build();
 
CloudEvent ce = CloudEventBuilder.v1()
        .withId("A234-1234-1234")
        .withType("io.logico-jp.ExampleEventType")
        .withSource(URI.create("io/logico-jp/source"))
        .withTime(OffsetDateTime.now(ZoneId.ofOffset("UTC", ZoneOffset.UTC)))
        .withDataContentType(MediaType.APPLICATION_JSON)
        .withData(jsonObject.toString().getBytes(StandardCharsets.UTF_8))
        .build();

 


 


Serialization


Serialization of the created JSON-formatted events is required. To do so, we use the "JSON EventFormat implementation with Jackson" APIs. With this, the steps for creating a client application are complete.


 

EventFormat format = EventFormatProvider
        .getInstance()
        .resolveFormat(JsonFormat.CONTENT_TYPE);
 
byte[] serialized = format.serialize(ce);

 


 


Create REST Client


We can follow the typical way of creating a REST client; no special configuration is required. The access key of the Event Grid topic should be set in an HTTP header. Note that application/cloudevents+json, not application/json, should be set as the Content-Type.




MultivaluedMap<String, Object> headers = new MultivaluedHashMap<>();
headers.add("aeg-sas-key", AEG_KEY);
Response response = ClientBuilder.newClient().target(AEG_ENDPOINT)
        .path("/api/events")
        .queryParam("api-version", "2018-01-01")
        .request("application/cloudevents+json")
        .headers(headers)
        .post(Entity.entity(serialized, "application/cloudevents+json"));
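Independent of the Java client above, the same request can be sketched language-agnostically. The endpoint and key values below are placeholders; the aeg-sas-key header, the api-version query parameter, and the Content-Type come from the client code above:

```python
# Sketch of the HTTP request the client above sends (placeholder values).
AEG_ENDPOINT = "https://<topic-name>.<region>-1.eventgrid.azure.net"
AEG_KEY = "<topic access key>"

url = f"{AEG_ENDPOINT}/api/events?api-version=2018-01-01"
headers = {
    "aeg-sas-key": AEG_KEY,                          # topic access key
    "Content-Type": "application/cloudevents+json",  # structured mode, not plain JSON
}
```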




 


Receive CloudEvents through Azure Event Grid



 



In this part, a JAX-RS application is created to subscribe to the Event Grid topic. As Azure Event Grid sends events via webhook, the JAX-RS application requires a POST endpoint to listen for events.


The Event Grid topic we use was already configured in the previous section (precisely, the Event Grid topic should have already been configured).


 


Dependencies


In this case, Helidon MP is chosen to create the JAX-RS application. Needless to say, you can freely choose any development framework.


 


Helidon Project


https://helidon.io/


 


This application depends on the following modules.


 

<!-- for CloudEvents API -->
<dependency>
  <groupId>io.cloudevents</groupId>
  <artifactId>cloudevents-http-restful-ws</artifactId>
  <version>2.0.0-milestone3</version>
</dependency>
<dependency>
  <groupId>io.cloudevents</groupId>
  <artifactId>cloudevents-json-jackson</artifactId>
  <version>2.0.0-milestone3</version>
</dependency>

 




 



Create an endpoint


We can create a JAX-RS application without special configuration. As Azure Event Grid supports Structured Content mode only, the event format is JSON, so the sample application waits for events using JsonObject. The EventFormat::deserialize() method is used for deserialization of events.




@Path("/updates")
@POST
public Response receiveEvent(Optional<JsonObject> obj) {
        if(obj.isEmpty()) return Response.noContent().status(Response.Status.OK).build();

    EventFormat format = EventFormatProvider
            .getInstance()
            .resolveFormat(JsonFormat.CONTENT_TYPE);

    CloudEvent ce = format.deserialize(obj.get().toString().getBytes(StandardCharsets.UTF_8));
    JsonObject customData = JsonUtil.toJson(new String(ce.getData())).asJsonObject();
    // output to console
    System.out.println("Received JSON String -- " + obj.get().toString());
    System.out.println("Converted to CloudEvent -- " + ce.toString());
    System.out.println("Data in CloudEvent -- " + customData.toString());
    return Response.noContent().status(Response.Status.ACCEPTED).build();
}

 





Configure OPTIONS method for enabling webhook


When configuring Azure Event Grid integration through a webhook, the subscriber (i.e. this JAX-RS application) has to respond to Azure Event Grid via the OPTIONS method.




@Path("/updates")
@OPTIONS
public Response isWebhookEnabled() {
    return Response.ok()
            .allow("GET", "POST", "OPTIONS")
            .header("Webhook-Allowed-Origin","eventgrid.azure.net")
            .build();
}


 



Create Docker container and deploy to Azure App Service


After these steps are completed, we build the JAX-RS application, containerize it, and deploy it on Azure App Service.


 


Test


The following events are posted to Azure Event Grid via the client application.


 


 

[{
  "specversion": "1.0",
  "id": "A234-1234-1234",
  "source": "io/logico-jp/source",
  "type": "io.logico-jp.ExampleEventType",
  "datacontenttype": "application/json",
  "time": "2020-10-31T13:54:34.308619Z",
  "data": {
    "message": "Using CloudEvents.io API to send CloudEvents!!"
  }
},
{
  "specversion": "1.0",
  "id": "A234-1234-1234",
  "source": "io/logico-jp/source",
  "type": "io.logico-jp.ExampleEventType",
  "datacontenttype": "application/json",
  "time": "2020-10-31T13:54:26.082221Z",
  "data": {
    "message": "Using CloudEvents.io API to send CloudEvents!!"
  }
}]

 


 




We can observe in the Azure Portal that each event was successfully delivered to each subscription.




 



Azure Event Grid Viewer also shows delivered events.



And from the JAX-RS application side, we can observe each delivered event in the App Service console log. Three log lines appear for each event.




 



Conclusion


The CloudEvents APIs allow us to post structured events to Azure Event Grid, and to handle structured events delivered from Azure Event Grid. The CloudEvents SDKs support various languages; in Java especially, if you are familiar with JAX-RS and Jackson, you can easily create applications with these APIs.


If Azure Event Grid were the only system that consumes and posts cloud events in your environment, the Azure Event Grid SDK would be the best choice. However, if Azure Event Grid is only one of several services that consume and post cloud events, industry-standard APIs are often more suitable than the Azure Event Grid SDK.


 


I hope this article is helpful for you.

Announcing Project and Roadmap apps for Microsoft Teams




 


We are pleased to announce the release of the Project and Roadmap apps in Microsoft Teams. Connecting directly to Project from within Teams has been one of the major requests from Project users, and these apps will make it easy to manage, track, and collaborate on all aspects of a team’s project in one place. This brings content and conversation side-by-side in one integrated experience.


 


Team members can create new projects or roadmaps, or open existing ones, in Microsoft Teams and keep communications within the context of work and collaboration within Office 365. The Project and Roadmap apps can be added as tabs in any channel by selecting the “+” icon at the top of a channel. Anyone who has access to that channel can also access that tab.


 


Microsoft Teams  Microsoft Project


Today, each one of us has become a project manager. To stay on top of the ever-shifting requirements of our jobs, we need tools that are simple yet robust enough to support any requirement, flexible enough to support any project type, and, most importantly, easy enough to collaborate with anyone no matter where they are or what device they are using.


 


The Project app in Teams helps you tackle anything from small projects to large initiatives and is designed for just about any role, skill level, or project type. You can access the features and capabilities of the Project for the web experience, such as the automated scheduling engine to set effort, duration, and resources, from inside Teams.




 


 


Microsoft Teams  Roadmap


If your group runs multiple projects at the same time and needs visibility across all the work being done, Roadmap provides a visual and interactive way to connect these projects and show their status in a transparent way across the organization.


 


The Roadmap – Microsoft Project app will give you a cross-functional, big picture view of the work that is most important to you. You can create a consolidated timeline view of projects from Microsoft Project and Azure Boards and plan larger initiatives across all of them – complete with key dates and milestones – so that all the work is visible. 




 


Note: All Office 365 users will be able to view Projects/Roadmaps shared within Teams in a read-only mode. Users with appropriate Project for the Web licenses to create and edit Projects/Roadmaps will be able to do the same from within Teams as well. Learn more about Project for the Web licenses here.


 


If you want to learn more, see Use Project or Roadmap in Microsoft Teams. Next, notifications in Teams will be added so that users can see what’s important to them within Project and Roadmap in their team’s activity feed.


 


We love hearing from you. Please tell us how we can improve your Project experience in Teams through our UserVoice site. You can also leave a comment below to engage with us directly to provide feedback.


 


Keep checking our Tech Community site for the latest feature releases and Project news.


 



ADF Adds Cached Lookups to Data Flows


 


ADF has added the ability to cache your data streams to a sink that writes to a cache instead of a data store, allowing you to implement what ETL tools typically refer to as cached lookups or unconnected lookups.


 


The ADF Data Flow Lookup Transformation performs a left outer join with a series of options to handle multiple matches and tags rows as lookup found / no lookup found. What the cached lookup enables is a mechanism to store those lookup streams in caches and access them from your expressions.


 


Many powerful use cases are enabled by this new ADF feature: you can now look up reference data stored in cache, via key lookups with different values, multiple times, without the need for separate Lookup transformation calls. You can simply use the lookup() function to grab additional specific columns, as in lookup().myColumn1.


 


Additionally, you can use the new function outputs() to grab an entire matrix of rows and columns from cache and iterate through an array of rows, picking your specific columns to reference.
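For example, assuming a cache sink named cacheSink keyed on a ProductID column (both names hypothetical), the expression syntax looks like this:

```
cacheSink#lookup(ProductID).ProductName    /* keyed lookup against the cache      */
cacheSink#outputs()[1].ProductName         /* first row of the entire cached output */
```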


 

GitHub Actions, DNS & SSL Certificate on Azure Functions


Throughout this series, I’m going to show how an Azure Functions instance can map APEX domains, add an SSL certificate and update its public inbound IP address to DNS.


 



 


In my previous post, we walked through how to link an SSL certificate issued by Let’s Encrypt with a custom APEX domain. Throughout this post, I’m going to discuss how to automatically update the A record of a DNS server when the inbound IP address of the Azure Functions instance is changed, and how to update the SSL certificate through a GitHub Actions workflow.


 



All the GitHub Actions source code used in this post can be found in this repository.



 


Azure Functions Inbound IP Address


 


If you use an Azure Functions instance under the Consumption Plan, its inbound IP address is not static. In other words, the inbound IP address of an Azure Functions instance can change at any time, without prior notice. Due to the serverless nature, we don’t usually need to worry about the IP address change. If you see the instance details, it has more than one assignable inbound IP address.


 



 


Therefore, if you map a custom APEX domain to your Azure Functions instance, the APEX domain has to be mapped to an A record in your DNS. And whenever the inbound IP address changes, your DNS must update the A record as well.


 


A Record Update on Azure DNS


 


If you use Azure PowerShell, you can get the inbound IP address of your Azure Functions app instance.


 


$AppResourceGroupName = "[RESOURCE_GROUP_NAME_FOR_FUNCTION_APP]"
$AppName = "[NAME_OF_FUNCTION_APP]"

$app = Get-AzResource -ResourceType Microsoft.Web/sites `
    -ResourceGroupName $AppResourceGroupName -ResourceName $AppName
$newIp4Address = $app.Properties.inboundIpAddress


 


Then, check your DNS and find the A record. Let’s assume that you use the Azure DNS service as your DNS. As there can be multiple A records registered, we’ll take only the first one for now.


 


$ZoneResourceGroupName = "[RESOURCE_GROUP_NAME_FOR_AZURE_DNS]"
$ZoneName = "[NAME_OF_AZURE_DNS_ZONE]"

$rs = Get-AzDnsRecordSet `
    -ResourceGroupName $ZoneResourceGroupName `
    -ZoneName $ZoneName `
    -Name "@" `
    -RecordType A
$oldIp4Address = $rs.Records[0].Ipv4Address


 


Let’s compare the two. If the inbound IP and the A record are different, update the A record value in Azure DNS.


 


if ($oldIp4Address -ne $newIp4Address) {
    $rs.Records[0].Ipv4Address = $newIp4Address
    $updated = Set-AzDnsRecordSet -RecordSet $rs
}

 


We’ve got the A record update done.


 


SSL Certificate Update on Azure Key Vault


 


If the A record has been updated, the existing SSL certificate is no longer valid, so you should also update the SSL certificate. In my previous post, I used the SSL certificate update tool, which provides an HTTP API endpoint to renew the certificate. Now you can send the HTTP API request to that endpoint through PowerShell.


 


$ApiEndpoint = "[ACMEBOT_HTTP_API_ENDPOINT]"
$HostNames = "[COMMA_DELIMITED_HOST_NAMES]"

$dnsNames = $HostNames -split ","
$body = @{ DnsNames = $dnsNames }

$issued = Invoke-RestMethod `
    -Method Post `
    -Uri $ApiEndpoint `
    -ContentType "application/json" `
    -Body ($body | ConvertTo-Json)


 


Now you have the renewed SSL certificate, reflecting the updated A record.


 



 



 


SSL Certificate Sync on Azure Functions


 


You got the SSL certificate renewed, but your Azure Functions instance hasn’t picked up the renewed certificate yet.


 



 


According to the doc, the renewed SSL certificate will be automatically synced within 48 hours. If you think that’s too long, use the following PowerShell script to sync the renewed certificate manually. First of all, get the access token from the login context. If you use a Service Principal, you can get the access token by filtering on the client ID of your Service Principal.


 


$tokenCachedItems = (Get-AzContext).TokenCache.ReadItems()
$tokenCachedItem = $tokenCachedItems | Where-Object { $_.ClientId -eq $clientId }
$accessToken = ConvertTo-SecureString -String $tokenCachedItem.AccessToken -AsPlainText -Force

 


Next, get the UNIX timestamp value in milliseconds.


 


$epoch = ([DateTimeOffset](Get-Date)).ToUnixTimeMilliseconds()
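For reference, the same millisecond timestamp (used below as a cache-busting query-string value) can be sketched in Python:

```python
import time

def epoch_ms() -> int:
    # UNIX timestamp in milliseconds, like ToUnixTimeMilliseconds() above.
    return int(time.time() * 1000)

stamp = epoch_ms()
```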

 


Then, compose the HTTP API endpoint for the certificate. As you’ve already logged in with your Service Principal, you already know the $subscriptionId value.


 


$CertificateResourceGroupName = "[RESOURCE_GROUP_NAME_FOR_CERTIFICATE]"
$CertificateName = "[NAME_OF_CERTIFICATE]"

$endpoint = "https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Web/certificates/{2}" `
    -f $subscriptionId, $CertificateResourceGroupName, $CertificateName


 


Call the endpoint to get the existing certificate details via the GET method.


 


$ApiVersion = "2018-11-01"

$cert = Invoke-RestMethod -Method GET `
    -Uri ("{0}?api-version={1}&_={2}" -f $endpoint, $ApiVersion, $epoch) `
    -ContentType "application/json" `
    -Authentication Bearer `
    -Token $accessToken


 


Call the same endpoint with the existing certificate details through the PUT method. Then, the renewed certificate is synced.


 


$result = Invoke-RestMethod -Method PUT `
    -Uri ("{0}?api-version={1}" -f $endpoint, $ApiVersion) `
    -ContentType "application/json" `
    -Authentication Bearer `
    -Token $accessToken `
    -Body ($cert | ConvertTo-Json)

 


The $result object contains the result of the sync process. The $result.properties.thumbprint value and the $cert.properties.thumbprint value MUST be different; otherwise, it’s not synced yet. Once the sync process is over, you can find the renewed thumbprint value in the Azure Portal.


 



 


GitHub Actions Workflow for Automation


 


Now we have three jobs for the SSL certificate update. Let’s build each job as a GitHub Action. By the way, why do I need GitHub Actions for this automation?


 



  • GitHub Actions is not exactly the same as other serverless services, but it has the same serverless nature: triggered by events, with no need to set up infrastructure.

  • Unlike other serverless services, GitHub Actions doesn’t need any infrastructure or instance setup or configuration because we only need a repository to run the GitHub Actions workflow.

  • GitHub Actions is free of charge, as long as your repository is public (or open-source).


 


As all GitHub Actions are running Azure PowerShell scripts, we can simply define the common Dockerfile.


# Azure PowerShell base image
FROM mcr.microsoft.com/azure-powershell:latest

ADD entrypoint.ps1 /entrypoint.ps1
RUN chmod +x /entrypoint.ps1

ENTRYPOINT ["pwsh", "-File", "/entrypoint.ps1"]


 


The entrypoint.ps1 file of each Action makes use of the logic stated above.


 


A Record Update


 


This is the Action that updates the A record on Azure DNS. It returns an output value indicating whether the A record has been updated. With this output value, we can decide whether to take further steps (lines #27 and #38).


 


    Param(
    [string] [Parameter(Mandatory=$true)] $AppResourceGroupName,
    [string] [Parameter(Mandatory=$true)] $AppName,
    [string] [Parameter(Mandatory=$true)] $ZoneResourceGroupName,
    [string] [Parameter(Mandatory=$true)] $ZoneName
)

$clientId = ($env:AZURE_CREDENTIALS | ConvertFrom-Json).clientId
$clientSecret = ($env:AZURE_CREDENTIALS | ConvertFrom-Json).clientSecret | ConvertTo-SecureString -AsPlainText -Force
$tenantId = ($env:AZURE_CREDENTIALS | ConvertFrom-Json).tenantId

$credentials = New-Object System.Management.Automation.PSCredential($clientId, $clientSecret)

$connected = Connect-AzAccount -ServicePrincipal -Credential $credentials -Tenant $tenantId

# Add/Update A Record
$app = Get-AzResource -ResourceType Microsoft.Web/sites -ResourceGroupName $AppResourceGroupName -ResourceName $AppName
$newIp4Address = $app.Properties.inboundIpAddress

$rs = Get-AzDnsRecordSet -ResourceGroupName $ZoneResourceGroupName -ZoneName $ZoneName -Name "@" -RecordType A
$oldIp4Address = $rs.Records[0].Ipv4Address

if ($oldIp4Address -eq $newIp4Address) {
    Write-Output "No need to update A record"

    # Set Output
    Write-Output "::set-output name=updated::$false"

    return
}

$rs.Records[0].Ipv4Address = $newIp4Address
$updated = Set-AzDnsRecordSet -RecordSet $rs

Write-Output "A record has been updated"

# Set Output
Write-Output "::set-output name=updated::$true"

return
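The `::set-output` lines above are plain workflow commands: the runner parses any line on standard output of the form `::set-output name=<name>::<value>` and exposes it as a step output. A minimal standalone sketch of the mechanic:

```shell
#!/bin/sh
# Emit a GitHub Actions output variable named "updated";
# the runner turns this into steps.<step-id>.outputs.updated
updated=true
echo "::set-output name=updated::${updated}"
```

Any process that writes to standard output can set outputs this way, which is why the same pattern works from PowerShell via Write-Output.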


 


SSL Certificate Update


 


This is the Action that updates the SSL certificate on Azure Key Vault. It also returns an output value indicating whether the update is successful or not (line #14).


 


    Param(
    [string] [Parameter(Mandatory=$true)] $ApiEndpoint,
    [string] [Parameter(Mandatory=$true)] $HostNames
)

# Issue new SSL certificate
$dnsNames = $HostNames -split ","
$body = @{ DnsNames = $dnsNames }
$issued = Invoke-RestMethod -Method Post -Uri $ApiEndpoint -ContentType "application/json" -Body ($body | ConvertTo-Json)

Write-Output "New SSL certificate has been issued to $HostNames"

# Set Output
Write-Output "::set-output name=updated::$true"


 


SSL Certificate Sync


 


This is the Action that syncs the certificate on Azure Functions. It also returns an output indicating whether the sync is successful or not (line #44).


 


    Param(
    [string] [Parameter(Mandatory=$true)] $CertificateResourceGroupName,
    [string] [Parameter(Mandatory=$true)] $CertificateName,
    [string] [Parameter(Mandatory=$false)] $ApiVersion = "2018-11-01"
)

$clientId = ($env:AZURE_CREDENTIALS | ConvertFrom-Json).clientId
$clientSecret = ($env:AZURE_CREDENTIALS | ConvertFrom-Json).clientSecret | ConvertTo-SecureString -AsPlainText -Force
$tenantId = ($env:AZURE_CREDENTIALS | ConvertFrom-Json).tenantId
$subscriptionId = ($env:AZURE_CREDENTIALS | ConvertFrom-Json).subscriptionId

$credentials = New-Object System.Management.Automation.PSCredential($clientId, $clientSecret)

$connected = Connect-AzAccount -ServicePrincipal -Credential $credentials -Tenant $tenantId

# Get Access Token
$tokenCachedItems = (Get-AzContext).TokenCache.ReadItems()
$tokenCachedItem = $tokenCachedItems | Where-Object { $_.ClientId -eq $clientId }
$accessToken = ConvertTo-SecureString -String $tokenCachedItem.AccessToken -AsPlainText -Force

# Get Existing Certificate Details
$epoch = ([DateTimeOffset](Get-Date)).ToUnixTimeMilliseconds()
$endpoint = "https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Web/certificates/{2}" -f $subscriptionId, $CertificateResourceGroupName, $CertificateName

$cert = Invoke-RestMethod -Method GET `
    -Uri ("{0}?api-version={1}&_={2}" -f $endpoint, $ApiVersion, $epoch) `
    -ContentType "application/json" `
    -Authentication Bearer `
    -Token $accessToken

$certJson = $cert | ConvertTo-Json

# Sync Certificate
$result = Invoke-RestMethod -Method PUT `
    -Uri ("{0}?api-version={1}" -f $endpoint, $ApiVersion) `
    -ContentType "application/json" `
    -Authentication Bearer `
    -Token $accessToken `
    -Body $certJson

$updated = $cert.properties.thumbprint -ne $result.properties.thumbprint

# Set Output
Write-Output "::set-output name=updated::$updated"


 


GitHub Actions Workflow


 


Ideally, the GitHub Actions workflow would be event-driven: when the inbound IP address of the Azure Functions instance changes, an event would trigger the workflow. Unfortunately, at the time of this writing, Azure App Service raises no such event. Therefore, you should use the scheduler instead. With the timer trigger of GitHub Actions, you can regularly check for inbound IP address changes.


 



  • As the scheduler is the main event trigger, set up the CRON schedule (lines #4-5). In the sample code, the scheduler runs every 30 minutes.

  • As I use all the Actions privately, not publicly, check out the code first whenever the scheduler fires (lines #14-15).

  • Update the A record of Azure DNS.

  • Depending on the result of the A record update (line #29), it updates the SSL certificate.

  • Depending on the result of the SSL certificate renewal (line #37), it syncs the SSL certificate with the Azure Functions instance.

  • Depending on the result of the SSL certificate sync (line #47), it sends a notification email to administrators.


 


name: Update DNS & SSL Certificate

on:
  schedule:
    - cron: '0/30 * * * *'

jobs:
  update_dns_and_ssl_certificate:
    name: 'PROD: Update DNS and SSL Certificate'

    runs-on: ubuntu-latest

    steps:
    - name: Checkout the repo
      uses: actions/checkout@v2

    - name: Update A record
      id: arecord
      uses: ./actions/dns-update
      env:
        AZURE_CREDENTIALS: ${{ secrets.AZURE_CREDENTIALS }}
      with:
        appServiceResourceGroup: ${{ secrets.RESOURCE_GROUP_NAME_APP }}
        appName: ${{ secrets.RESOURCE_NAME_FUNCTIONAPP }}
        dnsZoneResourceGroup: ${{ secrets.RESOURCE_GROUP_NAME_ZONE }}
        dnsZoneName: ${{ secrets.RESOURCE_NAME_ZONE }}

    - name: Update SSL Certificate
      if: steps.arecord.outputs.updated == 'true'
      id: certificate
      uses: ./actions/ssl-update
      with:
        apiEndpoint: ${{ secrets.SSL_RENEW_ENDPOINT }}
        hostNames: ${{ secrets.SSL_HOST_NAMES }}

    - name: Sync SSL Certificate
      if: steps.certificate.outputs.updated == 'true'
      id: sync
      uses: ./actions/ssl-update
      env:
        AZURE_CREDENTIALS: ${{ secrets.AZURE_CREDENTIALS }}
      with:
        certificateResourceGroup: ${{ secrets.RESOURCE_GROUP_NAME_CERTIFICATE }}
        certificateName: ${{ secrets.RESOURCE_NAME_CERTIFICATE }}

    - name: Send Email Notification
      if: steps.sync.outputs.updated == 'true'
      uses: dawidd6/action-send-mail@v2
      with:
        server_address: ${{ secrets.MAIL_SMTP_SERVER }}
        server_port: ${{ secrets.MAIL_SMTP_PORT }}
        username: ${{ secrets.MAIL_SMTP_USERNAME }}
        password: ${{ secrets.MAIL_SMTP_PASSWORD }}
        subject: '[${{ secrets.SSL_HOST_NAMES }}] SSL Certificate Updated'
        body: 'SSL certificate for ${{ secrets.SSL_HOST_NAMES }} has been updated'
        to: ${{ secrets.MAIL_RECIPIENTS }}
        from: ${{ secrets.MAIL_SENDER }}


 


After the workflow runs, we can see a result like the one below:


 



 


This is the notification email.


 



 


If the A record is up-to-date, the workflow stops there and takes no further steps.


 



 




 


So far, we have used a GitHub Actions workflow to regularly check the inbound IP address of the Azure Functions instance, update Azure DNS when it changes, renew the SSL certificate, and sync the certificate with the Azure Functions instance. In the next post, I’ll discuss how to deploy the Azure Functions app through GitHub Actions without needing to know the publish profile.


 


This article was originally published on Dev Kimchi.

Experiencing Latency and Data Loss issue in Azure Portal for Many Data Types – 11/01 – Investigating

This article is contributed. See the original author and article here.

Update: Sunday, 01 November 2020 22:10 UTC

We continue to investigate issues within Application Insights. The root cause is not fully understood at this time. Some customers continue to experience data loss and data latency for resources hosted in the Central US region. We are working to establish the start time of the issue; initial findings indicate that the problem began at 11/01 ~8:35PM UTC. We currently have no estimate for resolution.
  • Work Around: NA
  • Next Update: Before 11/02 00:30 UTC
-Arish B

Azure Stack Hub Partner Solutions Series – telkomtelstra

This article is contributed. See the original author and article here.

Today, I want to introduce you to Azure Stack Hub partner telkomtelstra. telkomtelstra is a service provider working with enterprise and government customers across Indonesia. They are a trusted advisor for their customers and support them across a range of services. Join us in this episode as we explore their journey with Azure Stack Hub and learn more about the wide range of services they provide for their customers.


 


 


Together with Tiberiu Radu (Azure Stack Hub PM, @rctibi), we created a new Azure Stack Hub partner solution video series to show how our customers and partners use Azure Stack Hub in their hybrid cloud environments. In this series, we will meet customers that deploy Azure Stack Hub for their own internal departments, partners that run managed services on behalf of their customers, and a wide range in between, as we look at how our various partners use Azure Stack Hub to bring the power of the cloud on-premises.


 


 


 


 


Links mentioned in the video:



 


I hope this video was helpful and you enjoyed watching it. If you have any questions, feel free to leave a comment below. If you want to learn more about the Microsoft Azure Stack portfolio, check out my blog post.

Creating index becomes extremely slow when all rows are in one partition

Creating index becomes extremely slow when all rows are in one partition

This article is contributed. See the original author and article here.

A customer reported that creating indexes sometimes becomes very slow in SQL Server 2017. We analyzed the issue and found the following symptoms:



  1. The issue occurs when creating an index on a partitioned table in which all rows reside in a single partition.

  2. The issue occurs when the database compatibility level is 140. Changing the compatibility level to 100 makes the issue disappear.


 


These symptoms suggested a cardinality estimation (CE) issue, so we needed to check the execution plan. However, we cannot get the execution plan for a ‘create index’ statement in SSMS directly. Instead, we used the following method to capture the ongoing actual execution plan.


 


1) Choose ‘Include Actual Execution Plan’ and note the session ID (56 in this example).


[Screenshot: ‘Include Actual Execution Plan’ enabled in SSMS, session ID 56]


 


 


2) In another session, run this query every minute to capture the ongoing actual execution plan:


 


SELECT * FROM sys.dm_exec_query_statistics_xml(56);
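Rather than re-running the query by hand, the once-a-minute check can be sketched as a simple polling loop (session ID 56 is from the example above; adjust it to the session running the CREATE INDEX, and cancel the loop once the index build finishes):

```sql
-- Poll the in-flight actual execution plan for session 56 once a minute.
-- Stop the loop manually when the CREATE INDEX statement completes.
WHILE 1 = 1
BEGIN
    SELECT query_plan
    FROM sys.dm_exec_query_statistics_xml(56);

    WAITFOR DELAY '00:01:00';  -- one-minute interval between snapshots
END
```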


 


 


New CE (under compatibility level 140)


=====================================


This table has 100 partitions, but all rows are in one partition; in total, the table contains 216,213,923 rows.


 


[Screenshot: partition row counts, with all 216,213,923 rows in a single partition]


 


 


Then we captured the ongoing actual execution plan and found that the ‘Actual Number of Rows’ was greater than the total number of rows in the entire table.


 


[Screenshot: actual execution plan with ‘Actual Number of Rows’ exceeding the table’s total row count]


 


We captured an Extended Events (XEvent) trace as well. It showed SQL Server sorting the 216,213,923 rows again and again; we suspected the new CE caused the sort to run over the entire table 100 times, once per partition.


 


[Screenshot: XEvent trace showing repeated sorts of 216,213,923 rows]


 


We examined the source code and found that the new CE uses a new function, CSelCalcHistogramComparison, to calculate partition selectivity. Because all rows are in one partition in our case, the selectivity was calculated as 1, so the partition ID predicate was not pushed down to the index scan. As a result, the operation executed a full table scan and sort 100 times.


 


Microsoft has identified and fixed this issue. Enabling trace flag 4199 (-T4199) applies the fix.
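Besides adding -T4199 as a startup parameter, the trace flag can be turned on globally at runtime. A sketch of the standard commands:

```sql
-- Enable trace flag 4199 (query optimizer hotfixes) for all sessions
DBCC TRACEON (4199, -1);

-- Confirm the trace flag is active
DBCC TRACESTATUS (4199);
```

Note that a trace flag enabled with DBCC TRACEON does not survive a server restart; use the -T4199 startup parameter to make it persistent.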


 

Release: SQL Server Migration Assistant (SSMA) v8.15

This article is contributed. See the original author and article here.

Overview


SQL Server Migration Assistant (SSMA) for Access, DB2, MySQL, Oracle, and SAP ASE (formerly SAP Sybase ASE) allows users to convert a database schema to a Microsoft SQL Server schema, deploy the schema, and then migrate data to the target SQL Server (see below for supported versions).


 


 


What’s new?


The latest release of SSMA enhances each “branch” of the tool with improved naming for statements loaded from files, revamped assessment reports compatible with modern browsers, and an updated Azure AD authentication mechanism that uses the authority provided by the Azure SQL database.


 


In addition, this release includes the following:



  • SSMA for Access now ignores auto-created indexes for foreign keys

  • SSMA for DB2 has been enhanced with:


    • A fix for a bug related to conversion of MIN/MAX aggregate functions with date/time arguments

    • A fix for a bug in the VARCHAR_FORMAT emulation function when the DD placeholder is used

    • Improved type mapping for the TIME data type

    • Improved conversion of the ROUND and TRUNC functions with numeric arguments


  • SSMA for SAP ASE now allows you to hide system tables and views (exclude them from conversion).

  • SSMA for Oracle now adds a setting to use full type specification for %type and %rowtype attributes


 


Downloads



 


Supported sources and target versions


Source: For the list of supported sources, please review the information on the Download Center for each of the above SQL Server Migration Assistant downloads.


Target: SQL Server 2012, SQL Server 2014, SQL Server 2016, SQL Server 2017, SQL Server 2019, Azure SQL Database, an Azure SQL Database managed instance, and Azure SQL Data Warehouse (Azure Synapse Analytics)*.


*Azure SQL Data Warehouse (Azure Synapse Analytics) is supported as a target only when using SSMA for Oracle.


 


Resources


SQL Server Migration Assistant documentation