How to set up a new password for the cluster certificate to connect to a Service Fabric cluster in the VSTS pipeline
This article helps you set up a new password for the cluster certificate, which you can use in the release pipeline to deploy your application to a Service Fabric cluster.
Scenario: Adding the base-64 encoding of a client certificate file that is NOT password protected when setting up the “New Service Fabric Connection” in the release pipeline will lead to a deployment failure.
Below is a sample of the error:
"2020-10-15T20:58:45.3232533Z ##[debug]System.Management.Automation.RuntimeException: An error occurred attempting to import the certificate. Ensure that your service endpoint is configured properly with a correct certificate value and, if the certificate is password-protected, a valid password. Error message: Exception calling 'Import' with '3' argument(s): 'The specified network password is not correct.'"
Steps to set a new password for the cluster certificate:
Download the relevant cluster certificate from Key Vault to the local machine.
Install the certificate into the local machine store, marking the key as exportable.
To set a new password, use the following PowerShell script:
# Retrieve the certificate object from the certificate store
$SelfSignedCert = Get-ChildItem Cert:\LocalMachine\My -DnsName "<clustername>.<clusterregion>.cloudapp.azure.com"
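Next, export the retrieved certificate as a PFX file protected with the new password. This is a minimal sketch; the password, file path, and variable names are illustrative:

# Set a new password and export the certificate as a password-protected PFX
# (password and path are placeholders)
$NewPassword = ConvertTo-SecureString -String "<new-password>" -AsPlainText -Force
Export-PfxCertificate -Cert $SelfSignedCert -FilePath "C:\temp\cluster-cert.pfx" -Password $NewPassword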
Note: Now that the client/cluster certificate is password protected, you can convert it into its base-64 encoding (step 4) for use in the release pipeline.
Convert the certificate into its base-64 encoded representation using PowerShell.
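A minimal sketch of this conversion, assuming the PFX exported above (file paths are placeholders):

# Read the password-protected PFX and convert it to a base-64 string
$bytes = [System.IO.File]::ReadAllBytes("C:\temp\cluster-cert.pfx")
[System.Convert]::ToBase64String($bytes) | Set-Content -Path "C:\temp\cluster-cert-base64.txt"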
Background/Scenario
Azure Alerts can be used to proactively notify you when important conditions are found in your monitoring data. After setting up either metric alerts or log alerts for your workloads, specifically IaaS workloads, there may be times when you need to disable those alerts during a maintenance window.
Depending on the size of your environment and the number of alerts you’ve created, it might be quite a chore to go through each one to disable/enable.
The following will demonstrate how to set up an Azure Automation Runbook to quickly set the status of your IaaS alerts to either Enabled or Disabled via a webhook. The webhook will allow us to execute the Azure Automation Runbook from anywhere, like an on-premises workstation, to set the alert status. The runbook will also take advantage of Azure Resource Graph as a mechanism to search for alerts across all of the available subscriptions.
Step 2b: Create an Automation Account – Manual Method
Grant the AAA run as account, at a minimum, the ability to manage Alerts. By default, the AAA run as account is granted contributor rights at the subscription it’s deployed into. In production, granting access to the AAA run as account at a Management Group is recommended.
Import PowerShell Gallery modules (Az.Accounts, Az.Monitor, Az.ResourceGraph) into the AAA
Under Shared Resources, select Modules.
Select Browse gallery, and then search the gallery for each module (a scripted alternative is sketched below).
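If you prefer to script the import rather than use the portal, a sketch like the following should work (the resource group and account names are placeholders):

# Import the required modules from the PowerShell Gallery into the Automation Account.
# Az.Accounts must finish importing first because the other modules depend on it.
foreach ($module in @("Az.Accounts", "Az.Monitor", "Az.ResourceGraph")) {
    New-AzAutomationModule -ResourceGroupName "<resource-group>" `
        -AutomationAccountName "<automation-account>" `
        -Name $module `
        -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/$module"
}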
From the Runbooks page in the Azure portal, click the runbook that the webhook starts to view the runbook details. Ensure that the runbook Status field is set to Published.
Click Webhook at the top of the page to open the Add Webhook page.
Click Create new webhook to open the Create Webhook page.
Fill in the Name and Expiration Date fields for the webhook and specify if it should be enabled. See Webhook properties for more information about these properties.
Click the copy icon or press Ctrl+C to copy the URL of the webhook. Then record it in a safe place.
Please save your webhook URL. Once you create the webhook, you cannot retrieve the URL again.
Click Parameters, leave it blank, and press OK.
Click Create to create the webhook.
Step 4: Test your Automation Account Runbook via webhook
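As a sketch, calling the webhook from a workstation could look like this. The URL is the one you recorded earlier; the payload shape is hypothetical and depends on the parameters your runbook expects:

$webhookUrl = "https://<region>.webhook.azure-automation.net/webhooks?token=<token>"
# Hypothetical payload; adjust to match your runbook's parameters
$body = @{ AlertStatus = "Disabled" } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $webhookUrl -Body $body -ContentType "application/json"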
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
If Azure Event Grid is the only system that consumes and posts cloud events in your environment, the Azure Event Grid SDK should be chosen. However, if several systems that consume and post cloud events already exist in your environment and you plan to introduce Azure Event Grid, you may look for ways to interact with Azure Event Grid using industry-standard APIs. In this article, I describe how to interact with Azure Event Grid using the CloudEvents APIs.
Prerequisites and basic information
What is CloudEvents?
If you are not familiar with CloudEvents, please check the following URL.
As of now, Azure Event Grid supports Structured Content mode only (Binary Content mode is not supported). We have to follow the JSON Event Format specification when creating events.
CloudEvents SDKs are provided in several languages. In this article, the sample applications are created with the Java APIs for CloudEvents. The JSON EventFormat implementation with Jackson and the HTTP Protocol Binding APIs for Jakarta RESTful Web Services allow us to create applications more easily than using the core APIs.
According to this document, we can post events to an Azure Event Grid topic with the following URL (an access key or Shared Access Signature is required). We can get the access key in the Azure portal or via the Azure CLI.
https://{topic-endpoint}?api-version=2018-01-01
A Shared Access Signature is similar to an access key, but it can be configured with an expiration time. It might be more suitable if access restriction to a topic or domain is required. The following URL describes how to create a Shared Access Signature.
In this part, a REST client application that uses the CloudEvents APIs is created in order to post events to an Azure Event Grid topic. The Azure Event Grid Viewer application verifies and shows these events. This viewer application is described in the following URL.
The CloudEventBuilder.v1() method allows us to create events. JSON is used as the format of the custom data, and we use the withDataContentType() method to specify application/json as the Content-Type.
// Build the custom JSON payload
JsonObject jsonObject = Json.createObjectBuilder()
        .add("message", "Using CloudEvents.io API to send CloudEvents!!")
        .build();

// Build a CloudEvent that follows the v1.0 specification
CloudEvent ce = CloudEventBuilder.v1()
        .withId("A234-1234-1234")
        .withType("io.logico-jp.ExampleEventType")
        .withSource(URI.create("io/logico-jp/source"))
        .withTime(OffsetDateTime.now(ZoneId.ofOffset("UTC", ZoneOffset.UTC)))
        .withDataContentType(MediaType.APPLICATION_JSON)
        .withData(jsonObject.toString().getBytes(StandardCharsets.UTF_8))
        .build();
Serialization
The created events must be serialized into JSON format. To do so, we use the “JSON EventFormat implementation with Jackson” APIs. With this, the steps for creating a client application are complete.
EventFormat format = EventFormatProvider
        .getInstance()
        .resolveFormat(JsonFormat.CONTENT_TYPE);
byte[] serialized = format.serialize(ce);
Create REST Client
We can follow the typical way of creating a REST client; no special configuration is required. The Event Grid access key should be set in an HTTP header (aeg-sas-key). Note that application/cloudevents+json, not application/json, should be set as the Content-Type.
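As a quick illustration of these headers, here is an equivalent raw request sketched in PowerShell (the endpoint format and access key are placeholders):

$topicEndpoint = "https://<topic-name>.<region>-1.eventgrid.azure.net/api/events?api-version=2018-01-01"
$headers = @{ "aeg-sas-key" = "<access-key>" }
$event = @'
{
  "specversion": "1.0",
  "id": "A234-1234-1234",
  "type": "io.logico-jp.ExampleEventType",
  "source": "io/logico-jp/source",
  "datacontenttype": "application/json",
  "data": { "message": "Using CloudEvents.io API to send CloudEvents!!" }
}
'@
# Content-Type must be application/cloudevents+json, not application/json
Invoke-RestMethod -Method Post -Uri $topicEndpoint -Headers $headers -ContentType "application/cloudevents+json" -Body $event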
In this part, a JAX-RS application is created to subscribe to the Event Grid topic. As Azure Event Grid sends events using a webhook, the JAX-RS application requires a POST endpoint to listen for events.
The Event Grid topic we use was already configured in the previous section (more precisely, the Event Grid topic should already have been configured).
Dependencies
In this case, Helidon MP is chosen to create the JAX-RS application. Needless to say, you can freely choose any development framework.
We can create a JAX-RS application without special configuration. As Azure Event Grid supports Structured Content mode only, the event format is JSON, so the sample application waits for events using JsonObject. The EventFormat::deserialize() method is used to deserialize the events.
@Path("/updates")
@POST
public Response receiveEvent(Optional<JsonObject> obj) {
if(obj.isEmpty()) return Response.noContent().status(Response.Status.OK).build();
EventFormat format = EventFormatProvider
.getInstance()
.resolveFormat(JsonFormat.CONTENT_TYPE);
CloudEvent ce = format.deserialize(obj.get().toString().getBytes(StandardCharsets.UTF_8));
JsonObject customData = JsonUtil.toJson(new String(ce.getData())).asJsonObject();
// output to console
System.out.println("Received JSON String -- " + obj.get().toString());
System.out.println("Converted to CloudEvent -- " + ce.toString());
System.out.println("Data in CloudEvent -- " + customData.toString());
return Response.noContent().status(Response.Status.ACCEPTED).build();
}
Configure OPTIONS method for enabling webhook
When configuring Azure Event Grid integration through a webhook, the subscriber (i.e., this JAX-RS application) has to respond to Azure Event Grid's validation request, which uses the OPTIONS method: Event Grid performs the CloudEvents abuse-protection handshake by sending a WebHook-Request-Origin header and expecting a WebHook-Allowed-Origin header in the response.
In the Azure portal, we can observe that each event was successfully delivered to each subscription.
Azure Event Grid Viewer also shows delivered events.
And from the JAX-RS application side, we can observe each delivered event in the App Service console log. Three log lines appear for each event.
Conclusion
The CloudEvents APIs allow us to post structured events to Azure Event Grid and to handle structured events delivered from Azure Event Grid. CloudEvents SDKs are available in various languages; in Java especially, if you are familiar with JAX-RS and Jackson, you can easily create applications with these APIs.
If Azure Event Grid were the only system that consumes and posts cloud events in your environment, the Azure Event Grid SDK would be the best choice. However, if Azure Event Grid is only one of several services that consume and post cloud events, industry-standard APIs are often more suitable than the Azure Event Grid SDK.
We are pleased to announce the release of the Project and Roadmap apps in Microsoft Teams. Connecting directly to Project from within Teams has been one of the major requests from Project users, and these apps will make it easy to manage, track, and collaborate on all aspects of a team’s project in one place. This brings content and conversation side-by-side in one integrated experience.
Team members can create new projects or roadmaps, or open existing ones, in Microsoft Teams and keep communications within the context of work and collaboration within Office 365. The Project and Roadmap apps can be added as tabs in any channel by selecting the “+” icon at the top of a channel. Anyone who has access to that channel can also access that tab.
Microsoft Teams ♥ Microsoft Project
Today, each one of us has become a project manager. To stay on top of the ever-shifting requirements of our jobs, we need tools that are simple yet robust enough to support any requirement, flexible enough to support any project type, and, most importantly, easy enough to collaborate with anyone no matter where they are or what device they are using.
The Project app in Teams helps you tackle anything from small projects to large initiatives and is designed for just about any role, skill level, or project type. You can access the features and capabilities of the Project for the web experience, such as the automated scheduling engine to set effort, duration, and resources, from inside Teams.
Microsoft Teams ♥ Roadmap
If your group runs multiple projects at the same time and needs visibility across all the work being done, Roadmap provides a visual and interactive way to connect these projects and show their status in a transparent way across the organization.
The Roadmap – Microsoft Project app will give you a cross-functional, big picture view of the work that is most important to you. You can create a consolidated timeline view of projects from Microsoft Project and Azure Boards and plan larger initiatives across all of them – complete with key dates and milestones – so that all the work is visible.
Note: All Office 365 users will be able to view Projects/Roadmaps shared within Teams in a read-only mode. Users with the appropriate Project for the Web licenses will be able to create and edit Projects/Roadmaps from within Teams as well. Learn more about Project for the Web licenses here.
If you want to learn more, see Use Project or Roadmap in Microsoft Teams. Next, notifications in Teams will be added so that users can see what’s important to them within Project and Roadmap in their team’s activity feed.
We love hearing from you. Please tell us how we can improve your Project experience in Teams through our UserVoice site. You can also leave a comment below to engage with us directly to provide feedback.
Keep checking our Tech Community site for the latest feature releases and Project news.
ADF has added the ability to cache your data streams to a sink that writes to a cache instead of a data store, allowing you to implement what ETL tools typically refer to as Cached Lookups or Unconnected Lookups.
The ADF Data Flow Lookup Transformation performs a left outer join with a series of options to handle multiple matches and tags rows as lookup found / no lookup found. What the cached lookup enables is a mechanism to store those lookup streams in caches and access them from your expressions.
Many powerful use cases are enabled with this new ADF feature where you can now lookup reference data that is stored in cache and referenced via key lookups with different values, multiple times, without the need to specify separate Lookup transformation calls. Now you can simply use a lookup() function to grab additional specific columns as in: lookup().myColumn1.
Additionally, you can use the new function outputs() to grab an entire matrix of rows and columns from cache and iterate through an array of rows, picking your specific columns to reference.
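As a sketch, assuming a cache sink named CachedDims keyed on a ProductKey column (both names are hypothetical), the documented sinkName#function() form of these expressions looks like this:

CachedDims#lookup(ProductKey).myColumn1
CachedDims#outputs()[1].myColumn1

The first expression returns column myColumn1 from the cached row matching ProductKey; the second picks the same column from the first row of the entire cached output.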
Throughout this series, I’m going to show how an Azure Functions instance can map APEX domains, add an SSL certificate and update its public inbound IP address to DNS.
GitHub Actions, DNS & SSL Certificate on Azure Functions
Deploying Azure Functions via GitHub Actions without Publish Profile
In my previous post, we walked through how to link an SSL certificate issued by Let’s Encrypt with a custom APEX domain. Throughout this post, I’m going to discuss how to automatically update the A record of a DNS server when the inbound IP address of the Azure Functions instance changes, and how to update the SSL certificate through the GitHub Actions workflow.
All the GitHub Actions source code used in this post can be found in this repository.
Azure Functions Inbound IP Address
If you use an Azure Functions instance under the Consumption Plan, its inbound IP address is not static. In other words, the inbound IP address of an Azure Functions instance can change at any time, without prior notice. Usually, due to the serverless nature, we don’t need to worry about the IP address changing. If you look at the instance details, it has more than one assignable inbound IP address.
Therefore, if you map a custom APEX domain to your Azure Functions instance, your APEX domain has to be mapped to an A record in your DNS. And whenever the inbound IP address changes, your DNS must update the A record as well.
A Record Update on Azure DNS
If you use Azure PowerShell, you can get the inbound IP address of your Azure Function app instance.
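A sketch with Az PowerShell (resource names are placeholders):

# Inbound IP address currently assigned to the Function app
$functionApp = Get-AzWebApp -ResourceGroupName "<resource-group>" -Name "<function-app>"
$inboundIp = $functionApp.InboundIpAddress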
Then, check your DNS and find the A record. Let’s assume that you use the Azure DNS service as your DNS. As there can be multiple A records registered, you’ll take only the first one for now.
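A sketch of that check and update, assuming the zone and record names below:

# Read the A record at the zone apex and update it if the IP has changed
$recordSet = Get-AzDnsRecordSet -ResourceGroupName "<dns-resource-group>" -ZoneName "example.com" -Name "@" -RecordType A
if ($recordSet.Records[0].Ipv4Address -ne $inboundIp) {
    $recordSet.Records[0].Ipv4Address = $inboundIp
    Set-AzDnsRecordSet -RecordSet $recordSet
}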
If the A record has been updated, the existing SSL certificate is no longer valid. Therefore, you should also update the SSL certificate. In my previous post, I used an SSL certificate update tool, which provides an HTTP API endpoint to renew the certificate. Now, you can send an HTTP request to that endpoint through PowerShell.
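For example (the endpoint and payload below are hypothetical — substitute whatever renewal API your certificate tool exposes):

# Hypothetical renewal endpoint and payload; adjust to your certificate tool
$renewUri = "https://<cert-tool-instance>.azurewebsites.net/api/certificate"
$payload = @{ domains = @("example.com") } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $renewUri -Body $payload -ContentType "application/json"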
Now you have the renewed SSL certificate, reflecting the updated A record.
SSL Certificate Sync on Azure Functions
You got the SSL certificate renewed, but your Azure Function instance hasn’t got the renewed certificate yet.
According to the doc, the renewed SSL certificate will be automatically synced within 48 hours. If that is too long to wait, use the following PowerShell script to sync the renewed certificate manually. First of all, get an access token from the login context. If you use a Service Principal, you can get the access token by filtering on the client ID of your Service Principal.
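A sketch of the token retrieval, assuming a recent Az.Accounts module:

# Acquire an ARM access token for the current (Service Principal) login context
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token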
Then, construct the HTTP API endpoint for the certificate. As you’ve already logged in with your Service Principal, you already know the $subscriptionId value.
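A sketch of the endpoint and sync call follows; the Microsoft.Web/certificates resource path and api-version are assumptions, and the resource names are placeholders:

$endpoint = "https://management.azure.com/subscriptions/$subscriptionId" +
            "/resourceGroups/<resource-group>/providers/Microsoft.Web/certificates/<certificate-name>" +
            "?api-version=2019-08-01"
$headers = @{ Authorization = "Bearer $token" }
# Read the current certificate resource, then re-submit it to trigger the Key Vault sync
$cert = Invoke-RestMethod -Method Get -Uri $endpoint -Headers $headers
$result = Invoke-RestMethod -Method Put -Uri $endpoint -Headers $headers -ContentType "application/json" -Body ($cert | ConvertTo-Json -Depth 10)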
The $result object contains the result of the sync process. The $result.properties.thumbprint value and the $cert.properties.thumbprint value MUST be different; otherwise, it’s not synced yet. Once the sync process is over, you can find the renewed thumbprint value in the Azure portal.
GitHub Actions Workflow for Automation
Now we have three jobs for the SSL certificate update. Let’s build each job as a GitHub Action. By the way, why do I need GitHub Actions for this automation?
GitHub Actions is not exactly serverless, but it has the same nature – triggered by events, with no need to set up infrastructure.
Unlike other serverless services, GitHub Actions doesn’t need any infrastructure or instance setup or configuration because we only need a repository to run the GitHub Actions workflow.
GitHub Actions is free of charge, as long as your repository is public (or open-source).
As all GitHub Actions are running Azure PowerShell scripts, we can simply define the common Dockerfile.
# Azure PowerShell base image
FROM mcr.microsoft.com/azure-powershell:latest
ADD entrypoint.ps1 /entrypoint.ps1
RUN chmod +x /entrypoint.ps1
ENTRYPOINT ["pwsh", "-File", "/entrypoint.ps1"]
The entrypoint.ps1 file of each Action makes use of the logic stated above.
A Record Update
This is the Action that updates A record on Azure DNS. It returns an output value indicating whether the A record has been updated or not. With this output value, we can decide whether to take further steps or not (line #27, 38).
# Set Output
Write-Output "::set-output name=updated::$true"
return
SSL Certificate Update
This is the Action that updates the SSL certificate on Azure Key Vault. It also returns an output value indicating whether the update is successful or not (line #14).
Write-Output "New SSL certificate has been issued to $HostNames"
# Set Output
Write-Output "::set-output name=updated::$true"
SSL Certificate Sync
This is the Action that syncs the certificate on Azure Functions. It also returns an output indicating whether the sync is successful or not (line #44).
# Set Output
Write-Output "::set-output name=updated::$updated"
GitHub Actions Workflow
The ideal way to trigger the GitHub Actions workflow should be the event – when an inbound IP address changes on Azure Function instance, it should raise an event that triggers the GitHub Actions workflow. Unfortunately, at the time of this writing, there is no such event from Azure App Service. Therefore, you should use the scheduler instead. With the timer event of GitHub Actions, you can regularly check the inbound IP address change.
As the scheduler is the main event trigger, set up the CRON schedule (line #4-5). Here in the sample code, I run the scheduler every 30 minutes.
As I use all the actions privately, not publicly, whenever the scheduler is triggered, check out the code first (line #14-15).
Update the A record of Azure DNS.
Depending on the result of the A record update (line #29), it updates the SSL certificate.
Depending on the result of the SSL certificate renewal (line #37), it syncs the SSL certificate with Azure Functions instance.
Depending on the result of the SSL certificate sync (line #47), it sends a notification email to administrators.
name: Update DNS & SSL Certificate

on:
  schedule:
    - cron: '0/30 * * * *'

jobs:
  update_dns_and_ssl_certificate:
    name: 'PROD: Update DNS and SSL Certificate'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v2

          subject: '[${{ secrets.SSL_HOST_NAMES }}] SSL Certificate Updated'
          body: 'SSL certificate for ${{ secrets.SSL_HOST_NAMES }} has been updated'
          to: ${{ secrets.MAIL_RECIPIENTS }}
          from: ${{ secrets.MAIL_SENDER }}
After the workflow runs, we can see the result like below:
This is the notification email.
If the A record is up-to-date, the workflow stops there and doesn’t take any further steps.
So far, we have used a GitHub Actions workflow to regularly check the inbound IP address of the Azure Functions instance, update the change in Azure DNS, renew the SSL certificate, and sync the certificate with the Azure Functions instance. In the next post, I’ll discuss how to deploy the Azure Functions app through GitHub Actions without having to know the publish profile.
This article was originally published on Dev Kimchi.
Update: Sunday, 01 November 2020 22:10 UTC
We continue to investigate issues within Application Insights. Root cause is not fully understood at this time. Some customers continue to experience Data loss and Data Latency for resources hosted in region Central US. We are working to establish the start time for the issue, initial findings indicate that the problem began at 11/01 ~8:35PM UTC. We currently have no estimate for resolution.
Today, I want to introduce you to Azure Stack Hub Partner telkomtelstra. telkomtelstra is a Service Provider working with Enterprise and Government customers across Indonesia. They are a trusted advisor for their customers and support them across a range of services. Join us in this episode as we explore their journey with Azure Stack Hub and learn more about the wide range of services they provide for their customers.
Together with Tiberiu Radu (Azure Stack Hub PM @rctibi), we created a new Azure Stack Hub Partner solution video series to show how our customers and partners use Azure Stack Hub in their hybrid cloud environments. In this series, we will meet customers that are deploying Azure Stack Hub for their own internal departments, partners that run managed services on behalf of their customers, and a wide range of scenarios in between, as we look at how our various partners are using Azure Stack Hub to bring the power of the cloud on-premises.
I hope this video was helpful and you enjoyed watching it. If you have any questions, feel free to leave a comment below. If you want to learn more about the Microsoft Azure Stack portfolio, check out my blog post.
A customer reported that creating indexes sometimes became very slow in SQL Server 2017. We analyzed this issue and found the symptoms below:
This issue happens when creating an index on a partitioned table where all rows are in one partition.
This issue happens when the database compatibility level is 140. When we change the database compatibility level to 100, the issue disappears.
It seemed to be a cardinality estimation (CE) issue, so we needed to check the execution plan. However, we are not able to get the execution plan for a ‘create index’ query in SSMS directly. Alternatively, we found the following method to get an ongoing actual execution plan.
1) Choose ‘Include Actual Execution Plan’ and run the CREATE INDEX statement. Note the session id (56 in this example).
2) In another session, run this query every minute to get the ongoing actual execution plan:
SELECT * FROM sys.dm_exec_query_statistics_xml(56);
New CE — under 140 compatibility level
=====================================
This table has 100 partitions, but all rows are in one partition. We can see this table has 216213923 rows.
Then we got the ongoing actual execution plan. We found that ‘Actual Number of Rows’ was more than the total number of rows in the entire table.
We captured an XEvent trace as well. It seems SQL Server sorted 216213923 rows again and again; we suspect the new CE caused the sort to run 100 times over the entire table.
We checked the source code and found that the new CE uses a new function, CSelCalcHistogramComparison, to calculate partition selectivity. Since all rows are in one partition in our case, the selectivity was calculated as 1. Therefore it failed to push the partition ID predicate down to the index scan, so it executed a full table scan and sort 100 times.
Microsoft has acknowledged this issue and fixed it. Enabling trace flag T4199, which enables query optimizer hotfixes, applies the fix.
Overview
SQL Server Migration Assistant (SSMA) for Access, DB2, MySQL, Oracle, and SAP ASE (formerly SAP Sybase ASE) allows users to convert a database schema to a Microsoft SQL Server schema, deploy the schema, and then migrate data to the target SQL Server (see below for supported versions).
What’s new?
The latest release of SSMA enhances each “branch” of the tool with improved naming for statements loaded from files, revamped assessment reports compatible with modern browsers, and an updated Azure AD authentication mechanism that uses the authority provided by the Azure SQL database.
In addition, this release includes the following:
SSMA for Access now ignores auto-created indexes for foreign keys
SSMA for DB2 has been enhanced with:
A fix for a bug related to conversion of MIN/MAX aggregate functions with date/time arguments
A fix for a bug in VARCHAR_FORMAT emulation function when DD placeholder is used
Improved type mappings for the TIME data type
Improved conversion of ROUND and TRUNC functions with numeric arguments
SSMA for SAP ASE now allows you to hide system tables and views (exclude them from conversion).
SSMA for Oracle now adds a setting to use full type specification for %type and %rowtype attributes
Source: For the list of supported sources, please review the information on the Download Center for each of the above SQL Server Migration Assistant downloads.
Target: SQL Server 2012, SQL Server 2014, SQL Server 2016, SQL Server 2017, SQL Server 2019, Azure SQL Database, an Azure SQL Database managed instance, and Azure SQL Data Warehouse (Azure Synapse Analytics)*.
*Azure SQL Data Warehouse (Azure Synapse Analytics) is supported as a target only when using SSMA for Oracle.