by Contributed | Apr 29, 2022 | Technology
This article is contributed. See the original author and article here.
The purpose of this series of articles is to describe some of the details of how High Availability works and how it is implemented in Azure SQL Managed Instance in both Service Tiers – General Purpose and Business Critical.
In this post, we shall introduce some of the high availability concepts and then dive into the details of the General Purpose service tier.
Introduction to High Availability
The goal of a high-availability solution is to mask the effects of a hardware or software failure and to maintain database availability so that the perceived downtime for users is minimized. In other words, high availability is about putting a set of technologies into place before a failure occurs to prevent the failure from affecting the availability of data.
The two main requirements around high availability are commonly known as RTO and RPO.
RTO (Recovery Time Objective) is the maximum allowable downtime when a failure occurs; in other words, how long it can take for your databases to be back up and running.
RPO (Recovery Point Objective) is the maximum allowable data loss when a failure occurs. The ideal scenario is to lose no data at all, but a more realistic goal is to lose no committed data, also known as Zero Committed Data Loss.
In SQL Managed Instance, the objective of the high availability architecture is to guarantee that your database is up and running 99.99% of the time (financially backed by an SLA), while minimizing the impact of maintenance operations (such as patching and upgrades) and outages (such as underlying hardware, software, or network failures) that might occur.
High Availability in the General Purpose service tier
The General Purpose service tier uses what is called the Standard Availability model. This architecture is based on a separation of compute and storage and relies on the high availability and reliability of the remote storage tier. It is most suitable for budget-oriented business applications that can tolerate some performance degradation during maintenance activities.
The Standard Availability model includes two layers:
A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data, such as the tempdb database on the locally attached SSD, and in-memory structures such as the plan cache, the buffer pool, and the columnstore object pool.
A stateful data layer where the user database data and log files reside in Azure Blob storage. This storage has built-in data availability and redundancy (locally redundant storage, or LRS) and guarantees that every record in the log file and every page in the data file is preserved even if the sqlservr.exe process crashes.

The behavior of this architecture is similar to a SQL Server Failover Cluster Instance (FCI), but without all the complexity we currently have on-premises or in an Azure SQL VM. In that scenario we would first need to create and configure a Windows Server Failover Cluster (WSFC) and then create the SQL Server FCI on top of it. All of this is done behind the scenes for you when you provision an Azure SQL Managed Instance, so you don’t need to worry about it. As the diagram above shows, we have shared storage functionality (again, as in a SQL Server FCI), in this case in Azure Premium Storage, and a stateless node operated by Azure Service Fabric. Service Fabric not only initializes the sqlservr.exe process but also monitors and controls the health of the node and, if necessary, fails over to another node from a pool of spare nodes.
All the technical aspects and fine-tuning of a cluster (quorum, lease, votes, network issues, avoiding split-brain, and so on) are handled transparently by Azure Service Fabric. The specific details of Azure Service Fabric are beyond the scope of this article, but you can find more information in the article Disaster recovery in Azure Service Fabric.
From the point of view of an application connected to an Azure SQL Managed Instance, there is no listener (as in an Availability Group implementation) or virtual network name (as in a SQL Server FCI); you connect to an endpoint through a gateway. This is an additional advantage, because the gateway is in charge of redirecting the connection to the primary node, or to a new node after a failover, so you don’t have to change the connection string. It provides the same functionality as the virtual name or listener, but more transparently. Also, notice in the diagram above that the gateways are redundant, which provides an additional level of availability.
Below is a diagram of the connection architecture, in this case using the Proxy connection type, which is the default:

In the Proxy connection type, the TCP session is established using the Gateway and all subsequent packets flow through it.
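To confirm basic connectivity through the gateway, you can test the proxy port from a machine that has network access to the instance, for example a VM in the same virtual network. Below is a minimal sketch; the host name is a placeholder for your instance’s endpoint, and 1433 is the port used by the Proxy connection type for the VNet-local endpoint:
# Placeholder host name; run from a machine that can reach the instance’s VNet.
Test-NetConnection -ComputerName "<mi-name>.<dns-zone>.database.windows.net" -Port 1433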
Storage
Regarding storage, we use the same “shared storage” concept as in an FCI, but with additional advantages. In a traditional on-premises FCI, the storage is a single point of failure: if something happens to the storage, the whole cluster solution goes down. One way customers can work around this problem is with block-replication technologies from storage (SAN) vendors, replicating the shared storage to another storage array, typically over a long distance for disaster recovery purposes. In SQL Managed Instance we provide this redundancy by using Azure Premium Storage for the data and log files with locally redundant storage (LRS), and by separating the backup files (following our best practices) into an Azure Standard storage account that is made redundant with read-access geo-redundant storage (RA-GRS). To learn more about the redundancy of backup files, take a look at the post Configuring backup storage redundancy in Azure SQL.
For performance reasons, the tempdb database is kept on a local SSD, where we provide 24 GB per allocated CPU vCore (for example, an instance with 8 vCores gets 192 GB of local tempdb space).
The following diagram illustrates this storage architecture:

It is worth mentioning that Locally Redundant Storage (LRS) replicates your data three times within a single data center in the primary region. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.

To find out more about redundancy in Azure Storage please see the following article in Microsoft documentation – https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy.
Failover
The failover process is very straightforward. A failover can be either “planned”, such as a user-initiated manual failover or a system-initiated failover that takes place because of a database engine or operating system upgrade, or “unplanned”, taking place due to failure detection (for example, a hardware, software, or network failure).
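For example, a user-initiated (planned) failover can be triggered with Azure PowerShell. Below is a minimal sketch, assuming the Az.Sql module, an authenticated session (Connect-AzAccount), and placeholder resource names:
# Placeholder resource group and instance names.
Invoke-AzSqlInstanceFailover -ResourceGroupName "my-resource-group" -Name "my-managed-instance"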
Regarding an “unplanned” or unexpected failover: when there are critical errors in the functioning of the Azure SQL Managed Instance, an API call is made to notify Azure Service Fabric that a failover needs to happen. The same happens when other errors, such as a faulty node, are detected. In this case, Azure Service Fabric moves the stateless sqlservr.exe process to another stateless compute node with sufficient free capacity. Data in Azure Blob storage is not affected by the move, and the data and log files are attached to the newly initialized sqlservr.exe process. After that, a recovery process is initiated on the databases. This process guarantees 99.99% availability, but a heavy workload may experience some performance degradation during the transition, since the new sqlservr.exe process starts with a cold cache.
Since a failover can occur unexpectedly, customers might need to determine whether such an event took place. They can find the timestamp of the last failover with the help of T-SQL, as described in the article How to determine the timestamp of the last SQL MI failover from the SQL MI how-to series.
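Because the sqlservr.exe process is re-initialized on the new node, the engine start time is a reasonable proxy for when the last failover (or restart) happened. Below is a minimal sketch using the SqlServer PowerShell module; the endpoint and credentials are placeholders, and the article linked above describes the full approach:
# Assumes the SqlServer module (Install-Module SqlServer); placeholder endpoint and credentials.
Invoke-Sqlcmd -ServerInstance "<mi-name>.<dns-zone>.database.windows.net" -Database "master" `
    -Username "<admin-login>" -Password "<password>" `
    -Query "SELECT sqlserver_start_time FROM sys.dm_os_sys_info;"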
Also, you could see the Failover event listed in the Activity Log using the Azure Portal.
Below is a diagram of the failover process:

As you can see from the diagram above, the failover process introduces a brief moment of unavailability while a new node from the pool of spare nodes is allocated. To minimize the impact of a failover, you need to incorporate retry logic in your application. This is normally accomplished by detecting the transient errors raised during a failover (4060, 40197, 40501, 40613, 49918, 49919, 49920, 11001) within a try-catch block of code, waiting a couple of seconds, and then retrying the connection. Alternatively, you can use the Microsoft.Data.SqlClient v3.0 Preview NuGet package in your application, which already incorporates retry logic. To learn more about this driver, see the article Introducing Configurable Retry Logic in Microsoft.Data.SqlClient v3.0.0.
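As an illustration, below is a minimal retry sketch in PowerShell, assuming Windows PowerShell (where System.Data.SqlClient is available by default) and placeholder connection-string values; in a real application you would typically rely on your data access layer or Microsoft.Data.SqlClient instead:
# Placeholder connection string; replace the bracketed values with your own.
$connectionString = "Server=<mi-name>.<dns-zone>.database.windows.net;Initial Catalog=<database>;User ID=<login>;Password=<password>;"
$transientErrors  = 4060, 40197, 40501, 40613, 49918, 49919, 49920, 11001
$maxRetries = 5
for ($attempt = 1; $attempt -le $maxRetries; $attempt++) {
    try {
        $connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
        $connection.Open()
        # ... run the workload here ...
        $connection.Close()
        break    # connected and finished; stop retrying
    }
    catch {
        # Unwrap the error to find the underlying SqlException, if any.
        $ex = $_.Exception
        while ($ex -and -not ($ex -is [System.Data.SqlClient.SqlException])) { $ex = $ex.InnerException }
        if ($ex -and ($transientErrors -contains $ex.Number) -and $attempt -lt $maxRetries) {
            Start-Sleep -Seconds 5    # transient failover error: wait, then retry the connection
        }
        else {
            throw    # non-transient error, or retries exhausted
        }
    }
}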
Notice that currently only one failover call is allowed every 15 minutes.
In this article we have introduced the concepts of high availability and explained how it is implemented for the General Purpose service tier. In the second part of this series, we will cover high availability in the Business Critical service tier.
by Contributed | Apr 28, 2022 | Technology
This article is contributed. See the original author and article here.
On November 10, 2020, we announced the first preview of Az.Tools.Predictor, a PowerShell module that suggests Azure cmdlets to use, along with parameters and suggested values.
Today, we are announcing the general availability of Az.Tools.Predictor.
How it all started
During a study about a new module for Azure, I was surprised to see how difficult it was for participants to find the correct cmdlet to use. Later, while summarizing the learnings from the study, I thought it would be great if we could have a solution that helped people find the right cmdlet to use.
At the same time, we were starting to work on Predictive IntelliSense in PowerShell, and after a couple of meetings with Jason Helmick, it became clear that this would be a great mechanism to address the challenge I had seen a few days before by providing, right in the command line, suggestions about which cmdlet to use.
We quickly thought that some form of AI could help provide accurate recommendations, so we involved Roshanak, Yevhen, and Maoliang from our data science team to work with us on building an engine that would provide recommendations for PowerShell cmdlets based on the user’s context.
Behind the scenes
Once a functional prototype was built, we wanted to confirm its usability before considering any public previews.
For our team, usability is important. Over time, certain key combinations become a reflex, and we knew we had to fit into that existing muscle memory and feel intuitive in PowerShell. For predictors to be successful, we organized several usability studies with prototypes of Az Predictor and made several improvements as a result, such as the color of the suggested text and the key combinations used to accept or navigate among predictions.
One of our initial prototypes used the color scheme below. We wanted a clear color-based differentiation between typed characters and suggestions, hoping this would help users navigate the suggestions. We worked with our design team to address the issues and evolve toward the current design.

We also evaluated whether the information provided in the suggestions is helpful. Below is another of our early designs. By listening to our customers and observing how they use the tool, we learned that showing cmdlets first, then parameters and associated sample values, was not as useful as showing the full command line without using more space in the terminal, which is our current design.

Over the last months we have released a few previews (read about preview 5) to stabilize the module as PowerShell and PSReadLine, which we depend on, became stable. We have also improved our model based on the feedback we have collected and addressed the issues that were reported.
Getting started
We would like to invite you to try the stable version of Az.Tools.Predictor.
To get started, follow these steps:
- Install or upgrade PowerShell v7.2
https://docs.microsoft.com/powershell/scripting/install/installing-powershell
- Install or upgrade PSReadLine 2.2
Install-Module -Name PSReadLine -Force
- Install or upgrade Az.Tools.Predictor
Install-module -name Az.Tools.Predictor -Force
- Enable Az Predictor
Enable-AzPredictor -AllSession
Once installed, it is recommended that you restart your PowerShell sessions.
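If you are curious what enabling the predictor amounts to, the configuration is roughly equivalent to loading the plugin and pointing PSReadLine at it. This is a sketch of the manual steps, not the documented internals of Enable-AzPredictor:
# Load the predictor plugin into the current session.
Import-Module Az.Tools.Predictor
# Combine history-based and plugin-based predictions in PSReadLine.
Set-PSReadLineOption -PredictionSource HistoryAndPlugin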
For more details, visit the Az Predictor documentation page: https://docs.microsoft.com/powershell/azure/az-predictor
Inline view mode (default)
Once enabled, the default view is the “inline view” as shown in the following screen capture:


This mode shows only one suggestion at a time. The suggestion can be accepted by pressing the right arrow or you can continue to type. The suggestion will dynamically adjust based on the text that you have typed.
You can accept the suggestion at any time then come back and edit the command that is on your prompt.
List view mode
This is my favorite mode!
Switch to this view either by pressing the F2 function key on your keyboard or by running the following command:
Set-PSReadLineOption -PredictionViewStyle ListView
This mode shows, below your current prompt, a list of possible matches for the command that you are typing. It combines suggestions from your local history and from Az Predictor.
Select a suggestion and then navigate through the parameter values with “Alt + a” to quickly replace the proposed values with your own.

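To switch back to the default inline view, press F2 again (it toggles between the two views) or run:
Set-PSReadLineOption -PredictionViewStyle InlineView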
Next steps
This is just the beginning of our journey to improving the usability of Azure PowerShell!
We will be listening carefully to all the feedback that you send us.
We will share soon more about how we plan to expand this experience to other environments.
Credits
“It takes a village to raise a child.” Az.Tools.Predictor is the result of the close collaboration of several teams distributed across continents and time zones, working hard during the pandemic.
by Contributed | Apr 28, 2022 | Business, Microsoft 365, Technology
This article is contributed. See the original author and article here.
This month, we’re adding new capabilities to make everyone more comfortable in meetings, feel empowered in the diverse hybrid workplace, and be able to switch devices more easily.
The post From intelligent tools built on inclusivity to the latest in Windows—here’s what’s new in Microsoft 365 appeared first on Microsoft 365 Blog.
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
by Contributed | Apr 27, 2022 | Technology
This article is contributed. See the original author and article here.
On March 15, Synaptiq and Microsoft issued a press release announcing a new Machine Vision pilot program for hospitals. In collaboration with Microsoft, Synaptiq built a demo solution to proactively inform care teams of potential Central Line dressing compliance issues.
The pilot program is designed to help reduce preventable injuries from hospital-borne Central Line-Associated Bloodstream Infections (CLABSIs) and improve speed of care and patient outcomes. It also helps providers standardize care for new and existing staff, identify education opportunities, and decrease documentation time.
According to the NIH, CLABSIs are largely preventable infections that occur in more than 400,000 patients annually in the United States alone, resulting in over 28,000 deaths and costing U.S. hospitals $2 billion. A key piece of preventing CLABSIs is maintaining Central Line dressings as clean and intact as possible.
Machine vision is a type of artificial intelligence (AI) that enables computers to derive information from visual inputs. It is able to collect more precise visual data than human vision ever could, and uses processing power to analyze the visual data faster and more thoroughly than the human mind.
Because visual cues play such a vital role in ensuring patient safety and preventing CLABSIs, machine vision has the potential to exponentially enhance care teams’ ability to recognize and respond to possible infections – before the human eye can even detect a problem is present.
I am truly excited to provide our Voices of Healthcare viewers with a first-look at this incredibly important pilot. I had the opportunity to assist in building the demo solution alongside Synaptiq and cannot wait to see how it helps save many, many lives in the years to come.
For this session on May 11, 2022, Synaptiq’s CEO Stephen Sklarew and Mariana Gattegno, Quality and Patient Safety consultant at Volpini Solutions LLC, will discuss the current status of Central Line dressing maintenance in hospitals today, review the pilot program details, and demo the solution. They will also answer questions and discuss how hospitals joining this effort will benefit.

Synaptiq’s solution to assess Central Line dressings using Microsoft Technologies
Synaptiq’s Machine Vision Pilot Program for Central Line Dressing Maintenance is an example of how Microsoft Cloud for Healthcare can rapidly deliver a machine vision application that works seamlessly with care teams to help provide superior patient experiences.
We see many benefits, such as:
- Hospitals in the pilot program will have an exclusive early adopter opportunity to test the solution first-hand, and their care teams will be able to help design the future solution that best meets their needs.
- It is powered by Microsoft Cloud for Healthcare and leverages many Microsoft technologies that are already licensed by most major hospital systems in the United States.
- Three applications are part of the solution and support this process: the Central Line Assessment app (Microsoft Power Apps), the CLABSI Prevention Team (Microsoft Teams), and the Central Line Maintenance dashboard (Microsoft Power BI).
- The Central Line Assessment app runs on a smartphone for convenient bedside access and is used to capture and analyze photos of patients’ dressings. If a potential compliance issue is identified, the care team is alerted to take action. Over time, information from the Central Line Assessment app and the patient’s medical record accumulates in the provider’s electronic medical record (EMR) system, and the Central Line Maintenance dashboard provides canned reports and ad hoc analysis capabilities to identify trends.
- Most importantly, Synaptiq’s Pilot Program for this solution is an example of how Microsoft Cloud for Healthcare can rapidly deliver a machine vision application that works seamlessly with care teams to help provide superior patient experience – and help save lives.
Come join us to hear how this hospital pilot program will work and how your organization can get involved.
This session will be on May 11th at 11:00 PT / 12:00 MT / 1:00 CT / 2:00 ET.
Please click here to join or download the calendar invite here
As always, we will record the session and post the recording afterward for future consumption. We have a new landing page for this series, so favorite or follow https://aka.ms/VoicesofHealthcareCloud to make sure you never miss a future session.
Please follow aka.ms/HLSBlog for all this great content.
by Contributed | Apr 26, 2022 | Technology
This article is contributed. See the original author and article here.
You’ve probably been told that Azure Synapse is just for very large data projects, and it is true that it is designed for limitless storage and extremely powerful compute. But there are ways to start with smaller datasets and grow from there by integrating new data engines into the workspace. In this episode of Data Exposed: MVP Edition with Armando Lacerda and Anna Hoffman, you will learn how to tailor Synapse to your data volume profile and position your cloud data pipeline for growth and expansion when needed.
About Armando Lacerda:
Armando Lacerda has been a computer geek for more than 30 years. He has worked with SQL Server since version 6.5, Azure SQL DB since 2010, and Azure SQL DW / Synapse Dedicated SQL pool since 2017. As an independent contractor he has helped multiple companies adopt cloud technologies and implement data pipelines at scale. Armando also contributes to multiple local user groups around the San Francisco Bay Area and around the world. He has presented at multiple conferences on data platform topics as well as Microsoft certification prep. You can also find him riding his motorcycle up and down Highway 1.
About MVPs:
Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community. They are always on the “bleeding edge” and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products and solutions, to solve real-world problems. MVPs make up a global community of over 4,000 technical experts and community leaders across 90 countries/regions and are driven by their passion, community spirit, and quest for knowledge. Above all and in addition to their amazing technical abilities, MVPs are always willing to help others – that’s what sets them apart. Learn more: https://aka.ms/mvpprogram
Resources:
Linked services in Azure Data Factory and Azure Synapse Analytics
Create an Azure AD user from an Azure AD login in SQL Managed Instance
Configure and manage Azure AD authentication with Azure SQL