by Contributed | Apr 14, 2023 | Technology

Migrating databases. When your databases anchor applications, employees, and customers, migration plunks a big rock in the pond that is your business, creating lasting ripples. Ripples that force thoughtful, coordinated action across business units to prevent outages. Add moving to the cloud into the equation and the potential tide of complications rises.
In the past, undertaking migration often required you to refactor your databases to make them compatible with cloud database servers. Security policies needed thorough analysis and updating for cloud models. Factor in potential downtimes, and migration once loomed as a labor-intensive hit to business productivity, making the move of an on-premises SQL Server to the cloud an ROI evaluation against heavy-lift tradeoffs.
Azure SQL Managed Instance eliminates these tradeoffs with the broadest SQL Server engine compatibility offering available. Because of the compatibility level, Azure SQL Managed Instance doesn’t require you to compromise much, if anything, to deploy SQL Server within a fully managed Platform as a Service (PaaS) environment. The SQL Server 2022 Enterprise Edition and Azure SQL Managed Instance engines are almost one hundred percent identical, making it possible to migrate both ways. You don’t need to refactor databases or risk negative performance impacts to migrate to Azure SQL Managed Instance. Azure SQL Managed Instance is also compatible with earlier SQL Server editions.
Additionally, the storage engine format alignment between Azure SQL Managed Instance and SQL Server 2022 provides an easy way to copy or move databases from Azure SQL Managed Instance to a SQL Server 2022 instance. A backup taken from Azure SQL Managed Instance can be restored on SQL Server 2022. The standalone SQL Server can be hosted on-premises, on virtual machines in Azure, or in other clouds. This puts you in control of your data, ensuring data mobility regardless of SQL Server location.
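For illustration, a minimal sketch of that round trip follows; the storage URL and database name are placeholders, and a credential for the storage container must already exist on both instances:

-- On Azure SQL Managed Instance: take a copy-only backup to Azure Blob Storage.
BACKUP DATABASE [MyDatabase]
TO URL = 'https://mystorage.blob.core.windows.net/backups/MyDatabase.bak'
WITH COPY_ONLY;

-- On SQL Server 2022 (on-premises, Azure VM, or another cloud): restore it.
RESTORE DATABASE [MyDatabase]
FROM URL = 'https://mystorage.blob.core.windows.net/backups/MyDatabase.bak';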
Compatibility between the Azure SQL Managed Instance and SQL Server 2022 engines extends to the database engine settings and the database settings. As with the standalone version of SQL Server, with Azure SQL Managed Instance, you decide what server configuration best serves your business needs. To get started with Azure SQL Managed Instance, you choose the performance tier, service tier, and the reserved compute, along with data and backup storage options, that make the most sense for your applications and your data.
Since Azure SQL Managed Instance is built on the latest SQL Server engine, it’s always up to date with the latest features and functionality, including online operations, automatic plan corrections, and other enterprise performance enhancements. A comparison of the available features is explained in Feature comparison: Azure SQL Managed Instance versus SQL Server.
Azure SQL Managed Instance offers two performance tiers:
- General Purpose - for applications with typical performance and I/O latency requirements; includes built-in High Availability (HA).
- Business Critical - for applications requiring low I/O latency and higher HA requirements. It provides two non-readable secondaries and one readable secondary along with the readable/writable primary replica. The readable secondary allows you to distribute reporting workloads off your primary.
Once you’re running on Azure SQL Managed Instance, changing your service tier (CPU vCores or reserved storage changes) occurs online and incurs little to no downtime. To optimize performance of transaction processing, data ingestion, data load, and transient data, leverage In-Memory OLTP, available in the Business Critical tier.
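As a sketch of the latter (the table and column names are hypothetical), a memory-optimized table on the Business Critical tier looks like any other SQL Server memory-optimized table; no filegroup setup is needed because the instance comes pre-configured:

-- Hypothetical memory-optimized table for transient session data.
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(1000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);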
Migrate from your on-premises SQL Server to Azure SQL Managed Instance with ease. Leverage the fully automated Azure Database Migration Service or set up an Azure SQL Managed Instance link. The link feature uses an expanded version of distributed availability groups to extend your on-premises SQL Server availability group to Azure SQL Managed Instance safely, replicating data in near real-time. With the link feature, you can migrate, test, and then perform a simple, controlled failover.
While Azure SQL Managed Instance provides nearly one hundred percent compatibility with SQL Server, you will notice some changes in the transition from SQL Server standalone editions. These differences are based on architectural dissimilarities. Certain SQL features (audits, for instance) operate in a fashion that optimizes cloud architecture. Cloud architecture is designed to maximize resource utilization and minimize costs while ensuring high levels of availability, reliability, and security. Azure architecture leverages resource sharing while guaranteeing security and isolation. This resource sharing provides you with a flexible environment that can scale rapidly in response to customer needs. Because high availability is built into Azure SQL Managed Instance, it cannot be configured or controlled by users as it can be in SQL Server 2022.
Explore the differences between SQL Server 2022 Enterprise Edition and Azure SQL Managed Instance.
What’s more, Azure SQL Managed Instance is backed by Intel® Xeon® Scalable processors, ensuring that your environment is performant and secure from the silicon layer. With 8 to 40 powerful cores and a wide range of frequency, feature, and power levels, Intel® Xeon® Scalable processors are part of an end-to-end solution for your data.
Come delve into Azure SQL Managed Instance. The depth and breadth of SQL engine compatibility provides you with a safe, simple, full-featured, and flexible migration path. Azure SQL Managed Instance puts you in complete control of your data, your databases, your performance, and your business. Microsoft’s continued commitment to improvement means you can take advantage of the benefits of the cloud with Azure SQL Managed Instance and modernize your on-premises SQL Server databases.
Dive deeper into the benefits of migrating to Azure SQL Managed Instance. Check out the on-demand recording, Modernize Your Apps with Azure SQL Managed Instance: https://www.mssqltips.com/sql-server-video/932/modernize-your-apps-with-azure-sql-managed-instance/
by Contributed | Apr 13, 2023 | Technology
Modern applications require the capability to retrieve modified data from a database in real time to operate effectively. Usually, developers need to create a customized tracking mechanism in their applications, utilizing triggers, timestamp columns, and supplementary tables, to identify changes in data. Developing such a mechanism typically requires significant effort, and the schema changes it demands can introduce considerable performance overhead.
Real-time data processing is a crucial aspect of nearly every modern data warehouse project. However, one of the biggest hurdles in real-time processing solutions is the ability to efficiently and effectively ingest, process, and store messages, particularly when dealing with high volumes of data. To ensure optimal performance, processing must be conducted in a manner that does not interfere with the ingestion pipeline. In addition to non-blocking processing, the data store must be capable of handling high-volume writes. Further challenges include acting on the data quickly, such as generating real-time alerts or updating dashboards in real time or near real time. In many cases, the source systems utilize traditional relational database engines, such as MySQL, that do not offer event-based interfaces.
In this series of blog posts, we will introduce an alternative solution that utilizes Debezium, an open-source tool, to perform Change Data Capture (CDC) from Azure Database for MySQL – Flexible Server. Kafka Connect writes these changes to Azure Event Hubs, Azure Stream Analytics performs real-time analytics on the data stream and writes to Azure Data Lake Storage Gen2 for long-term storage, and further analysis with Azure Synapse serverless SQL pools provides insights through Power BI.
Azure Database for MySQL – Flexible Server is a cloud-based solution that provides a fully managed MySQL database service. This service is built on top of Azure’s infrastructure and offers greater flexibility. MySQL uses binary log (binlog) to record all the transactions in the order in which they are committed on the database. This includes changes to table schemas as well as changes to the rows in the tables. MySQL uses binlog mainly for purposes of replication and recovery.
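To confirm that binary logging is active and row-based on your server, you can run these standard MySQL statements (a quick sanity check, not specific to Azure):

-- ON when binary logging is enabled (the default on Flexible Server).
SHOW VARIABLES LIKE 'log_bin';
-- ROW format is what Debezium expects.
SHOW VARIABLES LIKE 'binlog_format';
-- Lists the binlog files currently retained.
SHOW BINARY LOGS;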
Debezium is a powerful CDC (Change Data Capture) tool that is built on top of Kafka Connect. It streams the binlog and produces change events for row-level INSERT, UPDATE, and DELETE operations from MySQL into Kafka topics in real time, leveraging the capabilities of Kafka Connect. This allows users to efficiently query only the changes since the last synchronization and upload those changes to the cloud. After this data is stored in Azure Data Lake Storage, it can be processed using Azure Synapse serverless SQL pools. Business users can then monitor, analyze, and visualize the data using Power BI.
Solution overview
This solution entails ingesting MySQL data changes from the binary logs and converting the changed rows into JSON messages, which are subsequently sent to Azure Event Hub. After the messages are received by the Event Hub, an Azure Stream Analytics (ASA) Job distributes the changes into multiple outputs, as shown in the following diagram.

End-to-end serverless streaming platform with Azure Event Hubs for data ingestion
Components and Services involved
The following services are used in this blog post to stream the changes from Azure Database for MySQL to Power BI.
- A Microsoft Azure account
- An Azure Database for MySQL Flexible server
- A Virtual Machine running Ubuntu 20.04
- Kafka release (version 1.1.1, Scala version 2.11), available from kafka.apache.org
- Debezium 1.6.2
- An Event Hubs namespace
- Azure Stream Analytics
- Azure Data Lake Storage Gen2
- Azure Synapse Serverless SQL pools
- A Power BI workspace
Dataflow
The following steps outline the process to set up the components involved in this architecture to stream data in real time from the source Azure Database for MySQL Flexible Server.
- Provisioning and configuring Azure Database for MySQL – Flexible Server & a Virtual Machine
- Configure and run Kafka Connect with a Debezium MySQL connector
- Reading CDC Messages Downstream from Azure Event Hub and capture data in an Azure Data Lake Storage Gen2 account in Parquet format
- Create External Table with Azure Synapse Serverless SQL Pool
- Use Serverless SQL pool with Power BI Desktop & create a report.
- Build real-time dashboard with Power BI dataset produced from Stream Analytics
Each of the above steps is outlined in detail in the upcoming sections.
Prerequisites
Provisioning and configuring Azure Database for MySQL – Flexible Server & a Virtual Machine
Before proceeding to the next step, create an Azure Database for MySQL Flexible Server instance and a Virtual Machine by performing the following steps:
- Create an instance of Azure Database for MySQL – Flexible Server
- Under the Server parameters blade, configure the binlog_expire_logs_seconds parameter as per your requirements (for example, 86400 seconds for 24 hours) to make sure that binary logs are not purged too quickly. For more information, see How to Configure server parameters.
- Under the same Server parameters blade, set the binlog_row_image parameter to FULL.
- Use a command line client or download and install MySQL Workbench or another third-party MySQL client tool to connect to the Azure Database for MySQL Flexible Server.
- Create an Azure VM running Ubuntu 20.04 in the same resource group.
- Maintain enough disk space on the Azure VM to copy binary logs remotely.
- For this example, an “orders_info” table has been created in Azure Database for MySQL Flexible Server (a hypothetical definition is sketched below).
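The post doesn’t include the schema of orders_info; a minimal hypothetical definition might look like this:

-- Hypothetical schema for the example table.
CREATE TABLE orders_info (
    order_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id INT NOT NULL,
    order_status VARCHAR(20),
    order_total DECIMAL(10, 2),
    order_date DATETIME DEFAULT CURRENT_TIMESTAMP
);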

Configure and run Kafka Connect with a Debezium MySQL connector
Change Data Capture (CDC) is a technique used to track row-level changes in database tables in response to insert, update, and delete operations. Debezium is a distributed platform that provides a set of Kafka Connect connectors that convert these changes into event streams and send those events to Apache Kafka.
To set up Debezium & Kafka on a Linux Virtual Machine, follow the steps outlined in: CDC in Azure Database for MySQL – Flexible Server using Kafka, Debezium, and Azure Event Hubs – Microsoft Community Hub
Reading CDC Messages Downstream from Event Hub and capture data in an Azure Data Lake Storage Gen2 account in Parquet format
Azure Event Hubs is a fully managed Platform-as-a-Service (PaaS) data streaming and event ingestion platform, capable of processing millions of events per second. Event Hubs can process and store events, data, or telemetry produced by distributed software and devices. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters. Azure Event Hubs provides an Apache Kafka endpoint on an event hub, which enables users to connect to the event hub using the Kafka protocol.
Configure a job to capture data
Use the following steps to configure a Stream Analytics job to capture data in Azure Data Lake Storage Gen2.
- In the Azure portal, navigate to your event hub.
- Select the event hub created for the “orders_info” table.
- Select Features > Process Data, and then select Start on the Capture data to ADLS Gen2 in Parquet format card.

3. Enter a name to identify your Stream Analytics job. Select Create.


4. Specify the Serialization type of your data in the Event Hubs and the Authentication method that the job will use to connect to Event Hubs. Then select Connect.

5. When the connection is established successfully, you’ll see:
- Fields that are present in the input data. You can choose Add field, or you can select the three-dot symbol next to a field to optionally remove or rename it.
- A live sample of incoming data in the Data preview table under the diagram view. It refreshes periodically. You can select Pause streaming preview to view a static view of the sample input.

- Select the Azure Data Lake Storage Gen2 tile to edit the configuration.
- On the Azure Data Lake Storage Gen2 configuration page, follow these steps:
a. Select the subscription, storage account name and container from the drop-down menu.
b. After the subscription is selected, the authentication method and storage account key should be automatically filled in.
c. For streaming blobs, the directory path pattern is expected to be a dynamic value. It’s required for the date to be a part of the file path for the blob – referenced as {date}. To learn about custom path patterns, see Azure Stream Analytics custom blob output partitioning.

d. Select Connect
- When the connection is established, you’ll see fields that are present in the output data.
- Select Save on the command bar to save your configuration.
- On the Stream Analytics job page, under the Job Topology heading, select Query to open the Query editor window.
- To test your query with incoming data, select Test query.
- After the events are sampled for the selected time range, they appear in the Input preview tab.

- Stop the job before you make any changes to the query for any desired output. In many cases, your analysis doesn’t need all the columns from the input stream. You can use a query to project a smaller set of returned fields than in the pass-through query (see the sketch after these steps).
- When you make changes to your query, select Save query to test the new query logic. This allows you to iteratively modify your query and test it again to see how the output changes.

- After you verify the results, you’re ready to Start the job.
- Select Start on the command bar to start the streaming flow to capture data. Then in the Start Stream Analytics job window:
- Choose the output start time.
- Select the number of Streaming Units (SU) that the job runs with. SU represents the computing resources that are allocated to execute a Stream Analytics job. For more information, see Streaming Units in Azure Stream Analytics.
- In the Choose Output data error handling list, select the behavior you want when the output of the job fails due to data error. Select Retry to have the job retry until it writes successfully or select another option.

- Verify that the Parquet files are generated in the Azure Data Lake Storage container.
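As an illustration of projecting a smaller field set, a Stream Analytics query along these lines (the input, output, and column names are hypothetical) would replace the default pass-through query:

-- Keep only the fields the downstream analysis needs.
SELECT
    order_id,
    order_status,
    order_total
INTO
    [adls-parquet-output]
FROM
    [orders-eventhub-input]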

Create External Table with Azure Synapse Serverless SQL Pool
- Navigate to the Azure Synapse Analytics Workspace. Select Data -> Linked -> Navigate to the ADLS Gen2 (folder path)
- Select the file that you would like to create the external table from and right click -> New SQL Script -> Create External table

3. In the New External Table, change Max string length to 250 and continue

4. A dialog window opens. Select or create a new database, provide a database table name, and select Open script.

5. A new SQL script opens; run it against the database to create a new external table.
6. Note where the table points: you can point only to a folder, not to individual files.
7. Point to the enriched folder in Data Lake Storage.

8. Save all the work by clicking Publish All
9. Verify the external table created in Data -> Workspace -> SQL Database

External tables encapsulate access to files, making the querying experience almost identical to querying local relational data stored in user tables. Once the external table is created, you can query it just like any other table:
SELECT TOP 100 * FROM dbo.orders_info
GO
SELECT COUNT(*) FROM dbo.orders_info
GO
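For reference, the script Synapse generates typically resembles the following sketch; the data source, file format, location, and column definitions below are placeholders that Synapse fills in from the sampled file:

-- Parquet file format (created once per database).
IF NOT EXISTS (SELECT * FROM sys.external_file_formats WHERE name = 'SynapseParquetFormat')
    CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat] WITH (FORMAT_TYPE = PARQUET);

-- Data source pointing at the ADLS Gen2 container (placeholder URL).
CREATE EXTERNAL DATA SOURCE [enriched_datalake]
WITH (LOCATION = 'https://<storage-account>.dfs.core.windows.net/<container>');

-- External table over the enriched folder (hypothetical columns).
CREATE EXTERNAL TABLE dbo.orders_info
(
    order_id INT,
    order_status VARCHAR(250),  -- matches the Max string length chosen earlier
    order_total DECIMAL(10, 2)
)
WITH (
    LOCATION = 'enriched/**',   -- a folder, not a single file
    DATA_SOURCE = [enriched_datalake],
    FILE_FORMAT = [SynapseParquetFormat]
);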
Use serverless SQL pool with Power BI Desktop & create a report
- Navigate to Azure Synapse Analytics Workspace. Starting from Synapse Studio, click Manage.

2. Under External Connections, click Linked services. Click + New. Click Power BI and click Continue.

3. Enter a name for the linked service and select the existing Power BI workspace that you want to use to publish. The Power BI linked connection then appears under that name.
4. Click Create.

5. View Power BI workspace in Synapse Studio
- After your workspaces are linked, you can browse your Power BI datasets and edit or create Power BI reports from Synapse Studio.
- Navigate to the Develop hub. The Power BI linked service you created appears here.
- Expand Power BI and the workspace you wish to use.

6. New reports can be created by clicking + at the top of the Develop tab. Existing reports can be edited by clicking the report name. Any saved changes will be written back to the Power BI workspace.

Summary
Overall, Debezium, Kafka Connect, Azure Event Hubs, Azure Data Lake Storage, Azure Stream Analytics, Synapse SQL Serverless, and Power BI work together to create a comprehensive, end-to-end data integration, analysis, and visualization solution that can handle real-time data streams from databases, store them in a scalable and cost-effective manner, and provide insights through a powerful BI tool.
To learn more about the services used in this post, check out the following resources:
by Contributed | Apr 12, 2023 | Technology
As a result of the March 1st order from the Government of Egypt, Daylight Saving Time (DST) in the Arab Republic of Egypt will resume from the last Friday of April. Hence, clocks will be set forward by an hour at 12:00 AM on April 28, 2023.
The April 2023 Monthly quality update for Windows includes the following time zone update for Egypt:
- Clocks will be set forward by an hour at 12:00 a.m. on April 28, 2023, for the Egypt time zone.
Here are the KB article numbers that contain the Egypt DST fix for different versions of Windows:
For Microsoft’s official policy on DST and time zone changes, please see Daylight saving time help and support. For information on how to update Windows to use the latest global time zone rules, see How to configure daylight saving time for Microsoft Windows operating systems.
by Contributed | Apr 11, 2023 | Technology
Today, I worked on a service request in which our customer was trying to change ALLOW_SNAPSHOT_ISOLATION to OFF. We found that this process was taking too much time, and I would like to share my lesson learned here.
Based on the documentation for sys.dm_db_wait_stats (Azure SQL Database) – SQL Server | Microsoft Learn, DISABLE_VERSIONING “Occurs when SQL Server polls the version transaction manager to see whether the timestamp of the earliest active transaction is later than the timestamp of when the state started changing. If this is the case, all the snapshot transactions that were started before the ALTER DATABASE statement was run have finished. This wait state is used when SQL Server disables versioning by using the ALTER DATABASE statement.”
So, I understand that the change will wait if I have a previous transaction open before running alter database [jmjuradotestdb2] SET ALLOW_SNAPSHOT_ISOLATION off, even if the transaction is occurring in another database in the same Elastic Pool.
In order to reproduce the issue, let’s run a transaction without closing it in the database jmjuradotestdb1.
begin transaction
create table table1 (id int)
insert into table1 (id) values (1)
In another session, connected to master, execute the following query to turn off snapshot isolation.
alter database [jmjuradotestdb2] SET ALLOW_SNAPSHOT_ISOLATION off
Once I executed the second query, I started monitoring it by running: select * from sys.dm_os_waiting_tasks where wait_type = 'DISABLE_VERSIONING'

Every 2 seconds (more or less), SQL Server re-checks whether the previous transactions have closed. Even when I run Lesson Learned #22: How to identify blocking issues? – Microsoft Community Hub, I’m not able to see any blocking issues.
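What does reveal the culprit is querying the transaction DMVs directly; here is a sketch that lists the sessions with open transactions, oldest first:

-- Sessions with an open transaction and when each transaction began.
SELECT s.session_id,
       s.login_name,
       a.transaction_begin_time
FROM sys.dm_tran_session_transactions AS t
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = t.session_id
JOIN sys.dm_tran_active_transactions AS a
    ON a.transaction_id = t.transaction_id
ORDER BY a.transaction_begin_time;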
So, once the previous transaction is committed, we are going to be able to change the status of the database.
Enjoy!
by Contributed | Apr 10, 2023 | Technology
The connections established to Azure SQL Database from applications or client tools may be unexpectedly terminated and impact user environments due to internal (system) maintenance work, client networking, application-related, or other health-related issues. In this blog post, we will cover recommended steps to identify connection failures to your database and best practices to handle these failures using the resources and tools available for Azure SQL Database.
Understanding the Connection Problem!
When it comes to understanding a problem, we need to know where the issue is occurring: on the client side, at the network layer, or in the SQL database itself. Most of the time, the error messages the customer receives will tell exactly where the problem is, and that’s the best place to start.
When you have issues accessing a database, the best place to start is the Azure Resource Health blade. Resource Health gives you a personalized dashboard of the health of your resources, including your SQL database. Currently, Resource Health for your SQL Database resource examines login failures due to system errors.
The health of a resource is displayed as one of the following statuses.
A status of Available (informational) means that Resource Health has not detected login failures due to system errors on your SQL resource.
This is an informational message. It doesn’t indicate a problem but provides interesting information to an operator, such as successful completion of a regular process.
A status of Degraded (warning) means that Resource Health has detected many successful logins, but some failures as well. These are most likely transient login errors.
The health status of Unknown (warning) indicates that Resource Health hasn’t received information about this resource for more than 10 minutes. Although this status isn’t a definitive indication of the state of the resource, it is an important data point in the troubleshooting process. If the resource is running as expected, the status of the resource will change to Available after a few minutes. If you’re experiencing problems with the resource, the Unknown health status might suggest that an event in the platform is affecting the resource.
Degraded and Unknown statuses are warnings and can be ignored if customer availability is not impacted.
A status of Unavailable (Critical) means that Resource Health has detected consistent login failures to your SQL resource. If your resource remains in this state for an extended period, contact support.
This is a critical message indicating loss of service or application availability or severe degradation of performance. Requires immediate attention.
The image below is an example of what you see if the database is reported as degraded in the Resource Health blade.

SQL Connectivity Issues
Looking at the SQL side of connectivity issues, we often see customers not able to reach their database because of an availability issue.
This situation manifests itself in the form of the following error message.
Database x on server y is not currently available (Error 40613)
Most of the time, these issues are transient, meaning the underlying cause soon resolves itself. An occasional cause of transient errors is when there are unexpected events, such as software crash or a hardware failure that might cause failover or if the Azure system quickly shifts hardware resources to better load-balance various workloads. Most of these reconfiguration events finish in less than 60 seconds. During this reconfiguration time span, you might have issues with connecting to your Azure SQL Database.
Other common causes include planned maintenance to deploy software upgrades and other system enhancements. This usually occurs fewer than two times a month. One can enable Advance Notifications (preview) to be sent up to 24 hours before any planned event.
Applications that connect to your database should be built to expect these transient errors. To handle them, implement Retry Logic in the code instead of surfacing them to users as application errors.
In some situations, the customer uses the retry logic, but still sees the error messages. The reason is that they do not follow the principles of retry.
Principles for retry
- SQL SELECT statement should be retried after a fresh connection has been established.
- Logic must ensure that the entire database transaction either completed or was rolled back.
- Applications using 3rd-party middleware – confirm that retry logic is enabled by the vendor.
- Minimum 5 secs interval between retries. *
* When connection pooling is enabled, and if a timeout error or other login error occurs, an exception will be thrown and subsequent connection attempts will fail for the next five seconds, the “blocking period”. If the application attempts to connect within the blocking period, the first exception will be thrown again. Subsequent failures after a blocking period ends will result in a new blocking period that is twice as long as the previous blocking period, up to a maximum of one minute.
For a discussion of the blocking period for clients that use ADO.NET, see Connection pooling (ADO.NET).
Sometimes transient issues can become non-transient, where we lose connections for a few minutes or a few hours.
For example: a planned or unplanned event interrupted a long-running transaction (such as a large bulk insert operation or an index build operation against a large table). Chances are it will take longer to perform the recovery operations. The longer the recovery operation, the longer the availability issue the customer is going to face. Connectivity can only resume after the recovery is completed.
Another scenario, which customers experience is resource limit being reached, where the request hits the threshold limit assigned for the service tier.
This is usually seen in the DTU-based service tiers, where Azure SQL Database limits the number of concurrent sessions to the database and the workload has increased.
We can monitor the session count using Azure Metrics as shown below

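If you prefer T-SQL over the portal, a rough equivalent (a sketch using standard DMVs) shows the current session count and how close the database is to its session and worker limits:

-- Current user sessions against the database.
SELECT COUNT(session_id) AS user_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

-- Recent session/worker utilization as a percentage of the service tier limit.
SELECT TOP 5 end_time, max_session_percent, max_worker_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;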
If your application uses connection pooling, a slowdown in query response time might cause a constant rate of front-end requests to require more back-end database connections. If a live incident is ongoing where the worker limit has been approached or reached, you may receive Error 10928 when you connect using SQL Server Management Studio (SSMS) or Azure Data Studio. One session can connect using the Diagnostic Connection for Database Administrators (DAC) even when the maximum worker threshold has been reached.
DAC was built to help you connect and run basic troubleshooting queries in cases of serious performance problems. It allows us to connect to the database even after the limit has been reached.
Note: Only one Dedicated admin connection is available for us to log in and check the database.
You should not try connecting to the DAC using Object Explorer in SSMS, because Object Explorer creates multiple connections to build the graphical view of an instance.
In SSMS, go to File -> New -> Database Engine Query and specify the DAC connection.
Prefix the server name with ADMIN: as shown below

Click on Options -> Connection Properties and specify the database that you are connecting to.
Click on Connect, and you can connect to Azure SQL Database with a DAC connection. In the connection bar, you can see that we are connected to the Azure SQL server using the ADMIN: prefix, i.e., a DAC connection. Now you can run DMVs for any diagnostic information.
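For example, once connected over the DAC, a lightweight query such as this sketch shows what the active requests are waiting on:

-- Requests currently executing, with their wait types (excluding this session).
SELECT r.session_id, r.status, r.command, r.wait_type, r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id <> @@SPID;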
To remediate this issue:
- One can scale the database to a larger service tier that can handle the workload, or switch to an elastic pool or the vCore-based purchasing model, which raises the session limit (increasing it to 30,000), as sketched below.
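Both moves can also be scripted from T-SQL with ALTER DATABASE; the database, service objective, and pool names below are examples only:

-- Scale up to a larger vCore service objective (example value).
ALTER DATABASE [mydb] MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_8');

-- Or move the database into an existing elastic pool (placeholder name).
ALTER DATABASE [mydb] MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = [mypool]));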
Another issue that we observe is Dropped connections to Azure SQL database:
When you connect to an Azure SQL Database, idle connections may be terminated by a network component (such as a firewall) after a period of inactivity. There are two types of idle connections, in this context:
- Idle at the TCP layer, where connections can be dropped by any number of network devices.
- Idle by the Azure SQL Gateway, where TCP keepalive messages might be occurring (which makes the connection not idle from a TCP perspective), but no query has been active for 30 minutes. In this scenario, the Gateway determines that the TDS connection is idle at 30 minutes and terminates the connection.
To avoid the Gateway terminating idle connections, you can use the Redirect connection policy instead of proxy to configure your Azure SQL data source.
Note: The recommendation is to use redirect connection policy for improved performance.
To avoid dropping idle connections by a network component, set the following registry settings or their non-Windows equivalents on the operating system where the driver is loaded:

Restart the computer for the registry settings to take effect.
Also have the latest client drivers (JDBC, OLE DB, and so on) installed.
We also see communication link failures on the network side. These usually manifest as the errors below.
- A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 – An existing connection was forcibly closed by the remote host.)
- A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 – An existing connection was forcibly closed by the remote host.)
We get this situation when customers are not using updated drivers. The recommendation is to use an updated driver, which supports TLS 1.2, the default for the SQL database.
Customers do have the option to use a lower version of TLS, but it is not recommended. We always recommend using the updated version of the driver.
We also see this error if your administrator restricted certain algorithms on the client. The TLS protocols match between the client and the server, but there are no matching TLS cipher suites.
Refer to the link below for more information
https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/connect/tls-exist-connection-closed
Another common issue that we see is customers getting confused between a connection timeout and a query timeout. A connection or login timeout occurs when the initial connection to the database server reaches a predefined time-out period. At this stage, no query has been submitted to the server. Examining the error messages and exceptions in this case, and checking whether the class is a SQL connection class or a SQL command class, tells us whether this is a performance-related issue or a connection timeout issue, so that we can take the necessary steps from there.
In the screenshot below, we can observe the difference in error messages between a connection timeout and a query timeout.


For troubleshooting query timeouts in Azure SQL Database, refer to the links below.
Lesson Learned #254: Checking Execution Timeouts in Azure Log Analytics – Microsoft Community Hub
Find query execution timeouts with Query Store
We can leverage the Connectivity checker tool for end-to-end checks and to understand the connection failures.
Azure/SQL-Connectivity-Checker:
The checker is a PowerShell script that automates a series of checks for the most common connectivity issues and performs an end-to-end connection check. It checks your certificates and gateway connections, connects to your master database and user databases, performs pre-login checks, and, in addition, allows you to capture a network trace. This gives us a clear idea of where the connection is failing; if you’re not sure what to do, you can capture the network trace and give it to the support team to handle efficiently.
Most issues it detects come with recommended solutions. Instructions to run the script can be found in the above link.
Summary
- Identify the problem source.
- Understand Transient / Non-Transient connection issues
- Handle transient conditions using retry logic
- Use error stack/exception to understand timeout scenarios
- Consider using Redirect policy to prevent dropped connection scenarios and for performance improvement
- Use updated driver versions to match TLS version and avoid communication failures.
- Leverage the Connectivity checker tool for end-to-end check and to understand the connection failures.
Resources
Most of the above topics I covered are explained in these links. Please refer to them for a detailed explanation.
Use Azure Resource Health to monitor database health – Azure SQL Database | Microsoft Learn
Troubleshoot common connection issues to Azure SQL Database – Azure SQL Database | Microsoft Learn
Understanding Connectivity issues in SQL Database | Data Exposed – YouTube