by Contributed | Apr 14, 2021 | Technology
This article is contributed. See the original author and article here.
API connection resources act as bridges that allow Logic Apps to communicate with other services. While most of them connect to cloud resources, a few leverage On-premises Data Gateways to connect local data sources to Azure. However, it is not always easy to check which On-premises Data Gateway (OPDG) is used by which API connection resource.
Objective:
Find out which On-premises Data Gateways are used by which API connection resources.
Workaround:
The best-known way to check which OPDG a connector uses is to inspect the Logic App designer.
- When creating API connections with an OPDG, we can see which OPDG is used from the connection list in the Logic App designer.

However, this method is unhandy because we need to open every Logic App in designer mode, which may also cause unwanted changes. My preferred way is to check the source OPDG from the JSON definition of an API connection resource.
- From a Logic App, open the API connections blade and choose the API connection that is using the OPDG.

- From the API connection resource main page, click on “JSON View” on the far right.

- In the JSON definition of the API connection, you will find the OPDG's name and resource ID under "properties" > "parameterValues" > "gateway".
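The same property can also be read with Az PowerShell. Here is a minimal sketch; the resource ID is a hypothetical placeholder for your own connection's ID:
# Sketch: read the gateway details of a single API connection (hypothetical resource ID).
$conn = Get-AzResource -ResourceId '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/connections/<connection-name>'
# The OPDG name and resource ID live under properties > parameterValues > gateway.
$conn.Properties.parameterValues.gateway | Format-List name, id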

One more step:
Since OPDG information can be extracted from the JSON definition of an API connection resource, we can use a PowerShell script to find all API connections that use an OPDG.
Prerequisites:
- The Az PowerShell module must be installed, as the script uses the Connect-AzAccount, Set-AzContext, and Get-AzResource cmdlets.
Run the script:
- After downloading the script to your machine, set the $subscriptionName variable to the name of your Azure subscription.

- Then run the script and log into your Azure account in the pop-up window.

- A CSV file named “output.csv” will be created in the same folder where the script resides.

- The CSV file will contain all API connection resources along with the OPDGs they are using.

Columns:
connectionName: name of the API connection resource
connectionId: resource ID of the API connection
gatewayName: name of the On-premises Data Gateway resource
gatewayId: resource ID of the On-premises Data Gateway
Notes:
- As an alternative to running the script locally, you can run it in Azure Cloud Shell. In that case, remove the first command, Connect-AzAccount, as it is no longer required there.
Raw script:
# If you encounter a "script is not signed" error, run the command below in PowerShell first.
# Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
# Log in to your Azure account:
Connect-AzAccount
# Put in your own subscription name:
$subscriptionName = ''
Set-AzContext -SubscriptionName $subscriptionName
# Get all API connection resources
$connectionList = Get-AzResource -ResourceType 'Microsoft.Web/connections'
# Loop through the connection list
Write-Host '---------------------------------------'
Write-Host ' Start loop connections - Processing'
Write-Host '---------------------------------------'
$result = @()
foreach ($connection in $connectionList) {
    $currentConnectionObj = Get-AzResource -ResourceId $connection.ResourceId
    $connectionName = $currentConnectionObj.Name
    $connectionId = $connection.ResourceId
    Write-Host 'Processing connection:' $connectionName
    # Extract gateway information
    $opdgName = $currentConnectionObj.Properties.parameterValues.gateway.name
    $opdgId = $currentConnectionObj.Properties.parameterValues.gateway.id
    if ($null -ne $opdgId) {
        Write-Host '------------------------------------------------'
        Write-Host 'OPDG found:'
        Write-Host 'Connection' $connectionName 'is using gateway' $opdgName
        Write-Host '------------------------------------------------'
        $new = [PSCustomObject]@{
            connectionName = $connectionName
            connectionId   = $connectionId
            gatewayName    = $opdgName
            gatewayId      = $opdgId
        }
        $result += $new
    }
}
# Write the collected information to the output.csv file
$result | Export-Csv -Path .\output.csv -NoTypeInformation
by Contributed | Apr 14, 2021 | Technology
This article is contributed. See the original author and article here.
Note: Please take extreme caution before making any changes in Production. Make sure you test the changes in a test environment first.
Recently, due to a spate of updates to various endpoints in SharePoint, Azure, and the AAD auth login endpoints, we are seeing projects compiled with versions of .NET earlier than 4.6 cause TLS errors which don't always show up as TLS errors in the PHA.
The error messages "The underlying connection was closed" or "System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host" that you are seeing are mainly due to the deprecation of TLS 1.0 and 1.1. Please see:
Preparing for TLS 1.2 in Office 365 and Office 365 GCC – Microsoft 365 Compliance | Microsoft Docs
Enable TLS 1.2 on servers – Configuration Manager | Microsoft Docs
TLS 1.0 and 1.1 deprecation – Microsoft Tech Community
The updates were communicated in the Office 365 message center.
- MC218794 – July 17, 2020 | TLS 1.0 and 1.1 retirement date in Office 365 to be October 15, 2020
- MC240160 – Feb 16, 2021 | Reminder: Disabling TLS 1.0 and TLS 1.1 in Microsoft 365
If the PHA app web is hosted on a remote physical server, there are three ways you can resolve the error:
1] Update the application's web.config file and set the httpRuntime target framework to 4.7, for example:
<httpRuntime targetFramework="4.7"/>
Or
2] You can add the following registry key settings on your remote app web server(s):
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SystemDefaultTlsVersions" = dword:00000001
"SchUseStrongCrypto" = dword:00000001
Note: You may need to restart your server(s)
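If you prefer to script the registry change, the same values can be set with PowerShell. This is a minimal sketch assuming the default 64-bit .NET Framework key shown above; 32-bit applications would also need the equivalent Wow6432Node key:
# Sketch: set the TLS-related .NET Framework registry values shown above (run as administrator).
$path = 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319'
Set-ItemProperty -Path $path -Name 'SystemDefaultTlsVersions' -Value 1 -Type DWord
Set-ItemProperty -Path $path -Name 'SchUseStrongCrypto' -Value 1 -Type DWord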
Or
3] Add this one line of code above each instantiation of the ClientContext in your code:
System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;
Note: Extensive code updates may be needed, since the line must be added before every ClientContext instantiation.
If the app web is hosted on Azure:
Log into the Azure portal (portal.azure.com) with an account with admin rights on the web app in question.
Once you open the App Services and select the web app hosting the PHA site, click on the App Service Editor (Preview).

This will open the editor. Select the web.config file and change the targetFramework attribute (circled in the screenshot) to 4.7, preferably (any setting higher than 4.6 will work too). Note that the status in the upper right will say 'DIRTY' for a bit, then it should auto-save and change back to 'SAVED'.

At this point you should be set, refresh the page with the PHA and all should be good.
If the application is an Azure WebJob:
You will have to re-target/re-compile the app to .NET 4.6+ (4.7 recommended) and re-upload it to fix the issue.
You can't use the exe's config file to re-target it the same way you can for a web application.
by Contributed | Apr 14, 2021 | Technology
This article is contributed. See the original author and article here.
The Windows Recovery Environment (WinRE) is a companion operating system installed alongside Windows 10, typically in a separate partition, that can help with troubleshooting, recovery, or booting from external media, such as a USB stick. WinRE is also used during the Windows update process to apply updates in specific paths or phases. (This process is sometimes referred to as SafeOS.)
In this post, we’re going to walk you through the tools in WinRE, offer tips and tricks for using it effectively, and, while we’re at it, clear up common misconceptions around WinRE. We’ll also show how WinRE can enable a Windows 10 device that might have issues starting or applying the latest updates, get back to a good state.
An overview of WinRE
When talking to IT pros, enthusiasts, and other Microsoft employees, I like to refer to WinRE as the “blue screen of life.” Some agree with me out of politeness, but I believe the phrase resonates with many people because, at the end of the day, WinRE is typically used to fix something.
WinRE is almost always located in a separate partition that immediately follows the main Windows partition. (For more details on the default partition layout for UEFI-based PCs, see our partition layout documentation.)
If you are familiar with Windows PE (WinPE), this may sound similar. Think of WinPE as the base OS and WinRE as the user interface with some recovery tools added.
When you launch WinRE, you may see these options: Continue (exit WinRE and continue to Windows 10), Use a device (use a USB drive, network connection, or Windows recovery DVD), Troubleshoot (reset your PC or use advanced options), and Turn off your PC.

Figure 1. Initial WinRE menu options
Accessing WinRE
You can access WinRE via multiple entry points:
- From the Start menu, select Power, then hold down the Shift key while selecting Restart.
- In Windows 10, select Start > Settings > Update & Security > Recovery. Under Advanced startup, select Restart now.
- By booting to recovery media.
- A hardware recovery button (or button combination) configured by the OEM.
Here are two lesser-known ways to access WinRE:
- Use REAgentC. From Windows 10, run a command prompt as an administrator, then type reagentc /boottore. Restart the device and it will load WinRE instead of Windows 10.
- The shutdown command also has an entry point for WinRE. Run command prompt as an administrator and type shutdown /r /o.
Okay, so now that you’re in WinRE, what can you do?
Depending on how you started WinRE, the options may differ. For example, if you boot from a USB stick, the Reset this PC option does not appear. You can continue to Windows 10 or turn off your PC.
If the options presented are not desired, select Continue to start Windows 10. Likewise, the Turn off your PC option will shut down the device. If you have multiple operating systems installed, this would also be where you could select which OS to boot.
Use a device
This option helps you boot from another source such as USB, DVD, or even a network connection. This can be helpful as it is simpler than some BIOS/Unified Extensible Firmware Interface (UEFI) menus and provides a consistent experience.
Troubleshoot
WinRE has many troubleshooting tools to help you get devices back to a good state quickly and easily. For more information, see Use WinRE to troubleshoot common startup issues.
Now, let’s explore the options available in the Troubleshoot category.
Reset this PC
Resetting a PC reinstalls Windows 10, but lets you choose whether to keep your files or remove them before reinstalling Windows. Reset this PC is the most popular option and offers a few options. (You can also access Reset this PC from the Settings menu if you can successfully start Windows 10.) For more information on Reset this PC, see Recovery options in Windows 10. I’ll also dive deeper into Reset this PC in future posts, so stay tuned!

Figure 2. The Advanced options menu
Startup Repair
If Windows fails to start twice, the third attempt will run Startup Repair automatically. This is also available as an option on the advanced options page of troubleshooting. Startup Repair can help fix a corrupt master boot record (MBR), partition table, or boot sector. Beginning with Windows 10, version 1809, automatic Startup Repair also removes the most recently installed update if that installation immediately preceded the startup failure.
Startup Settings
Another troubleshooting step is to change how Windows starts up. Enabling debugging or boot logging can help identify a specific issue. This is also where you can enable Safe Mode. Check out the support page on Startup Settings for more information.
Command Prompt
While not the most approachable tool for unfamiliar users, the command prompt is the most powerful and dynamic tool in WinRE. It can do everything from registry edits to copying files to running Deployment Image Servicing and Management (DISM) commands. Next, we’ll cover a few notable examples. (Note that while using Command Prompt in WinRE, it automatically runs with elevated administrator permissions.)
Copying files
If you need to copy a few files from your device before reinstalling Windows, using a USB drive or network share can be a quick way to do this using the command prompt within WinRE.
There is also a popular Notepad trick that makes it easier to copy files. I love it and wish I could take credit for it, as it is a big timesaver for visual people like me.
Chkdsk
Running chkdsk is a common first step to help resolve issues. This process checks the file system and file system metadata of a volume for logical and physical errors. (While in WinRE, drive letters may be assigned differently than in Windows 10. You can run BCDEdit to get the correct drive letters.)
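As a rough sketch (the drive letter below is a hypothetical example; confirm it first, since letters can shift in WinRE):
bcdedit
chkdsk D: /f
Running bcdedit lists the boot entries so you can confirm which letter the Windows volume has been assigned; chkdsk /f then checks and repairs that volume.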
SFC
SFC scans and verifies the integrity of all protected system files and replaces incorrect versions with correct versions. If this command discovers that a protected file has been overwritten, it retrieves the correct version of the file, and then replaces the incorrect file.
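When run from WinRE, SFC usually needs to be pointed at the offline Windows installation. A sketch, assuming Windows is mounted as C: (the letter may differ in your session):
sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows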
Repair corrupt components using DISM
Windows system components may become corrupt from hard drive malfunctions or update issues. If you have a wired ethernet connection, you can turn on networking to then scan and repair corrupted content using Windows Update and the /Cleanup-Image switch in DISM.
From the command prompt, type wpeinit and press Enter to turn on the networking stack, then run the following command:
dism /image:c: /cleanup-image /restorehealth

Figure 3. Using DISM within the command prompt in WinRE
Registry Editor
Yes, you can use regedit within Windows RE, but warning: this is a powerful tool that can really mess things up if not used correctly! Keep in mind the X: drive is the WinRE OS and has its own registry. C: is likely where the Windows 10 registry is.
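If your goal is to edit the offline Windows registry rather than WinRE's own, one common approach is to load a hive from the Windows partition under a temporary key name, edit it, and unload it. A minimal sketch, assuming Windows is on C: and using an arbitrary key name:
reg load HKLM\OfflineSoftware C:\Windows\System32\config\SOFTWARE
rem ...make your edits under HKLM\OfflineSoftware in Registry Editor, then:
reg unload HKLM\OfflineSoftware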

Figure 4. Launching the Registry Editor from the command prompt in WinRE
Uninstall Updates
Removing recently installed updates from Windows 10 can be a good troubleshooting step if you are having trouble starting your PC or if you are having trouble uninstalling something within Windows 10. Simply select Uninstall latest quality update or Uninstall latest feature update to uninstall the desired update from WinRE with just a couple of steps. Please note, however, that you should reapply the latest updates as soon as possible to help keep your devices protected and productive.

Figure 5. The Uninstall Updates menu
UEFI Firmware Settings
This option makes it easy to boot into the UEFI menu to change firmware settings, especially as the many different devices offered by OEMs use different keyboard or button combinations to access UEFI. For example, some use volume buttons, function keys, the delete key, or a combination thereof. WinRE provides a consistent method for accessing UEFI firmware settings across all devices.
System Restore and System Image Recovery
These two options can be useful if you are using legacy system restore features or image-based recovery. The System Restore option can restore your PC to a previous restore point which, for example, could undo the latest win32 application installation if you captured a restore point prior to the installation.
Recovery Drive
Creating a recovery drive is a built-in Windows ability that can easily create a USB drive with WinRE. Optionally, it can also "back up system files to the recovery drive," a process that takes a while but will fully reinstall Windows onto that same system even if the hard drive is blank. This is often referred to as Bare Metal Recovery (BMR). When starting your system from a Recovery Drive, you will notice that it uses WinRE to perform the recovery. You will need to pick your language first, and there will then be an option on the first screen to "Recover from a drive."
Managing WinRE
WinRE can help devices recover back to Windows, but, if needed, it can be disabled by opening an elevated command prompt and using the reagentc /disable command. By disabling WinRE, however, some Windows 10 features may not work, including many outlined in this blog post. Additionally, WinRE will be re-enabled after a feature update is installed as it is a critical part of the update process.
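For reference, here are the related reagentc commands, run from an elevated command prompt in Windows; /info reports the current WinRE status and location, and /enable turns it back on:
reagentc /info
reagentc /disable
reagentc /enable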
Permission and authentication
If you are a Windows Insider taking flights, you may have seen a behavior change on Windows 10 Insider Preview Build 19536 and later, where the default authentication requirements for WinRE are different. Previously, most tools and actions required local admin credentials but were not able to use modern credentials like Windows Hello (face, PIN, fingerprint). This became a big problem with passwordless accounts and Azure Active Directory accounts that weren’t backed up with a local admin account. To address this, we changed the default behavior to no longer require local administrator authentication to access WinRE tools.
If you manage devices using mobile device management (MDM) solutions such as Microsoft Endpoint Manager, you can configure the Security/RecoveryEnvironmentAuthentication policy if you would like to require local administrator authentication to use WinRE components.
Other interesting things
Booting into RAM
WinRE can be installed in the same partition as Windows, but it is usually installed in the Recovery or WinRE tools partition. Regardless of where it is installed, it always runs from RAM (a RAM disk boot) and is assigned the X: drive. This is important to note because any changes you make to X: will not be saved after a reboot.
BitLocker
Depending on the combination of your entry points to WinRE, how BitLocker is configured (auto-unlock), and what WinRE option you select, you may be prompted for the BitLocker recovery key to unlock your Windows partition. As an example, when launching a command prompt, you will be prompted to unlock BitLocker. If you skip the unlock process, the Windows installation drive will remain locked, but you will have access to the X: drive, where WinRE runs in RAM.
BootIM.exe
One potentially confusing thing about WinRE is the implementation of BootIM.exe. Most won’t see a difference, but I want to explain how this works. When using the entry point from the lock/login screen and holding the Shift key or Settings > Advanced Startup, Windows 10 is not rebooting into WinRE. Instead, it is entering pre-shutdown and showing the BootIM screen which looks exactly like WinRE. If you select an option that needs to change the boot flag like Use a device, it saves you one restart because you don’t have to start WinRE to then restart the system again using a device.
You can see this functionality by running BootIM.exe from an elevated command prompt, but a quick tip is to open Task Manager beforehand so you can easily use the Alt + Tab combination and end the task when you are done. If you are a Windows Insider taking flights, you will see a behavior change beginning with Windows 10 Insider Preview Build 19577 to enable greater accessibility within this experience. As the pre-shutdown behavior doesn’t support Narrator or other accessibility features, BootIM will only be used if WinRE isn’t available. Otherwise, the system will restart directly to WinRE and skip BootIM.
Partition layout
Prior to Windows 10, the recommended partition layout was to place the recovery partition before the OS partition. Unfortunately, this made it difficult to update WinRE. For Windows 10, we updated the recommendation to always list WinRE immediately after the OS partition. Depending on how Windows was originally installed, your partitions may not have this layout.
Even with the updated recommendation, OEMs have the flexibility to choose a different partition layout for specific solutions. When installing from USB, the legacy layout was still being used by some partners until it was formally updated in Windows 10, version 20H1. For example, in this screenshot from a newer laptop, the OEM put a Recovery Partition where it needs to go, last in line.

Figure 6. A Disk Management partition layout example
Recovery partition labeling
The WinRE partition can have any volume label or no label, but the most common labels are WinRE tools, WinRE, or Recovery. The label is subject to change upon update. The status could also say OEM Partition or Recovery Partition.
The WinRE partition shouldn’t have a drive letter assigned and any folders or files should not be available for editing. A common misunderstanding is that the recovery partition contains a compressed copy of Windows and that this copy is what is restored during the reset process. This is not the case. This partition only contains WinRE and drivers. The size can vary depending on how it was configured by the OEM (e.g., what languages and drivers were included).

Figure 7. An example of labels within the Disk Management partition
Conclusion
In closing, I hope this information has been helpful and provides a greater understanding of WinRE. As we continue to improve, we are always interested in your ideas and feedback, and ask that you please share them in Feedback Hub (a direct link to the Advanced startup category that includes WinRE).
If you have ideas or requests on other aspects of recovery you’d like us to share, please let us know in the comments section below.
by Contributed | Apr 14, 2021 | Technology
This article is contributed. See the original author and article here.
Introduction
The Azure Monitor for SAP solutions (AMS) team has announced new capabilities for AMS, including monitoring SAP NetWeaver metrics, OS metrics and enhanced High Availability cluster visualization. This blog post is an overview of the recent changes for the High Availability cluster provider for AMS and associated cluster workbook views.
The most important changes for the HA provider are in the workbook visualizations. You can now see:
- Location constraints that are left by “crm resource move” and “crm resource migrate” commands. These will change the operation of the cluster, and it’s useful to be reminded if they have been left in the cluster configuration.
- Historical node view. You can now see whether a cluster node is up, and whether it is the "designated coordinator" for the cluster, over configurable time periods.
- Historical resource view. You can now see the failcount over time for individual cluster resources.
Prerequisites
To use AMS monitoring for your HA clusters, there are some requirements for your environment:
- One or more clusters of monitored Azure VMs or Azure Large instances.
- The OS for the cluster nodes currently should be SLES 12 or 15. Other OS options are in development.
- The Pacemaker cluster installation should be completed – there are instructions for this in Setting up Pacemaker on SLES in Azure – Azure Virtual Machines
- The monitored instances must be reachable over the network that AMS is deployed to. Also, the HA provider uses HTTP requests to the monitored instances to retrieve monitoring data, so this must be enabled (a quick connectivity check sketch follows this list).
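As a quick connectivity check from a Windows machine on the AMS network, you can verify that the exporter port (9664, used later in this walkthrough) is reachable; this is only a sketch with a hypothetical node name:
# Sketch: confirm the Prometheus ha_cluster_exporter endpoint is reachable from the AMS network.
Test-NetConnection -ComputerName hana1 -Port 9664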
The AMS team has tested monitoring several different types of cluster-managed applications with the AMS HA provider, including:
- SUSE Linux Network File System (NFS)
- SAP HANA
- IBM DB/2
- SAP Netweaver Central Services
Other managed applications and services should work with the AMS HA provider, but are not officially supported.
Setting up the HA Provider
The process for setting up the HA provider hasn’t changed, but here is a walkthrough of the process for setting up your cluster, AMS, and the HA cluster providers:
- Create your HA cluster in Azure, using the instructions linked above.
- Install the Prometheus ha_cluster_exporter in each of the cluster nodes, following the instructions here. For each instance, log onto the machine as root and install using the zypper package manager:
zypper install prometheus-ha_cluster_exporter
After the exporter is installed you can enable it (so it is automatically started on future reboots of the instance):
systemctl --now enable prometheus-ha_cluster_exporter
After this is done, it is useful to test that the cluster exporter is actually working. From another machine on the same network, you can test this using the Linux “curl” command (using the proper machine name). For example:
testuser@linuxjumpbox:~> curl http://hana1:9664/metrics
# HELP ha_cluster_corosync_member_votes How many votes each member node has contributed with to the current quorum
# TYPE ha_cluster_corosync_member_votes gauge
ha_cluster_corosync_member_votes{local="false",node="hana2",node_id="2"} 1
ha_cluster_corosync_member_votes{local="true",node="hana1",node_id="1"} 1
# HELP ha_cluster_corosync_quorate Whether or not the cluster is quorate
# TYPE ha_cluster_corosync_quorate gauge
ha_cluster_corosync_quorate 1
…

You should configure a HA cluster provider for each node of the cluster using the following information:
- Type – High-availability cluster(Pacemaker)
- Name – a unique name you give the cluster provider. I use a pattern of “ha-nodename” for this.
- Prometheus Endpoint – this is the same as the URL you used to test the cluster exporter above, and usually will be
http://nodeipaddr:9664/metrics
- SID – this is a three-character abbreviation to identify the cluster – if this is an SAP instance, you should make it the same as the SAP SID
- Hostname – this is the hostname for the monitored node
- Cluster – this is the name of the monitored cluster – you can find this out by doing the following on a cluster instance:
hana1:~ # crm config show | grep cluster-name
cluster-name=hacluster
- When finished, select Add provider
As an example, here is a subscription with three clusters – one for NFS, one for SAP HANA, and one for SAP central services. The provider view looks like this:

Overview of the new views
Continuing with the example of the resource group with three clusters, here is the overall HA cluster status:

Here, you can tell that there are three clusters, but only one of them (the NFS cluster) is in a healthy state. The H10 cluster is in maintenance mode, which means the cluster is not managing cluster resources. The s40 cluster has had errors in the resource state.
CLI location constraints
When you select one of the cluster hexagons, you will see more information on that specific cluster. First, there is information on cli-ban or cli-prefer location constraints in the cluster configuration. These constraints are created by commands such as crm resource move or crm resource migrate. This is what you will see in the HA workbook if there are no such constraints:

If there are any of these constraints, you will see the constraint names:

You should remove these constraints after the resource movement has been completed using the “crm configure delete <constraint name>” command. If you do not, they will impact the expected operation of the cluster.
Node status over time
In the cluster view, you will see the current node status for each node in the cluster, and you will now also see the historical status of a particular node at the bottom:

The time range and node are selectable in this view. This is useful for seeing when a particular node went offline from the standpoint of the cluster. It will also indicate which of the nodes is the cluster's "designated coordinator".
Resource status over time
The cluster view will show the current resource status for the cluster managed resources, and there is now a historical view of the failure counts for a selected resource:

Again, the time range and resource are selectable in this view. It shows the failure count and failure threshold for the selected resource – if the failure count reaches the threshold, the resource will be moved to another node. Any errors should be investigated and resolved if possible.
Resources
Here are some additional resource links for learning about Azure Monitor for SAP Solutions and providers for other monitoring information:
Feedback form:
Summary
We hope you will find the new visualizations helpful, and please let us know if you have ideas for any new features for Azure Monitor for SAP solutions.
by Contributed | Apr 14, 2021 | Technology
This article is contributed. See the original author and article here.
Spring has arrived, which means that RedisConf—the annual celebration of all things Redis—is almost here! Attending RedisConf is one of the best ways to sharpen your Redis skills by exploring best practices, learning about new features, and hearing from industry experts. You’ll also be able to virtually hang out with and learn from thousands of other developers passionate about Redis.
We love Redis here at Microsoft, so we’re excited to be showing up at RedisConf in a big way this year. We’ll not only be talking more about our new Azure Cache for Redis Enterprise offering, but we’ll also be hosting sessions and panels that dive deeper into the best ways to use Redis on Azure. Want to learn more? Here are seven reasons to attend RedisConf 2021:
- Explore live and on-demand training on how to use Redis with popular frameworks like Spring and .NET Core.
- Hear Microsoft CVP Julia Liuson present a keynote status update about the ongoing collaboration between Microsoft and Redis Labs, including the Enterprise tiers of Azure Cache for Redis.
- Listen to customers like Genesys, Adobe, and SitePro who are using Redis Enterprise on Azure for use-cases as diverse as IoT data ingestion and mobile push notification deduplication.
- Tune in for a roundtable discussion between the Microsoft and Redis Labs teams that touches on what the collaboration between the companies looks like and the benefits it brings to customers.
- Learn how to harness the power of Redis and Apache Kafka to crunch high-velocity time series data through the power of RedisTimeSeries.
- Hear from experts from our product team on the best way to run Redis on Azure, including tips-and-tricks for maximizing performance, ensuring network security, limiting costs, and building enterprise-scale deployments.

RedisConf kicks off on April 20th, and registration is free! Sign up now to attend. We'll see you there.
by Contributed | Apr 14, 2021 | Technology
This article is contributed. See the original author and article here.
It’s been a year since we open sourced MsQuic and a lot has happened since then, both in the industry (QUIC v1 in the final stages) and in MsQuic. As far as MsQuic goes, we’ve been hard at work adding new features, improving stability and more; but improving performance has been one of our primary ongoing efforts. MsQuic recently passed the 1000th commit mark, with nearly 200 of those for PRs related to performance work. We’ve improved single connection upload speeds from 1.67 Gbps in July 2020 to as high as 7.99 Gbps with the latest builds*.

* Windows Preview OS builds; User-mode using Schannel; and server-class hardware with USO.
** x-axis above reflects the number of Git commits back from HEAD.
Defining Performance
"Performance" means a lot of different things to different people. When we talk with the Windows file sharing (SMB) team, it's always about single-connection, bulk throughput. How many gigabits per second can you upload or download? With HTTP, more often it's about the maximum number of requests per second (RPS) a server can handle, or the per-request latency values. How many microseconds of latency do you add to a request? For a general purpose QUIC solution, all of these are important to us. But even these different scenarios can have ambiguity in their definition. That's why we're working to standardize the process by which we measure the various performance scenarios. Not only does this provide a very clear message of what exactly is being measured and how, but it has also allowed us to do cross-implementation performance testing. Four other implementations (that we know of) have implemented the "perf" protocol we've defined.
Performance-First Design
As already mentioned above, performance has been a primary focus of our efforts. Since the very start of our work on QUIC, we’ve had both HTTP and SMB scenarios driving pretty much every design decision we’ve made. It comes down to the following: The design must be both performant for a single operation and highly parallelizable for many. For SMB, a few connections must be able to achieve extremely high throughput. On the other hand, HTTP needs to support tens of thousands of parallel connections/requests with very low latency.
This design initially led to significant improvements at the UDP layer. We added support for UDP send segmentation and receive coalescing. Together, these interfaces allow a user mode app to batch UDP payloads into large contiguous buffers that only need to traverse the networking stack once per batch, opposed to once per datagram. This greatly increased bulk throughput of UDP datagrams for user mode.
These design requirements have led to some significant complexity internal to MsQuic as well. The QUIC protocol and the UDP (and below) work are separated onto their own threads. In scenarios with a small number of connections, these threads generally spread to separate processors, allowing for higher throughput. In scenarios with a large number of connections, effectively saturating all the processors with work, we do additional work to improve parallelization.
Those are just a few of the (bigger) impacts our performance-driven design has had on MsQuic architecture. This design process has affected every part of MsQuic from the API down to the platform abstraction layer.
Making Performance Testing Integral to CI
Claiming a performant design means nothing without data to back it up. Additionally, we found that occasional, mostly manual, performance testing led to even more issues. First off, to be able to make reasonable comparisons of performance results, we needed to reduce the number of factors that might affect the results. We found that having a manual process added a lot of variability to the results because of the significant setup and tool complexity. Removing the “middleman” was super important, but frequent testing has been even more important. If we only tested once a month, it was next to impossible to identify the cause of any regressions found in the latest results; let alone prevent them from happening in the first place. That inevitably led to a significant amount of wasted time trying to track down the problem. All the while, we had regressed performance for anyone using the code in the meantime.
For these reasons, we’ve invested significant resources into making performance testing a first-class citizen in our CI automation. We run the full performance suite of tests for every single PR, every commit to main, and for every release branch. If a pull request affects performance, we know before it’s even merged into main. If it regresses performance, it’s not merged. With this system in place, we have pretty much guaranteed performance in main will only go up. This has also allowed us to confidently take external contributions to the code without fear of any regressions.

Another significant part of this automation is generating our Performance Dashboard. Every run of our CI pipeline for commits to main generates a new data point and automatically updates the data on the dashboard. The main page is designed to give a quick look at the current state of the system and any recent changes. There are various other pages that can be used to drill down into the data.
Progress So Far
As indicated in the chart at the beginning, we’ve had lots of improvements in performance over the last year. One nice feature of the dashboard is the ability to click on a data point and get linked directly to the relevant Git commit used. This allows us to easily find what code change caused the impacted performance. Below is a list of just a few of the recent commits that had the biggest impact on single connection upload performance.
- d985d44 – Improves the flow control tuning logic
- 1f4bfd7 – Refactors the perf tool
- ec6a3c0 – Fixes a kernel issue related to starving NIC packet buffers
- be57c4a – Refactors how we use RSS processor to schedule work
- 084d034 – Refactors OpenSSL crypto abstraction layer
- 9f10e02 – Switches to OpenSSL 1.1.1 branch instead of 3.0
- ee9fc96 – Adds GSO support to Linux data path abstraction
- a5e67c3 – Refactors UDP send logic to run on data path thread
Most of these changes came about from this simple process:
- Collect performance traces.
- Analyze traces for bottlenecks.
- Improve biggest bottleneck.
- Test for regressions.
- Repeat.
This is a constantly ongoing process to always improve performance. We’ve done considerable work to make parts of this process easier. For instance, we’ve created our own WPA plugin for analyzing MsQuic performance traces. We also continue to spend time stabilizing our existing performance so that we can better catch possible regressions going forward.
Future Work
We’ve done a lot of work so far and come a long way, but the push for improved performance is never ending. There’s always another bottleneck to improve/eliminate. There’s always a little better/faster way of doing things. There’s always more tooling that can be created to improve the overall process. We will continue to put effort into all these.
Going forward, we want to investigate additional hardware offloads and software optimization techniques. We want to build upon the work going on in the ecosystem and help to standardize these optimizations and integrate them into the OS platform and then into MsQuic. Our hope is that we will make MsQuic the first choice for customer workloads by bringing the network performance benefits QUIC promises without having to make a trade-off with computational efficiency.
As always, for more info on MsQuic continue reading on GitHub.
by Contributed | Apr 14, 2021 | Dynamics 365, Microsoft 365, Technology
This article is contributed. See the original author and article here.
Successful fraud protection relies on a lot of information. The more insights you have into transactions and accounts, the better you will be able to detect suspicious activity. Microsoft Dynamics 365 Fraud Protection uses advanced AI models to bring together diverse sets of information into a single assessment score that indicates the overall risk of an event. Customers can create rules that threshold this score to make decisions in a manner that suits their risk appetite. However, sometimes customers also have the need to directly reason over raw attributes in their rules, for example, to detect business policy violations or to stop emerging fraud patterns specific to their business. In this preview, we are adding two features that will significantly improve the information available in the Dynamics 365 Fraud Protection rule engine: velocities and external calls.
Velocities use relationships and patterns between historical transactions to identify suspicious activity and help customers prevent loss from fraud. External calls let Dynamics 365 Fraud Protection customers ingest data from third-party information providers or from their in-house AI models. All these inputs may be needed by customers to make fully informed decisions on their events.
Identify potential fraud with velocities
How do you determine if a transaction is suspicious? In short, you have to look at the bigger picture. For example, there may be nothing suspicious about someone buying a single exercise bike online. However, if they buy fifteen exercise bikes over a short period of time, each with a different payment instrument, that might indicate possession of stolen payment information and malicious fraud. This is why monitoring the relationships and patterns between current and past transactions is essential in determining the riskiness of any given event.

Velocity detection allows customers to analyze the historical patterns of an individual or entity such as a credit card, IP address, or user email. Velocities already play a significant part in our AI-driven risk assessments, and now we are enabling our customers to define their own velocities that matter most to their business. With velocity checks, you can get the answers to questions such as: How much money has a user account spent in the last hour? How many distinct payment instruments have been used from this device in the last seven days? How many times has this user account attempted to login in the last five minutes? Customers can then utilize these historical patterns in real-time decision-making.
Some behaviors are more suspicious regardless of the business, such as hundreds of login attempts within a short interval from the same IP address. Other behaviors might be suspicious for one business, but not for another. It is not uncommon for a single customer to make two to six transactions a month at a grocery store. However, it might be more suspicious if a single customer makes two to six transactions in a month at a luxury car dealer. Velocities are an important tool for any fraud protection service, but their effectiveness depends on being able to customize it to your business.
Dynamics 365 Fraud Protection’s velocities provide customers the ability to fully customize their velocities, all the way from which attributes to monitor, to the timeframe they monitor them over, to what thresholds they want to set.

Velocities also allow customers to connect behaviors from different assessment events including account creations, account logins, and purchases. For example, a customer could block a user from logging in based on an inordinately large number of purchases made from the account in a short interval, or flag a purchase for review based on suspicious login attempts, since both velocities may be indicative of account compromise.
Make informed decisions in real-time with external calls
Until now, customers using the Dynamics 365 Fraud Protection rule engine could make real-time decisions based only on the data available within the product. This included data sent as part of the request payload, data uploaded in the form of lists, data generated by device fingerprinting, and risk assessment and bot detection scores produced by our AI models. However, sometimes customers may need additional signals from data sources outside the product to inform their decisions. Some customers may choose to partner with third-party information providers for additional data enrichment. Details such as address verification and phone reputation can help discover suspicious and fraudulent activity. Other customers may choose to utilize scores from their own in-house AI models tailored to their business. All these inputs may be important for customers to make fully informed decisions regarding their business.

Our external calls feature enables customers to bring in data from essentially any API endpoint on the web, ensuring they have full context and flexibility needed at the point of decision. This update continues to increase the power and scope of what can be done from within the Dynamics 365 Fraud Protection rules engine, allowing customers to utilize outside data when orchestrating their decisions.

Note that velocities and external calls are features made available in preview with reasonable consumption limits. In the future, we may bring these to general availability in an appropriate way.
Get started with Dynamics 365 Fraud Protection
Join the Dynamics 365 Fraud Protection Insider Program, to get an early view of upcoming features and discuss best practices to combat fraud.
Sign up for a free trial of Dynamics 365 Fraud Protection to try out these new features and check out the documentation for velocities and external calls, where you can learn how to create, use, and manage these new features.
Learn more about Dynamics 365 Fraud Protection capabilities including account protection, purchase protection, and loss prevention, and check out the e-book, “Protecting Customers, Revenue, and Reputation from Online Fraud.”
Finally, you can check out all the latest product updates for Microsoft Dynamics 365 and Microsoft Power Platform.
The post Customize your protection with new features in the Dynamics 365 Fraud Protection preview appeared first on Microsoft Dynamics 365 Blog.
by Contributed | Apr 14, 2021 | Technology
This article is contributed. See the original author and article here.
Earlier this year, we announced the preview of Always Encrypted with secure enclaves in Azure SQL Database – the feature designed to safeguard sensitive data from malware and unauthorized users by enabling rich confidential queries.

Royal Bank of Canada (RBC) is one of the customers who are already using Always Encrypted with secure enclaves. For details, see Microsoft Customer Story – RBC creates relevant personalized offers while protecting data privacy with Azure confidential computing.
For more information about Always Encrypted with secure enclaves in Azure SQL Database, see:
by Contributed | Apr 14, 2021 | Technology
This article is contributed. See the original author and article here.
As a major move to the more secure SHA-2 algorithm, Microsoft will allow the Secure Hash Algorithm 1 (SHA-1) Trusted Root Certificate Authority to expire. Beginning May 9, 2021 at 4:00 PM Pacific Time, all major Microsoft processes and services—including TLS certificates, code signing and file hashing—will use the SHA-2 algorithm exclusively.
Why are we making this change?
The SHA-1 hash algorithm has become less secure over time because of the weaknesses found in the algorithm, increased processor performance, and the advent of cloud computing. Stronger alternatives such as the Secure Hash Algorithm 2 (SHA-2) are now strongly preferred as they do not experience the same issues. As a result, we changed the signing of Windows updates to use the more secure SHA-2 algorithm exclusively in 2019 and subsequently retired all Windows-signed SHA-1 content from the Microsoft Download Center on August 3, 2020.
What does this change mean?
The Microsoft SHA-1 Trusted Root Certificate Authority expiration will impact SHA-1 certificates chained to the Microsoft SHA-1 Trusted Root Certificate Authority only. Manually installed enterprise or self-signed SHA-1 certificates will not be impacted; however we strongly encourage your organization to move to SHA-2 if you have not done so already.
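If you want to see whether any manually installed or enterprise roots in your own stores are still SHA-1 signed, a minimal PowerShell sketch like the following can help (illustrative only; adjust the certificate store path as needed):
# Sketch: list certificates in the local machine Root store whose signature algorithm is SHA-1 based.
Get-ChildItem -Path Cert:\LocalMachine\Root |
    Where-Object { $_.SignatureAlgorithm.FriendlyName -like '*sha1*' } |
    Select-Object Subject, NotAfter, @{Name='SignatureAlgorithm';Expression={$_.SignatureAlgorithm.FriendlyName}}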
Keeping you protected and productive
We expect the SHA-1 certificate expiration to be uneventful. All major applications and services have been tested, and we have conducted a broad analysis of potential issues and mitigations. If you do encounter an issue after the SHA-1 retirement, please see Issues you might encounter when SHA-1 Trusted Root Certificate Authority expires. In addition, Microsoft Customer Service & Support teams are standing by and ready to support you.
by Contributed | Apr 14, 2021 | Technology
This article is contributed. See the original author and article here.
Policy-driven governance is a cornerstone of Enterprise-scale Landing Zone (ESLZ). It's possible to codify corporate, industry, or country-specific governance requirements declaratively using Azure Policy. ESLZ provides 90+ custom policies which help meet the most common corporate governance requirements with a single click.
The benefits of these 90+ custom policies are documented in detail.
The following tables list these policies and the governance requirements they help enforce.
The Deny-Public-Endpoints-for-PaaS-Services policy initiative includes the following policies, which apply to specific Azure services (a sample assignment sketch follows the list):
- Deny-PublicEndpoint-CosmosDB
- Deny-PublicEndpoint-MariaDB
- Deny-PublicEndpoint-MySQL
- Deny-PublicEndpoint-PostgreSql
- Deny-PublicEndpoint-KeyVault
- Deny-PublicEndpoint-Sql
- Deny-PublicEndpoint-Storage
- Deny-PublicEndpoint-Aks
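For reference, here is a minimal Az PowerShell sketch of how such an initiative can be assigned at a management group scope. The definition lookup, assignment name, and scope below are hypothetical placeholders; in practice the ESLZ reference implementation deploys these assignments for you.
# Sketch only: assign the deny-public-endpoints initiative to a management group (hypothetical names/scope).
$definition = Get-AzPolicySetDefinition -Name 'Deny-Public-Endpoints-for-PaaS-Services'
New-AzPolicyAssignment -Name 'deny-public-paas-endpoints' `
    -DisplayName 'Deny public endpoints for PaaS services' `
    -PolicySetDefinition $definition `
    -Scope '/providers/Microsoft.Management/managementGroups/<landing-zones-mg>'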
The Deploy-Diag-LogAnalytics policy set helps capture the logs and metrics shown below.
| Policy Name | Log Categories | Metrics |
|---|---|---|
| Deploy-Diagnostics-AA | JobLogs JobStreams DscNodeStatus | AllMetrics |
| Deploy-Diagnostics-ACI | | AllMetrics |
| Deploy-Diagnostics-ACR | | AllMetrics |
| Deploy-Diagnostics-ActivityLog | Administrative Security ServiceHealth Alert Recommendation Policy Autoscale ResourceHealth | |
| Deploy-Diagnostics-AKS | kube-audit kube-apiserver kube-controller-manager kube-scheduler cluster-autoscaler | AllMetrics |
| Deploy-Diagnostics-AnalysisService | Engine Service | AllMetrics |
| Deploy-Diagnostics-APIMgmt | GatewayLogs | Gateway Requests Capacity EventHub Events |
| Deploy-Diagnostics-ApplicationGateway | ApplicationGatewayAccessLog ApplicationGatewayPerformanceLog ApplicationGatewayFirewallLog | AllMetrics |
| Deploy-Diagnostics-Batch | ServiceLog | AllMetrics |
| Deploy-Diagnostics-CDNEndpoints | CoreAnalytics | |
| Deploy-Diagnostics-CognitiveServices | Audit RequestResponse | AllMetrics |
| Deploy-Diagnostics-CosmosDB | DataPlaneRequests MongoRequests QueryRuntimeStatistics | Requests |
| Deploy-Diagnostics-DataFactory | ActivityRuns PipelineRuns TriggerRuns | AllMetrics |
| Deploy-Diagnostics-DataLakeStore | Audit Requests | AllMetrics |
| Deploy-Diagnostics-DLAnalytics | Audit Requests | AllMetrics |
| Deploy-Diagnostics-EventGridSub | | AllMetrics |
| Deploy-Diagnostics-EventGridTopic | | AllMetrics |
| Deploy-Diagnostics-EventHub | ArchiveLogs OperationalLogs AutoScaleLogs | AllMetrics |
| Deploy-Diagnostics-ExpressRoute | PeeringRouteLog | AllMetrics |
| Deploy-Diagnostics-Firewall | AzureFirewallApplicationRule AzureFirewallNetworkRule AzureFirewallDnsProxy | AllMetrics |
| Deploy-Diagnostics-HDInsight | | AllMetrics |
| Deploy-Diagnostics-iotHub | Connections DeviceTelemetry C2DCommands DeviceIdentityOperations FileUploadOperations Routes D2CTwinOperations C2DTwinOperations TwinQueries JobsOperations DirectMethods E2EDiagnostics Configurations | AllMetrics |
| Deploy-Diagnostics-KeyVault | AuditEvent | AllMetrics |
| Deploy-Diagnostics-LoadBalancer | LoadBalancerAlertEvent LoadBalancerProbeHealthStatus | AllMetrics |
| Deploy-Diagnostics-LogicAppsISE | IntegrationAccountTrackingEvents | |
| Deploy-Diagnostics-LogicAppsWF | WorkflowRuntime | AllMetrics |
| Deploy-Diagnostics-MlWorkspace | AmlComputeClusterEvent AmlComputeClusterNodeEvent AmlComputeJobEvent AmlComputeCpuGpuUtilization AmlRunStatusChangedEvent | Run Model Quota Resource |
| Deploy-Diagnostics-MySQL | MySqlSlowLogs | AllMetrics |
| Deploy-Diagnostics-NetworkSecurityGroups | NetworkSecurityGroupEvent NetworkSecurityGroupRuleCounter | |
| Deploy-Diagnostics-NIC | | AllMetrics |
| Deploy-Diagnostics-PostgreSQL | PostgreSQLLogs | AllMetrics |
| Deploy-Diagnostics-PowerBIEmbedded | Engine | AllMetrics |
| Deploy-Diagnostics-PublicIP | DDoSProtectionNotifications DDoSMitigationFlowLogs DDoSMitigationReports | AllMetrics |
| Deploy-Diagnostics-RecoveryVault | CoreAzureBackup AddonAzureBackupAlerts AddonAzureBackupJobs AddonAzureBackupPolicy AddonAzureBackupProtectedInstance AddonAzureBackupStorage | |
| Deploy-Diagnostics-RedisCache | | AllMetrics |
| Deploy-Diagnostics-Relay | | AllMetrics |
| Deploy-Diagnostics-SearchServices | OperationLogs | AllMetrics |
| Deploy-Diagnostics-ServiceBus | OperationalLogs | AllMetrics |
| Deploy-Diagnostics-SignalR | | AllMetrics |
| Deploy-Diagnostics-SQLDBs | SQLInsights AutomaticTuning QueryStoreRuntimeStatistics QueryStoreWaitStatistics Errors DatabaseWaitStatistics Timeouts Blocks Deadlocks SQLSecurityAuditEvents | AllMetrics |
| Deploy-Diagnostics-SQLElasticPools | | AllMetrics |
| Deploy-Diagnostics-SQLMI | ResourceUsageStats SQLSecurityAuditEvents | |
| Deploy-Diagnostics-StreamAnalytics | Execution Authoring | AllMetrics |
| Deploy-Diagnostics-TimeSeriesInsights | | AllMetrics |
| Deploy-Diagnostics-TrafficManager | ProbeHealthStatusEvents | AllMetrics |
| Deploy-Diagnostics-VirtualNetwork | VMProtectionAlerts | AllMetrics |
| Deploy-Diagnostics-VM | | AllMetrics |
| Deploy-Diagnostics-VMSS | | AllMetrics |
| Deploy-Diagnostics-VNetGW | GatewayDiagnosticLog IKEDiagnosticLog P2SDiagnosticLog RouteDiagnosticLog TunnelDiagnosticLog | AllMetrics |
| Deploy-Diagnostics-WebServerFarm | | AllMetrics |
| Deploy-Diagnostics-Website | | AllMetrics |
The Deploy-DNSZoneGroup-For-*-PrivateEndpoint policy set targets the Azure services shown below.
| Policy Name | Azure Service |
|---|---|
| Deploy-DNSZoneGroup-For-Blob-PrivateEndpoint | Azure Storage Blob |
| Deploy-DNSZoneGroup-For-File-PrivateEndpoint | Azure Storage File |
| Deploy-DNSZoneGroup-For-Queue-PrivateEndpoint | Azure Storage Queue |
| Deploy-DNSZoneGroup-For-Table-PrivateEndpoint | Azure Storage Table |
| Deploy-DNSZoneGroup-For-KeyVault-PrivateEndpoint | Azure KeyVault |
| Deploy-DNSZoneGroup-For-Sql-PrivateEndpoint | Azure SQL Database |