Released: General Availability of Microsoft.Data.SqlClient 3.0

This article is contributed. See the original author and article here.

Microsoft.Data.SqlClient 3.0 is now generally available. This .NET data provider for SQL Server provides general connectivity to the database and supports all the latest SQL Server features for applications targeting .NET Framework, .NET Core, and .NET Standard.

To try out the new package, add a NuGet reference to Microsoft.Data.SqlClient in your application.


 


If you’ve been following our preview releases, you know we’ve been busy adding features and improving the stability and performance of the Microsoft.Data.SqlClient library. 3.0 adds a number of new features over the 2.1 release of Microsoft.Data.SqlClient.



 


There are a few minor breaking changes in 3.0 compared to previous releases:



  • .NET Framework 4.6.1 is the new minimum .NET Framework version supported

  • The User Id connection property now requires Client Id instead of Object Id for User-Assigned Managed Identity when using the Active Directory Managed Identity authentication option

  • SqlDataReader now returns a DBNull value instead of an empty byte[] for RowVersion/Timestamp values. The legacy behavior can be enabled by setting an AppContext switch.

For the full list of added features, fixes, and changes in Microsoft.Data.SqlClient 3.0, please see the Release Notes.


 


Again, to try out the new package, add a NuGet reference to Microsoft.Data.SqlClient in your application. If you encounter any issues or have any feedback, head over to the SqlClient GitHub repository and submit an issue.

David Engel

Getting Started with Azure Remote Rendering

Mixed reality merges the digital and physical worlds. A key piece of the digital world is visualizing 3D models, and being able to view and interact with valuable 3D assets in mixed reality is improving workflows across many industries.

Azure Remote Rendering lets you view these 3D models in mixed reality without being limited by on-device compute.


What is Azure Remote Rendering?


Azure Remote Rendering is a Mixed Reality Azure service that enables you to render high-quality, interactive 3D content in Azure and stream it to devices like HoloLens 2 in real time. This lets you experience your 3D content in mixed reality in full detail, going beyond the limits of low-powered devices. An intuitive SDK, backed by a powerful cloud service, makes it easy to integrate Remote Rendering into your existing apps or any new ones.


 


Why use Azure Remote Rendering?


As industries move to take advantage of mixed reality, the ability to view complex 3D models unlocks significant business value. While HoloLens 2 is an incredible untethered computer, it is still limited in its processing power. To work around this limitation, designers and developers have traditionally resorted to a process called decimation, in which detail is removed from a model to enable visualization on mixed reality headsets. Decimation not only simplifies the model but can also take a large amount of time.


With Remote Rendering, you can skip the decimation process and view your complex 3D models in HoloLens 2 without any loss of detail.


 


Azure Remote Rendering Use Cases 


Remote Rendering can bring value across many industries that require complex 3D models. Two use cases I want to highlight are 3D design review and layout visualization.


 


3D Design Review 


Whether you are building a car engine or designing a new skyscraper, the ability to view the model in 3D and in mixed reality allows you to see how the product fits together. In addition to the benefits of viewing your product in 3D and spotting any issues, using Remote Rendering can create common understanding and speed up decision making, without having to go through a time intensive decimation process.


 


Layout Visualization


Imagine you are designing a factory line for a new product. With Remote Rendering and HoloLens 2, you can view the entire factory line in the real space to easily spot any errors, misalignments or missing pieces, thereby saving valuable time and avoiding costly rework.


  


Learn Module


We created a Microsoft Learn module to help you get started with Remote Rendering! In this module, we explore foundational concepts for Remote Rendering and take you through the process of creating the Azure resource followed by rendering a model in Unity. No device is required to complete the module; however, you’re welcome to deploy to a HoloLens to try out the experience in your real-world environment.


 


Check out the video preview and visit aka.ms/learn-arr to get started!

How Mashreq bank is using React Native for Windows to bring new digital experiences to Windows

Introduction


Mashreq is the oldest regional bank based in the UAE, with a strong presence in most GCC countries and a leading international network with offices in Asia, Europe, and the United States. Headquartered in Dubai, Mashreq has 15 domestic branches and 11 international ones, with more than 2,000 Windows devices deployed across them. A leader in digital transformation in the banking sector, Mashreq started its digital transformation journey in 2019 with Microsoft 365 and Dynamics 365, and later switched to Surface Pro devices for its frontline employees.


 


Mashreq bank developed the Universal Banker App (UB App) with React Native to empower frontline employees working at branches to serve customers across a broad range of inquiries and journeys. The application helped them increase proximity to customers, improving the customer experience and reducing service time. After its success in one business unit, they decided to deploy the tool to all other business groups; however, they faced several challenges:


 



  1. The UB App was built for Android, while the other business groups were using Windows PCs and Microsoft Surface devices.

  2. The UB App was optimized for touch input, which is best for mobile devices, while the other business groups often work with a mouse and keyboard setup.

  3. The Mashreq development team had limited experience with Windows native development.


Thanks to React Native for Windows, Mashreq was able to address all of these challenges in a very short time and developed the Mashreq FACE App for the Windows platform.


 




 


Reusing the existing investments to deliver a first-class experience on Windows


React Native for Windows enabled the development team to reuse the assets they built on React Native for the Android application. “React Native for Windows allowed us to extend the same experience of the original Android application with maximum code reusability,” said Anubhav Jain, Digital Product Lead at Mashreq. Thanks to the modularity provided by React Native, the development team was able to build a native Windows experience tailored for the other business groups by reusing the components they had built for the Android version.


 


Thanks to Microsoft’s investments in React Native, it is no longer a mobile-only cross-platform framework but also a great solution for cross-platform scenarios on the desktop. This has enabled Mashreq to provide a first-class experience to employees interacting with the application on Windows, whether they are using PCs optimized for mouse and keyboard or Microsoft Surface devices with a great touch experience.


 




 


React Native for Windows has enabled the development team at Mashreq to develop with their existing workforce and allowed them to iterate faster on the project targeting multiple platforms.


 


React Native for Windows has enabled Mashreq not only to reuse their existing skills and code to bring their application to Windows, but also to enhance it by tailoring it with specific Windows features.


 


As part of its digital transformation journey, Mashreq has also deployed Microsoft Teams as its official communication app, both internally and with external customers. The development team integrated Microsoft Teams into the FACE App on Windows, enabling employees to call customers with a single click using Microsoft Teams communication capabilities.


 




 


Since React Native for Windows generates a native Windows application, it empowers developers to support a wide range of Windows-only scenarios, such as interacting with specialized hardware. This enabled Mashreq to implement specialized biometric authentication in FACE. By connecting to various types of biometric devices (such as card readers and fingerprint readers), Mashreq can validate customer information simply by scanning the government-issued ID card and performing a fingerprint scan. By working closely with Microsoft, Mashreq was able to integrate into the Windows version of the FACE application the components required for the biometric authentication process while complying with government regulations. This integration enables Mashreq to securely process financial and non-financial transactions.


 


Conclusion


Thanks to React Native for Windows, Mashreq will be able to seamlessly evolve its apps on Android and Windows while continuing its digital transformation journey, gradually adopting more Teams and Windows specific capabilities.


 


 


Deploying Azure Digital Twins at Scale


I am part of the team working on the digital transformation of Microsoft campuses across the globe. The idea is to provide experiences like room occupancy, people count, and thermal comfort to all our employees. To power these experiences, we deal with real estate such as buildings, the IoT devices and sensors installed in those buildings, and of course employees. Our system ingests data from all kinds of IoT devices and sensors, processes it, and stores it in Azure Digital Twins (ADT for short).


 


Since we are dealing with Microsoft buildings across the globe, which results in a lot of data, customers often ask us about using ADT at scale: the challenges we faced and how we overcame them.


 


In this article, we will share how we worked with ADT at scale. We will touch upon some of the areas we focused on:


  • Our approach to:

    • handling the Model, Twin, and Relationship limits that impact schema and data

    • handling transaction limits (DT API, DT Query API, etc.) that govern continuous workloads

  • Scale testing ADT under load

  • Design optimizations

  • Monitoring and alerting for a running ADT instance


So, let’s begin our journey at scale. 🙂


 


ADT Limits


Before we get into how to deal with scale, let’s understand the limits that ADT imposes. ADT has various limits, categorized as functional limits and rate limits.


 


Functional limits refer to the limits on the number of objects that can be created like Models, Twins, and Relationships to name a few. These primarily deal with the schema and amount of data that can be created within ADT.


 


Rate limits refer to the limits imposed on ADT transactions. These are the transactional limitations every developer should be aware of, as the code we write directly consumes against these rates.


 


Many of the limits defined by ADT are “adjustable,” which means the ADT team is open to changing them based on your requirements via a support ticket.


 


For further details and a full list of limitations, please refer to Service limits – Azure Digital Twins | Microsoft Docs.


 


Functional Limits (Model, Twin and Relationships)


The first step in dealing with scale is to figure out if ADT will be able to hold the entire dataset that we want it to or not. This starts with the schema first followed by data.


 


Schema


The first step, as with any system, is to define the schema: the structure needed to hold the data. Schema in ADT is defined in terms of Models. Once our Models were defined, it was straightforward to see that the number of Models was within the ADT limits.


 


While the initial set of Models may be within the limit, it is also important to ensure that there is enough capacity left for future enhancements like new models or different versions of existing models.


 


One learning from this exercise was to ensure regular cleanup of old Models or old versions after they are decommissioned, to free up capacity.


 


Data


Once the schema was taken care of, the next check was the amount of data we wanted to store. For this, we looked at the number of Twins and the Relationships needed for each Twin. There are limits on incoming and outgoing relationships, so we needed to assess their impact as well.


 


During the data check, we ran into a challenge: one twin had a number of incoming relationships beyond the allowed ADT limit. This meant we had to go back and modify the original schema. We restructured it by removing the incoming relationship and instead creating a property, flattening the schema.


 


To elaborate on our scenario: we were trying to create a relationship linking Employees to the Building where they sit and work. For some bigger buildings, the number of employees was beyond the supported ADT limit for incoming relationships, so we added a property to the Employee model to refer to the Building directly instead of using a relationship.
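To make the flattening concrete, here is a small illustrative sketch contrasting the two shapes. The IDs, model name, and property name below are hypothetical, not our production schema:

```python
# Hypothetical example of the flattening described above. Instead of an
# incoming relationship from each Employee twin to the Building twin
# (which can exceed the incoming-relationship limit for large buildings),
# the Employee twin carries a plain buildingId property.

# Relationship-based shape: one incoming relationship lands on the
# Building twin per employee.
works_in = {
    "$relationshipId": "employee-42-worksIn-building-7",
    "$sourceId": "employee-42",
    "$targetId": "building-7",
    "$relationshipName": "worksIn",
}

# Flattened shape: the link becomes a property on the Employee twin, so
# the Building twin accrues no incoming relationships at all.
employee_twin = {
    "$dtId": "employee-42",
    "$metadata": {"$model": "dtmi:contoso:Employee;1"},  # hypothetical model ID
    "buildingId": "building-7",  # replaces the incoming relationship
}

# A building's employees can then be found with a query on the property:
# SELECT * FROM digitaltwins T WHERE T.buildingId = 'building-7'
print(employee_twin["buildingId"])
```

The trade-off is that the link is no longer traversable with relationship queries, but it keeps large buildings well under the incoming-relationship limit.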


 


With that out of the way, we moved on to checking the number of Twins. Keep in mind that the Twin limit applies to all kinds of twins, including incoming and outgoing relationships. Estimating the number of twins in our system was easy, as we knew the number of buildings and the related data that would flow into the system.


 


As in the case of Models, we also looked at our future growth to ensure we had enough buffer to cater to new buildings in the future.


 


Pro tip: We wrote a tool to simulate the creation of twins and relationships per our requirements, to test out the limits. The tool was also a great help in benchmarking our needs. Don’t forget to clean up afterwards. 🙂
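As an illustration of the kind of capacity check such a tool enables, here is a rough sketch. The building counts and the twin limit below are made-up numbers, not ours:

```python
# Rough capacity estimate before creating anything in ADT. All counts are
# illustrative; check the current ADT service limits for the real caps.

buildings = 150
floors_per_building = 10
sensors_per_floor = 40

# One twin per building, per floor, and per sensor.
twins = buildings * (1 + floors_per_building * (1 + sensors_per_floor))

# One building->floor relationship per floor, one floor->sensor per sensor.
relationships = buildings * floors_per_building * (1 + sensors_per_floor)

TWIN_LIMIT = 200_000  # illustrative, not the official service limit

print(twins, relationships)                       # 61650 61500
print(f"{twins / TWIN_LIMIT:.0%} of twin limit")  # 31% of twin limit
```

Running numbers like these before load testing tells you early whether the dataset, plus growth headroom, fits inside the instance.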


 


Rate Limits (Twin APIs / Query APIs / Query Units /…)


Now that we knew ADT could hold the data we were going to store, the next step was to check the rate limits. Most ADT rate limits are defined per second, e.g., the Twin API operations RPS limit, Query Units consumed per second, etc.


 


Before we get into the details of rate limits, here’s a simplified view of our main sensor reading processing pipeline, which is where the bulk of the processing happens and thus contributes most of the load. The main component is implemented as an Azure Function that sits between IoT Hub and ADT and ingests the sensor readings into ADT.


 




 


 


This is similar to the approach suggested by the ADT team for ingesting IoT Hub telemetry. Please refer to Ingest telemetry from IoT Hub – Azure Digital Twins | Microsoft Docs for more information on the approach and a few implementation details.


 


To identify the load our Azure Function would process continuously, we looked at the sensor readings to be ingested across the buildings. We also looked at the frequency at which each sensor sends a reading, resulting in a specific load per hour and per day.


 


With this we identified our daily load, which in turn gave us a per-second load, or RPS as we call it. We expected to process around 100 sensor readings per second. Our load was mostly consistent throughout the day, as we process data from buildings all over the world.


  • Load (RPS): 100

  • Load (per day): ~8.5 million readings

 


Once the load data was available, we needed to convert it into a total number of ADT operations. So, for every sensor reading ingested, we identified the following:



  • Operation type i.e., Twin API vs Query API operation

  • Number of ADT operations required to process every reading.

  • Query charge (or query unit consumption) for Query operations. This is available as part of the response header in the ADT client method.



 


After doing the analysis above, we arrived at the following numbers:


  • Twin API operations per reading: 2

  • Query API operations per reading: 2

  • Avg Query Units consumed per query: 30


Multiplying the above numbers by the load gave us the expected ADT operations per second.
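As a sanity check, that arithmetic can be sketched out directly. The figures come from the numbers above; the variable names are mine:

```python
# Convert the per-reading operation counts into per-second ADT consumption.
readings_per_second = 100
twin_ops_per_reading = 2
query_ops_per_reading = 2
avg_query_units_per_query = 30

twin_api_rps = readings_per_second * twin_ops_per_reading
query_api_rps = readings_per_second * query_ops_per_reading
query_units_per_second = query_api_rps * avg_query_units_per_query
readings_per_day = readings_per_second * 60 * 60 * 24

print(twin_api_rps)            # 200 Twin API operations/sec
print(query_api_rps)           # 200 Query API operations/sec
print(query_units_per_second)  # 6000 Query Units/sec
print(readings_per_day)        # 8640000, i.e. the ~8.5 million readings/day above
```

Each of these per-second figures is then compared against the corresponding ADT rate limit.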


 


Apart from sensor reading processing, we also had a few other components such as background jobs and APIs for querying data. We added the ADT usage from these components on top of the regular processing numbers above to get the final figures.


 


Armed with these numbers, we put together a calculation of the actual ADT consumption we would hit when we went live. Since these numbers were within the ADT rate limits, we were good.


 


Again, as with Models and Twins, we made sure to leave some buffer; otherwise, future growth would be restricted.


 


Additionally, when a service limit is reached, ADT throttles requests. For more suggestions from the ADT team on working with limits, please refer to Service limits – Azure Digital Twins | Microsoft Docs.


 


Scale


For future scale requirements, the first thing we did was figure out our load projections. We expected to grow to up to twice the current rate in the next two years, so we simply doubled all the numbers above to get our future scale needs.


 


The ADT team provides a template that helps organize all this information in one place.


 


 


 


Once the load numbers and future projections were available, we worked with the ADT team on the adjustable limits and on getting the green light for our scale numbers.


 


Performance Test


Based on the above numbers, and after getting the green light from the ADT team, we performance tested our system to validate and verify the scale and the ADT limits. We used AKS-based clusters to simulate the ingestion of sensor readings while also running all our other background jobs at the same time.


 


We ran multiple rounds of perf runs at different loads, such as X and 2X, and gathered metrics on ADT performance. We also ran endurance tests, where we ran varying loads continuously for a day or two to measure performance.


 


Design Optimization (Cache / Edge Processing)


Typically, sensors send a lot of unchanged readings. For example, a motion sensor will report false for a long duration, and once it changes to true it will most likely remain true for some time before going back to false. As such, we don’t need to process every reading: such “no change” readings can be filtered out.


 


With this principle in mind, we added a cache component to our processing pipeline, which helped reduce the load on ADT. Using the cache, we were able to reduce our ADT operations by around 50%. This let us support a higher load, with the added advantage of faster processing.
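A minimal sketch of the "no change" filter (my own illustration, not the team's code): keep the last value seen per sensor in a cache and skip readings whose value hasn't changed, so they never reach ADT.

```python
# Stand-in for a real cache such as Redis: sensor_id -> last processed value.
last_value = {}

def should_process(sensor_id, value):
    """Return True only when the reading differs from the cached value."""
    if last_value.get(sensor_id) == value:
        return False             # unchanged -> filter out, no ADT operations
    last_value[sensor_id] = value
    return True                  # changed -> forward to ADT

readings = [("motion-1", False), ("motion-1", False),
            ("motion-1", True), ("motion-1", True), ("motion-1", False)]
processed = [r for r in readings if should_process(*r)]
print(processed)  # only the transitions survive:
# [('motion-1', False), ('motion-1', True), ('motion-1', False)]
```

In this example two of the five readings are dropped; with slow-changing sensors like motion or occupancy, the real-world reduction can be much larger.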


 


Another change we made to optimize our sensor traffic was to add edge processing. We introduced an Edge module that acts as a gateway between the devices where the readings are generated and IoT Hub.


 


The gateway module processes the data closer to the actual physical devices and helped us filter out certain readings based on defined rules, e.g., separating health readings from telemetry readings. We also used this module to enrich the sensor readings being sent to IoT Hub, which helped reduce overall processing time.


 


Pro-active Monitoring and Alerts


All said and done, we have tested everything at scale and things look good. But does that mean we will never run into a problem or never hit a limit? The answer is no. Since there is no guarantee, we need to prepare for such an eventuality.


 


ADT provides various out-of-the-box (OOB) metrics that help track the Twin count and the RPS for Twin API and Query API operations. We could always write our own code to track more metrics if required, but in our case we didn’t need to, as the OOB metrics fulfilled our requirements.


 


To proactively monitor system behavior, we created a dashboard in Application Insights for monitoring ADT, with widgets tracking consumption against each of the important ADT limits. Here’s what our widgets look like:


  • Query API Operations RPS: Min 50, Avg 80, Max 400

  • Twin API Operations RPS: Min 100, Avg 200, Max 500

  • Twin and Model Count: Models 100 (10% consumption), Twins 80 K (40% consumption)
 


To be notified of any discrepancy in the system, we have alerts configured to raise a flag if we consistently approach the ADT limits over a period of time. For example, we have alerts defined at various levels for Twin count, say a warning at 80% capacity and a critical error at 95%, so that folks can act.
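The tiered rule behind such alerts is simple to sketch. The 80% and 95% thresholds match the example above; the twin limit here is an illustrative placeholder, not the official service limit:

```python
TWIN_LIMIT = 200_000  # illustrative instance limit, not an official number

def alert_level(twin_count, limit=TWIN_LIMIT):
    """Map current twin consumption to an alert severity."""
    consumption = twin_count / limit
    if consumption >= 0.95:
        return "critical"
    if consumption >= 0.80:
        return "warning"
    return "ok"

print(alert_level(80_000))   # 40% consumed  -> ok
print(alert_level(170_000))  # 85% consumed  -> warning
print(alert_level(191_000))  # 95.5% consumed -> critical
```

In practice the same check is expressed as metric alert rules on the OOB Twin Count metric rather than code, but the thresholds work the same way.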


 


As an example of how this helped us: once, due to a bug (or was it a feature? 🙂), a piece of code kept adding unwanted Twins overnight. In the morning we started getting alerts about Twin capacity crossing the 80% limit, which notified us of the issue, and we eventually fixed the bug and cleaned up.


 


Summary


I will leave you with a simple summary: while dealing with ADT at scale, work within the various limits set by ADT and design your system with them in mind. Plan to test and performance test your system to catch issues early, and incorporate changes as required. Set up regular monitoring and alerting so that you can track system behavior continuously.


 


Last but not least, keep in mind that dealing with scale is not a one-time thing but continuous work: you need to keep evolving, testing, and optimizing your system as it grows and you add more features.


 

Join us at SapphireNOW for the latest updates on running SAP solutions on Microsoft Azure


We are excited to participate in SAP SapphireNOW this year, building on the momentum of our expanded partnership with SAP, announced earlier this year, with new Microsoft Azure and Microsoft Teams integrations. At Microsoft Ignite last year, we shared a migration framework to help customers achieve a secure, reliable, and simplified path to the cloud. The expanded partnership with SAP and the migration framework are key components of our broader vision to build the best platform for mission-critical SAP applications for both customers and partners. As a Gold sponsor at SapphireNOW, we are participating in several sessions, including one in which Matt Ordish covers how customers can unlock value from their SAP data on Azure, and another in which Scott Guthrie joins SAP’s Thomas Saueressig in the Supply Chain keynote to discuss why supply chain resilience and agility are paramount to ensuring continual business operations and customer satisfaction.


 


Our platform continues to run some of the biggest SAP landscapes in the cloud across industries. Achmea, a leading insurer based in the Netherlands, is building a cloud strategy that aims to deliver flexibility, scalability, and speed across the business, with less intensive lifecycle management requirements for its IT team. With close collaboration between Microsoft and SUSE, Achmea has successfully moved some of its biggest SAP systems to Azure. Walgreens recently completed the 9,100-store rollout of its SAP S/4HANA Retail system in the United States. The new solution enables Walgreens’ ambition for a digital, enhanced customer experience and replaces the legacy Retail, Supply Chain, and Business Services back end of the business with modern solutions and business processes.


 


SAP on Azure Migration Framework


 




 


We have continued our strong pace of innovation across our key product areas building on the updates we announced in our SAP TechEd blog last year. Across the migration framework, we have several new product updates to further simplify running SAP solutions on Microsoft Azure. Below is a summary of some of the key product updates:



  1. Lower TCO and improve dynamic handling of bursts in workloads: We have added new SAP-certified compute and storage options and capabilities to help you right-size your infrastructure during the PREPARE phase of your migration.

  2. Cloud-native solutions for efficient and agile operations during the MIGRATE and RUN phases: The new release of our deployment automation framework lays the foundation for repeatable SAP software installs, OS settings, and other configurations. We have also added key improvements to Azure Monitor for SAP Solutions (AMS) and Azure Backup for HANA.

  3. Deeper integration with our partners to simplify the customer journey to Azure: We are building our platform from the ground up for better integration with our managed service provider (MSP) and ISV partners. We are proud to share updates on our work with Scheer GmbH and our preferred partnership announcements with SNP and Protera.


Let’s walk through these updates in more detail.


 


New SAP-certified Msv2 Medium Memory VM sizes deliver a 20% increase in CPU performance, increased flexibility with local disks, and a new intermediate scale-up option


In response to customer feedback, we announced the Preview of new M-series VMs for memory-optimized workloads at Ignite. Today we are happy to announce the GA of these VMs with SAP certification. These new Msv2 medium memory VMs:



  • Run on the Intel® Xeon® Platinum 8280 (Cascade Lake) processor, offering a 20% performance increase over the previous version.

  • Are available in both disk and diskless offerings, giving customers the flexibility to choose the option that best meets their workload needs.

  • Provide new isolated VM sizes with more CPU and memory, going up to 192 vCPUs with 4 TiB of memory, so customers have an intermediate scale-up option before moving to a 416 vCPU VM size.


For more details on specs, regional availability, and pricing, read the blog announcement.


 


Lower TCO, consolidate complex HA/DR landscapes, and achieve faster database restart times with SAP-certified SAP HANA Large Instances (HLI) with Intel Optane


We are excited to announce 12 new SAP certifications of SAP HLI with Intel Optane following the recent record benchmark on the massive S896om instance. SAP HLI with Intel Optane helps customers reduce costs, simplify complex HA/DR landscapes, and achieve 22x faster database restart times, significantly reducing planned and unplanned downtimes. These new certified instances enable customers to:



  • Reduce the cost of SAP HLI Optane instances by 15%.

  • Take advantage of the SAP HLI Optane high memory density offerings for OLTP and OLAP workloads to reduce platform complexity in scale-up and scale-out scenarios.

  • Build on proven performance: the SAP benchmark on the 16-socket, 24 TB S896om HLI instance is the first of its kind on a hyperscaler platform and the highest recorded for an Optane system. This is a testament to our commitment to delivering performant, reliable, and innovative infrastructure for customers running mission-critical SAP workloads on Azure.


Please see the SAP HANA hardware directory for the recently certified HLI Optane instances in Azure.


 


New Azure storage capabilities in public preview to simplify handling dynamic workloads


We have added several capabilities to Azure premium storage to improve operations and offer customers more flexibility in handling the dynamic nature of their workloads.



  1. For phases of increased I/O workload, we released the ability to change the premium storage performance tier of a disk last year. Building on that, we now have a public preview for changing the performance tier of a disk online to a higher tier. This addresses scenarios in which you need higher IOPS or I/O throughput for time-consuming operational tasks or seasonal workloads.

  2. For short I/O workload bursts, we released a public preview of on-demand bursting for premium storage disks larger than 512 GiB. On-demand bursting allows bursts of up to 30K IOPS and 1 GB/sec throughput per disk. This addresses short bursts of I/O workload at regular or irregular intervals and avoids short-term throttling of I/O activity.

  3. For several SAP-certified Azure VM families, we released VM-level disk bursting. In smaller VM sizes, such as those used in the SAP application layer, I/O peaks can easily reach the VM quota limits assigned for storage traffic. With VM-level disk bursting, VMs can handle such peaks without destabilizing the workload.

  4. Price reduction for Azure Ultra Disk: As of April 1st, 2021, we reduced the price of the I/O throughput component of Azure Ultra Disk by 65%. This price reduction leads to better economics, especially with SAP workload-triggered I/O load on the DBMS back end.


Accelerate deployment with the SAP on Azure Deployment Automation Framework


To help our customers effectively deploy SAP solutions on Azure, we offer the SAP on Azure Deployment Automation Framework, which includes infrastructure as code and configuration as code, released as open source and built on Terraform and Ansible. We are excited to have released the next version of the framework. This release introduces capabilities that simplify the initial setup required to begin the deployment process for SAP systems. Other capabilities in this release include support for subnets as part of the workload zone, an Azure Firewall for outbound internet connectivity in isolated environments, and support for encryption using customer-managed keys. A detailed list of capabilities is in the Release Notes.


A new feature becoming available in private preview at the end of June is the SAP Installation Framework. This feature will provide customers with a repeatable and consistent process to install an SAP product. To register for the private preview, please fill out the SAP Deployment Automation private preview request form.


 


Seamlessly monitor SAP NetWeaver metrics with Azure Monitor for SAP Solutions (AMS)


Customers can now use AMS to monitor SAP NetWeaver systems in the Azure portal. In addition, customers can use the existing capabilities in AMS to monitor SAP HANA, Microsoft SQL Server, High-availability (pacemaker) Clusters, and Linux Operating System. The SAP BASIS and infrastructure teams within an organization can use AMS to collect, visualize and perform end-to-end SAP technical monitoring in one place within the Azure portal. Customers can get started with AMS without any license fee (consumption charges are applicable) in the following regions: US East, US East 2, US West 2, North Europe, and West Europe. 


 


Speed up backups with the latest updates to Azure Backup for SAP HANA


Now SAP HANA users can protect larger SAP HANA databases with a faster backup solution (up to 8 TB of full backup size and ~420 MBps speeds during backup and restore). Support for incremental backup is now generally available, enabling customers to assign cost-optimal backup policies such as weekly and daily incrementals. Customers can also secure their backups with HANA native keys or customer-managed keys in key vault for backup data at rest, and protect against regional outages by restoring to secondary paired region on-demand using the cross-region restore functionality. Customers can also monitor their entire Azure backup estate using Backup center, which is now generally available.


 


Strategic collaborations and product integrations with our partners will accelerate moving SAP solutions to Microsoft Azure


As we invest in building solutions that simplify the migration of SAP solutions to Azure, we collaborate closely with our global partners right from the design stage. One such design partner is Scheer GmbH, which specializes in providing SAP consulting and managed services for customers in the EU region and has been one of our earliest design partners in the development of products such as the SAP deployment automation framework (available as open source), Azure Monitor for SAP Solutions, and the SAP HANA backup solution. Robert Mueller, Managing Director of the Cloud Managed Services group at Scheer, says: “With Microsoft Azure, we can offer our SAP customers the best and most secure cloud platform for their digital transformation ever. Through our partnership, we were able to incorporate our years of experience from operating SAP systems as well as the requirements of our customers into the Azure services. The Microsoft engineering team is an excellent and reliable partner who understands the needs of MSPs as well as their customers on an equal footing and implements them in an agile fashion. We look forward to continuing this partnership in the future and learn more from each other to create the best cloud to run SAP and business processes!”


Customers with complex SAP migration requirements can leverage our preferred partnership with SNP. SNP’s BLUEFIELD™ approach helps customers migrate and upgrade to S/4HANA in a single, non-disruptive go-live. SNP’s Cloud Move for Azure provides rapid assessments, Azure target solutioning, and pricing. SNP’s CrystalBridge® offers automated migration capabilities in a cost-effective and secure fashion. Customers can also take advantage of the data transformation capabilities of CrystalBridge® for selective data migration, mergers, and acquisitions.


We are happy to announce that we have also established a preferred partnership with Protera. Protera’s FlexBridge® helps customers accelerate SAP migrations to Azure by providing automated assessments, intelligent provisioning, and automated migrations. FlexBridge® also offers SAP management solutions on Azure providing a dashboard of operational metrics, service desk integration, cost analysis, and SAP maintenance optimization such as patching, system copies, and environment snoozing.


 


In addition to these updates, stay tuned for several exciting product announcements in the next few months. As we wrap this year, I also wanted to share why thousands of customers, including SAP, trust Azure to run their SAP solutions. We look forward to seeing you virtually at our sessions at SapphireNOW.