by Contributed | May 5, 2021 | Technology
This article is contributed. See the original author and article here.
Nodes in a Service Fabric cluster may go down for several reasons. Below are the probable causes and mitigations.
Scenario#1:
Check whether the Virtual Machine associated with the Node exists, or has been deleted or deallocated.
Azure Portal-> VMSS Resource -> Instances

If the Virtual Machine doesn’t exist, perform either of the steps below to remove the node state from the Service Fabric cluster.
From SFX:
- Go to the Service Fabric Explorer (SFX) of the cluster.
- Check the “Advanced mode” setting check box on the cluster:

- Then click on the Ellipsis (…) of the down node to reveal the “Remove node state” option and click on it. This should remove the node state from the cluster.
From PS Command:
PS cmd: Remove-ServiceFabricNodeState -NodeName _node_5 -Force
Reference: https://docs.microsoft.com/en-us/powershell/module/servicefabric/remove-servicefabricnodestate?view=azureservicefabricps
Scenario#2:
Check whether the Virtual Machine associated with the node is healthy in the VMSS.
Go to Azure Portal-> VMSS Resource -> Instances -> Click on the Instance -> Properties

If the Virtual Machine Guest Agent is “Not Ready”, reach out to the Azure VM team for the RCA.
Possible Mitigation:
- Restart the Virtual machine from VMSS blade.
- Re-image the Virtual Machine.
Scenario#3:
Check the performance of the Virtual Machine, such as CPU and Memory usage.

If CPU or Memory usage is high, Fabric-related processes will not be able to start their instances, causing the node to go down.
Mitigation:
- Check which process is consuming high CPU/Memory in Task Manager to investigate the root cause and fix the issue permanently.
Collect dumps using one of the tools below to determine the root cause:
DebugDiag:
Download Debug Diagnostic Tool v2 Update 3 from Official Microsoft Download Center
(or) Procdump:
ProcDump – Windows Sysinternals | Microsoft Docs
- Restart the Virtual machine from VMSS blade.
Scenario#4:
Check the Disk usage of the Virtual Machine; no space on the disk could lead to Node down issues.
For disk space related issues, we recommend using the ‘windirstat’ tool mentioned in the article: https://github.com/Azure/Service-Fabric-Troubleshooting-Guides/blob/master/Cluster/Out%20of%20Diskspace.md to understand which folders are consuming the most space.
Mitigation:
Free up space to bring the Node up.
by Contributed | May 5, 2021 | Technology
We are pleased to announce the launch of the new Visio start page on Office.com, providing Visio users with a familiar start experience similar to other Office 365 apps, along with powerful Office features like improved file discovery, improved collaboration, and better template categorization.
From the new landing page, you will experience the benefits of the Office.com ecosystem combined with unique features of the existing Visio landing page. Below are additional details on the latest enhancements:
- Improved collaboration: The new design will allow you to see recommended files with actions from your teammates, like “Anne edited this Thursday at 9:10 pm.” You can easily open files that have been recently edited, accessed, and reviewed by your colleagues and quickly pick up where they left off.

- Better discovery of existing files: With the addition of My recent and Shared files, you can access your frequently used Visio files easily. There is also an option to add your files to the Favorites section for quick access by clicking on the star icon adjacent to the file.

- Better categorization of templates and sample diagrams: The new experience provides a set of templates at the top of the start page to quickly create new Visio diagrams.

To access the full set of templates and sample diagrams, click on More templates, which will direct you to the “More templates” page. Here, you will see the vast repertoire of Visio’s templates and sample diagrams. Navigate to the desired template and click on the Create button to design your new Visio diagram quickly.

- New search experience: The new search experience will enable you to search quickly through Visio’s rich library of templates and sample diagrams using relevant search terms to help facilitate quicker file creation.

- Overall performance improvements: With the new landing page, you will also experience vastly improved app performance, including reduced time to open existing Visio files and to create new ones.
How to access the new landing page
If you have a Visio Plan 1 or Visio Plan 2 subscription, you can access the new landing page via any of the below entry points:
- Click on the waffle menu in the top left. Then, click on All apps and search for “Visio.”

- Search for “Visio” in the universal search box at the top of the page and click on the Visio icon under the Apps header of the search results dropdown.

- Click on the All apps icon in the left navigation bar. Then, click on the Visio app tile under the Office 365 tab.

This experience will be rolling out gradually to our users, so stay tuned to experience the new start page soon!
New announcements are coming your way shortly, so keep checking the Visio Tech Community blog and please continue to send us your product feedback and ideas through UserVoice.
by Contributed | May 4, 2021 | Technology
What’s Up with Markdown?

Perhaps you’ve noticed a technology called Markdown that’s been showing up in a lot of web sites and apps lately. This article will explain Markdown and help you get started reading and writing it.
Markdown is a simple way to format text using ordinary punctuation marks, and it’s very useful in Microsoft 365. For example, Microsoft Teams supports markdown formatting in chat messages and SharePoint has a Markdown web part. Adaptive Cards support Markdown as well, as do Power Automate approvals. For the bot builders among us, Bot Composer language generation and QnA Maker both support markdown as well. And what’s at the top level of nearly every Github repo? You guessed it, a markdown file called README.md.
Designed to be intuitive
Imagine you’re texting someone and all you have to work with is letters, numbers, and a few punctuation marks. If you want to get their attention, you might use **asterisks**, right? If you’ve ever done that, then you were already using Markdown! Double asterisks make the text bold.
Now imagine you’re replying to an email and want to quote what someone said earlier in the thread. Many people use a little greater-than sign like this:
Parker said,
> Sharing is caring
Guess what, that’s Markdown too! When it’s displayed, it looks like this:
Parker said,
Sharing is caring
Did you ever make a little table with just text characters, like this?
Alpha | Beta | Gamma
------|------|------
1 | 2 | 3
If so, you already know how to make a table in Markdown!
Markdown was designed to be intuitive. Where possible, it uses the formatting clues people type naturally. So you can type something _in italics_ on the screen and it actually appears in italics.
In all cases you’re starting with plain text – the stuff that comes out of your keyboard and is edited with Notepad or Visual Studio Code – and turning it into something richer. (Spoiler alert: it’s HTML.)
What about emojis?
Markdown neither helps nor blocks emojis; they’re just characters. If your application can handle emojis, you can certainly include them in your markdown.
Commonly used Markdown
Markdown isn’t a formal standard, and a lot of variations have emerged. It all started at Daring Fireball; most implementations are faithful to the original but many have added their own features. For example, the SharePoint Markdown Web Part uses the “Marked” syntax; if you’re creating a README.md file for use in Github, you’ll want to use Github Flavored Markdown (GFM).
This article will stick to the most commonly used features that are likely to be widely supported. Each section will show an example of some markdown and then the finished rendering (which, again, may vary depending on what application you’re using).
Each of the following sections shows an example of some simple Markdown, followed by the formatted result.
1. Emphasizing Text
Markdown:
You can surround text with *single asterisks* or _single underscores_ to emphasize it a little bit;
this is usually formatted using italics.
You can surround text with **double asterisks** or __double underscores__ to emphasize it more strongly;
this is usually formatted using bold text.
Result:
You can surround text with single asterisks or single underscores to emphasize it a little bit; this is usually formatted using italics.
You can surround text with double asterisks or double underscores to emphasize it more strongly; this is usually formatted using bold text.
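Under the hood, a Markdown processor turns those emphasis markers into HTML tags. Here is a minimal Python sketch of the idea (a toy, not a full Markdown implementation; real processors also handle escaping and nesting):

```python
import re

def emphasize(text: str) -> str:
    """Convert Markdown emphasis markers to HTML (simplified sketch)."""
    # Handle strong emphasis first, so ** is not mistaken for two single *
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"__(.+?)__", r"<strong>\1</strong>", text)
    # Then regular emphasis
    text = re.sub(r"\*(.+?)\*", r"<em>\1</em>", text)
    text = re.sub(r"_(.+?)_", r"<em>\1</em>", text)
    return text

print(emphasize("a **bold** and _italic_ word"))
# → a <strong>bold</strong> and <em>italic</em> word
```

Notice the order matters: double markers must be replaced before single ones.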
2. Headings
You can make headings by putting several = signs (for a level 1 heading) or - signs (for a level 2 heading) on the line below your heading text.
Markdown:
My Heading
==========
Result:
My Heading
You can also make headings with one or more hash marks in column 1. The number of hash marks controls the level of the heading.
Markdown:
# First level heading
## Second level heading
### Third level heading
etc.
Result:
First level heading
Second level heading
Third level heading
etc.
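A processor simply maps the number of hash marks to the heading level. A simplified Python sketch (assuming a single line of input and no escaping):

```python
import re

def heading_to_html(line: str) -> str:
    """Turn a '#'-style Markdown heading into an HTML heading (sketch)."""
    m = re.match(r"(#{1,6})\s+(.*)", line)
    if not m:
        return line  # not a heading; leave unchanged
    level = len(m.group(1))  # number of hash marks = heading level
    return f"<h{level}>{m.group(2)}</h{level}>"

print(heading_to_html("## Second level heading"))
# → <h2>Second level heading</h2>
```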
3. Hyperlinks
Markdown:
To make a hyperlink, surround the text in square brackets
immediately followed by the URL in parentheses (with no space in
between!) For example:
[Microsoft](https://www.microsoft.com).
Result:
To make a hyperlink, surround the text in square brackets immediately followed by the URL in parentheses (with no space in between!) For example: Microsoft.
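The bracket-then-parentheses pattern is easy to translate mechanically. A simplified Python sketch (it ignores edge cases like nested brackets):

```python
import re

def link_to_html(text: str) -> str:
    """Convert [text](url) Markdown links to HTML anchors (sketch)."""
    return re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r'<a href="\2">\1</a>', text)

print(link_to_html("Visit [Microsoft](https://www.microsoft.com)."))
# → Visit <a href="https://www.microsoft.com">Microsoft</a>.
```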
4. Images
Images use almost the same syntax as hyperlinks except they begin with an exclamation point. In this case the “alt” text is in square brackets and the image URL is in parentheses, with no spaces in between.
Markdown:

Result:
In case you were wondering, you can combine this with the hyperlink like this:
Markdown:
[](http://pnp.github.io)
Result:

5. Paragraphs and line breaks
Markdown:
Markdown will
automatically
remove
single line breaks.
Two line breaks start a new paragraph.
Result:
Markdown will automatically remove single line breaks.
Two line breaks start a new paragraph.
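This behavior (join single line breaks, split paragraphs on blank lines) is simple to sketch in Python; a toy version that ignores every other Markdown feature:

```python
def paragraphs_to_html(markdown: str) -> str:
    """Join single line breaks and wrap blank-line-separated blocks in <p> (sketch)."""
    blocks = [b for b in markdown.split("\n\n") if b.strip()]
    # " ".join(b.split()) collapses internal line breaks into single spaces
    return "\n".join("<p>" + " ".join(b.split()) + "</p>" for b in blocks)

print(paragraphs_to_html("Markdown will\nremove\nsingle line breaks.\n\nTwo start a new paragraph."))
# → <p>Markdown will remove single line breaks.</p>
# → <p>Two start a new paragraph.</p>
```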
6. Block quotes
Markdown:
Use a greater than sign in column 1 to make block quotes like this:
> Line 1
> Line 2
Result:
Use a greater than sign in column 1 to make block quotes like this:
Line 1 Line 2
7. Bullet lists
Markdown:
Just put an asterisk or dash in front of a line that should be bulleted.
* Here is an item starting with an asterisk
* Here is another item starting with an asterisk
  * Indent to make sub-bullets
  * Like this
- Here is an item with a dash
- Changing characters makes a new list.
Result:
Just put an asterisk or dash in front of a line that should be bulleted.
- Here is an item starting with an asterisk
- Here is another item starting with an asterisk
  - Indent to make sub-bullets
  - Like this
- Here is an item with a dash
- Changing characters makes a new list.
8. Numbered lists
Markdown:
1. Beginning a line with a number makes it a list item.
1. You don’t need to put a specific number; Markdown will renumber for you
8. This is handy if you move items around
   1. Don’t forget you can indent to get sub-items
      1. Or sub-sub-items
1. Another item
Result:
1. Beginning a line with a number makes it a list item.
2. You don’t need to put a specific number; Markdown will renumber for you
3. This is handy if you move items around
   1. Don’t forget you can indent to get sub-items
      1. Or sub-sub-items
4. Another item
9. Code samples
Many markdown implementations know how to format code by language. (This article was written in Markdown and made extensive use of this feature using “markdown” as the language!) For example, to show some HTML:
Markdown:
~~~html
<button type="button">Do not push this button</button>
~~~
Result:
<button type="button">Do not push this button</button>
10. Tables
Tables are not universally supported but they’re so useful they had to be part of this article. Here is a simple table. Separate columns with pipe characters, and don’t worry about making things line up; Markdown will handle that part for you.
Markdown:
Column 1 | Column 2 | Column 3
---|---|---
Value 1a | Value 2a | Value 3a
Value 1b | Value 2b | Value 3b
Result:
Column 1 | Column 2 | Column 3
---------|----------|---------
Value 1a | Value 2a | Value 3a
Value 1b | Value 2b | Value 3b
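To see how little work a basic pipe table needs, here is a simplified Python sketch that converts one to an HTML table (it skips the alignment colons that real implementations support):

```python
def table_to_html(md: str) -> str:
    """Convert a simple Markdown pipe table to HTML (sketch; ignores alignment)."""
    lines = md.strip().splitlines()
    header = [c.strip() for c in lines[0].split("|")]
    rows = [[c.strip() for c in line.split("|")] for line in lines[2:]]  # skip the --- row
    html = ["<table>", "<tr>" + "".join(f"<th>{c}</th>" for c in header) + "</tr>"]
    for row in rows:
        html.append("<tr>" + "".join(f"<td>{c}</td>" for c in row) + "</tr>")
    html.append("</table>")
    return "\n".join(html)

print(table_to_html("A | B\n---|---\n1 | 2"))
```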
HTML and Markdown
Markdown doesn’t create just any old formatted text – it specifically creates HTML. In fact, it was designed as a shorthand for HTML that is easier for humans to read and write.
Many Markdown implementations allow you to insert HTML directly into the middle of your Markdown; this may be limited to certain HTML tags depending on the application. So if you know HTML and you’re not sure how to format something in Markdown, try including the HTML directly!
Editing Markdown
If you’d like to play with Markdown right now, you might like to try the Markdown Previewer where you can type and preview Markdown using any web browser.
For more serious editing, Visual Studio Code does a great job, and has a built-in preview facility. Check the VS Code Markdown documentation for details.
There’s a whole ecosystem of tools around Markdown including converters for Microsoft Word and stand-alone editing apps; these are really too numerous to list but are easy to find by searching the web.
Legacy
From vinyl records to 8-bit games and static web sites, there’s a trend these days to rediscover simpler technologies from the past. Markdown definitely falls into this category.
Back before “WYSIWYG” (What You See Is What You Get) word processors were cheap and pervasive, there were “runoff” utilities that were very much like Markdown. They turned text files into nicely formatted printed documents (usually PostScript). Markdown harkens back to these legacy tools, but adds HTML compatibility and an intuitive syntax.
Conclusion
While it may seem unfamiliar at first, Markdown is intended to make it easy for people to read and write HTML. Whether you’re a power user, IT admin, or developer, you’re bound to run into Markdown sooner or later. Here’s hoping this article makes it a little easier to get started!
by Contributed | May 4, 2021 | Technology
Performance tuning is often harder than it should be. To help make this task a little easier, the Azure Cognitive Search team recently released new benchmarks, documentation, and a solution that you can use to bootstrap your own performance tests. Together, these additions will give you a deeper understanding of performance factors, how you can meet your scalability and latency requirements, and help set you up for success in the long term.
The goal of this blog post is to give you an overview of performance in Azure Cognitive Search and to point you to resources so you can explore the concept more deeply. We’ll walk through some of the key factors that determine performance in Azure Cognitive Search, show you some performance benchmarks and how you can run your own performance tests, and ultimately provide some tips on how you can diagnose and fix performance issues you might be experiencing.
Key Performance Factors in Azure Cognitive Search
First, it’s important to understand the factors that impact performance. We outline these factors in more depth in this article, but at a high level they can be broken down into three categories.
It’s also important to know that both queries and indexing operations compete for the same resources on your search service. Search services are heavily read-optimized to enable fast retrieval of documents. The bias towards query workloads makes indexing more computationally expensive. As a result, a high indexing load will limit the query capacity of your service.
Performance benchmarks
While every scenario is different and we always recommend running your own performance tests (see the next section), it’s helpful to have a benchmark for the performance you can expect. We have created two sets of performance benchmarks representing realistic workloads to help you understand how Cognitive Search might perform in your scenario.
These benchmarks cover two common scenarios we see from our customers:
- E-commerce search – this benchmark is based on a real customer, CDON, the Nordic region’s largest online marketplace
- Document search – this benchmark is based on queries against the Semantic Scholar dataset
The benchmarks will show you the range of performance you might expect based on your scenario, search service tier, and the number of replicas/partitions you have. For example, in the document search scenario which included 22 GB of documents, the maximum queries per second (QPS) we saw for different configurations of an S1 can be seen in the graph below:

As you can see, the maximum QPS achieved tends to scale linearly with the number of replicas. In this case, there was enough data that adding an additional partition significantly improved the maximum QPS as well.
You can see more details on this and other tests in the performance benchmarks document.
Running your own performance tests
Above all, it’s important to run your own performance tests to validate that your current setup meets your performance requirements. To make it easier to run your own tests, we created a solution containing all the assets needed for you to run scalable load tests. You can find those assets here: Azure-Samples/azure-search-performance-testing.
The solution assumes you have a search service with data already loaded into the search index. We provide a couple of default test strategies that you can use to run the performance test as well as instructions to help you tailor the test to your needs. The test will send a variety of queries to your search service based on a CSV file containing sample queries and you can tune the query volume based on your production requirements.
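For illustration, here is a hypothetical Python sketch of the first step such a test performs: turning a CSV of sample queries into Search REST API request URLs. The service name and index name below are placeholders, not part of the sample solution:

```python
import csv
import io
from urllib.parse import quote

# Hypothetical sample input: a CSV with one query per row.
sample_csv = "query\nlaptop\nwireless mouse\n"

# Placeholder service and index names (substitute your own).
service = "https://myservice.search.windows.net"
urls = [
    f"{service}/indexes/myindex/docs?search={quote(row['query'])}&api-version=2020-06-30"
    for row in csv.DictReader(io.StringIO(sample_csv))
]
print(urls[0])
# → https://myservice.search.windows.net/indexes/myindex/docs?search=laptop&api-version=2020-06-30
```

In the actual solution, JMeter handles building and sending the requests at scale; this just shows how the query CSV feeds the test.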
Apache JMeter is used to run the tests giving you access to industry standard tooling and a rich ecosystem of plugins. The solution also leverages Azure DevOps build pipelines and Terraform to run the tests and deploy the necessary infrastructure on demand. With this, you can scale to as many worker nodes as you need so you won’t be limited by the throughput of the performance testing solution.

After running the tests, you’ll have access to rich telemetry on the results. The test results are integrated with Azure DevOps and you can also download a dashboard from JMeter that allows you to see a range of statistics and graphs on the test results:

Improving performance
If you find your current levels of performance aren’t meeting your needs, there are several different ways to improve performance. The first step to improve performance is understanding why your service isn’t performing as you expect. By turning on diagnostic logging, you can gain access to a rich set of telemetry about your search service—this is the same telemetry that Microsoft Azure engineers use to diagnose performance issues. Once you have diagnostic logs available, there’s step-by-step documentation on how to analyze your performance.
Finally, you can check out the tips for better performance to see if there are any areas you can improve on.
If you’re still not seeing the performance you expect, feel free to reach out to us at azuresearch_contact@microsoft.com.
by Contributed | May 4, 2021 | Technology
How to build a fast, scalable data system on Azure SQL Database Hyperscale. Hyperscale’s flexible architecture scales with the pace of your business to process large amounts of data with a small amount of compute in just minutes, and allows you to back up data almost instantaneously.

Zach Fransen, VP of data and AI at Xplor, joins Jeremy Chapman to share how credit card processing firm Clearent by Xplor built a fast, scalable merchant transaction reporting system on Azure SQL Database Hyperscale. Take a deep dive into their Hyperscale implementation, from their approach with micro-batching to continuously bring in billions of rows of transactional data from their on-premises payment fulfillment system at scale, to their optimizations for near real-time query performance using clustered columnstore indexing for data aggregation.
QUICK LINKS:
00:35 — Intro to Clearent
01:33 — Starting point and challenges
03:12 — Clearent’s shift to Hyperscale
04:53 — Near real-time reporting/micro-batching
06:25 — See it in action
08:28 — Processing large amounts of data
09:42 — Named replicas
10:34 — Query speed ups — clustered column store indexing
11:45 — What’s next for Clearent by Xplor?
12:26 — Wrap up
Link References:
Learn more about Clearent by Xplor and what they’re doing with Hyperscale at https://aka.ms/ClearentMechanics
For more guidance on implementing Azure SQL Database Hyperscale, check out https://aka.ms/MechanicsHyperscale
Unfamiliar with Microsoft Mechanics?
We are Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Keep getting this insider knowledge, join us on social:
Video Transcript:
– Up next, we meet with credit card processing firm Clearent by Xplor to see how they built a fast, scalable merchant transaction reporting system on Azure SQL Database Hyperscale. From their approach with micro-batching to continuously bring in billions of rows of transactional data from their on-premises payment fulfillment system at scale, as well as their optimizations for near real-time query performance using clustered columnstore indexing for data aggregation and much more. So today, I’m joined by Zach Fransen, who’s the VP of data and AI at Xplor. So welcome to Microsoft Mechanics.
– Thanks. It’s great to be on the show.
– Thanks so much for joining us today. It’s really great to have you on. So if you’re new to Clearent by Xplor, they’re a US-based payment solution provider that enables merchants and retailers to accept credit card payments from anywhere. They process over 500 million transactions per year. And since their inception in 2005, they’ve built several intelligent solutions to reduce credit card processing fees and significantly speed up the fulfillment of credit card payments with next day funding. Now key to this has been connecting their back-end payment fulfillment system with their transaction reporting system. In fact, they’ve recently completed an app and data modernization effort using Azure SQL Database Hyperscale. Their new Compass Online Reporting Suite gives their customers unprecedented and concurrent access to sales transaction data stored on Azure SQL Hyperscale for near real-time views of revenue, sales trends, and much more. So Zach, to put this effort into context, can you help us understand where you were coming from?
– Sure. So when we started this effort, we were fully on-premises. We had two monolithic legacy apps, Horizon and Compass, running off the same Clearent database. Payments from merchants came in from a legacy third-party mainframe payment gateway and landed in Horizon as flat files. Horizon takes care of qualification, billing, and settling merchant payments. Part of our value add is to pay our merchant customers ahead of the credit agencies. When merchants batch in their sales at the end of the day, the system qualifies the nature of the transaction by checking things like if the transaction was made online or from a card reader via a chip, magnetic strip, or a rewards card. These are all things that determine the cost of the transaction and the fees charged by Visa, Mastercard, Amex and other credit agencies. We also charge a derivative fee based on that, so this is an intricate process to get right. In parallel, our Compass reporting system running on IIS had a SQL Server always on cluster on the same Clearent database back-end that would pull data from Horizon using a stored procedure to transform the data into reporting schemas for merchants to get visibility into their transactions.
– Okay, so how often were you able to pull the data for reporting?
– Well, we did a two-day lookback once a day, so we were having to process two full days of data every day for reporting and we were always a day behind. There was also contention because we were using the same database for processing transactions and reporting. And as our data and customers grew, our run times for processing the data coming into Horizon took longer. Also, concurrent customer queries in Compass would sometimes fail.
– That’s not too surprising. Large database operations like these are prone to performance issues and storage limitations. Plus, you had to find a better way to aggregate the data and make it more available for reporting as well. So what approach then did you take to remediate some of these issues?
– Well, we adopted a hybrid approach which allowed us to keep our data on-premises but migrate the data that we needed for reporting into the Cloud. As our foundation was SQL Server, we wanted to continue to use that skill set and choose an option where we didn’t have to worry about storage limits or syntax changes. We went with the Azure SQL Database Hyperscale service tier, which scales up to 100TB of storage, as the reporting back-end for our new Compass Online Reporting Suite. We also modernized the Compass app, taking an extensible microservices-based approach. We built an Angular app that calls dozens of services that sit both on-prem and in the Cloud, depending on what it’s trying to do. And we also fragmented our data model to match our service topology. Then, to more consistently bring data in, we built our own payment gateway called Quest, which comprises a consistent set of APIs to pull in merchant data from physical credit card terminals, e-commerce, or integrated payment systems in use by our merchant customers. In fact, as we made it easier to bring data in and broke the Compass app and data monolith apart, Hyperscale’s role in providing a single hub pulling all the data together became even more important. Additionally, we only keep active data on-premises, so Hyperscale is critical in keeping a persistent record of all our historical data and in maintaining a lower data footprint on-prem in those transactional systems.
– Okay, so now you’ve got a more scalable and agile app and data layer, but with so many transactions that are happening at any one time, how would you solve for getting the data then into Hyperscale fast enough to support near real-time reporting for your merchants?
– So the game changer for us was adopting low-latency, intraday batch processing, and data streaming. We’ve split our Hyperscale instance into two logical layers, comprising an Operational Data Store and an Enterprise Data Warehouse. And we shifted from once-a-day batch processing to micro-batch processing where we use SQL Server’s Change Data Capture feature to poll our disparate systems for the latest change in data every two minutes. We have data extractors running on our on-premises databases reading our change capture tables, and we have a set of configuration tables sitting in our operational data store that tells it what columns to read. The raw data along with the change history for every table is then loaded into Hyperscale over Express Route into our ODS. And we only extract and load the delta tables in our Enterprise Data Warehouse for reporting and analytics. At the same time, as a credit card transaction is made, Quest immediately places it on a queue, and we use the SQL Server JSON parsing capability to extract out the pieces of information from the message that we are interested in, such as the date, time, and amount of the transaction, and then stream it directly into our star schemas in Hyperscale for reporting so that the data is available to merchants right away.
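The watermark-based delta extraction Zach describes can be sketched in a few lines. The following Python example is purely illustrative: it uses an in-memory SQLite table with a version column, whereas Clearent’s actual pipeline uses SQL Server Change Data Capture over ExpressRoute.

```python
import sqlite3

# A minimal sketch of watermark-based delta extraction, analogous to the
# micro-batch pattern described above. (Illustrative only; not the real system.)
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE transactions (id INTEGER, amount REAL, version INTEGER)")
src.executemany("INSERT INTO transactions VALUES (?, ?, ?)",
                [(1, 9.99, 1), (2, 25.00, 2), (3, 4.50, 3)])

last_watermark = 1  # highest change version already loaded into the warehouse

# Each polling cycle pulls only the rows changed since the last watermark.
delta = src.execute(
    "SELECT id, amount, version FROM transactions WHERE version > ? ORDER BY version",
    (last_watermark,)).fetchall()

print(delta)                   # the rows to load this cycle
last_watermark = delta[-1][2]  # advance the watermark for the next cycle
```

Run every couple of minutes, this keeps each batch small, which is what makes near real-time loading feasible.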
– This really sounds like a great solution. Can we see it in action?
– Sure. So here is the Compass Online Reporting Suite. On the homepage, you get a quick summary of all of your activity from the last recent deposit through to trends in sales volume. From there, you can drill in to see the detail. I’ll click into one of my deposits. For example, one thing that is useful for merchants is being able to deconstruct a deposit that may just show up as one line item on their bank account statement. In Compass, they can drill in to see the detail behind the deposit whether that’s multiple terminals or multiple batches because Hyperscale is bringing all that data together. Now let me show you what happens as soon as the transaction is made. We’re in the transaction view within a batch. I can see all my recent transactions in this batch. Now I’ll show the process of running a transaction along with where we can view it. I’m in the virtual terminal built on Quest, but this could be a physical terminal or another integrated payment system. I’ll make a credit card transaction. And you’ll see it succeeds in the virtual terminal in under a second. Now I’ll hop over to Compass into our batch view, hit refresh, and you’ll see the payment is pending right away. That’s made possible by Hyperscale instantaneously consuming the data from our messaging queue. Finally, I’ll hop over to my query in SQL Server Management Studio and I’ll query this batch in our Hyperscale database, and you’ll see the results are also instantly available. Next, if we look into our list of settled transactions, you’ll see that once the merchant batches in at the end of the day if they click into a deposit, it shows the amounts and fees along with the net deposit amount and when payment will be posted. I’ll click in to see transactions in this batch. Transaction data is also enriched by the data from our micro-batch processing, as you can see here by optional column views available. I’ll select Product and POS Entry Mode. 
You’ll see that information show up in our view and these are critical elements that determine the fees for the transaction.
– Okay, so this is an example then of just one transaction, but what if you have maybe 50,000 or more merchants that are running millions of transactions a day?
– Yeah. We process around 1 billion rows of data per month on our Hyperscale implementation right now. As you can see here in the portal, this has amounted to over 28TB. And a big advantage of Hyperscale is that the compute is independent of the size of your data. Let me give you another view from our monitoring tool. Here, the green chart represents Hyperscale. It’s processing a large quantity of billing information as the micro-batch comes in at intervals. It’s actually spiking up to almost 100% utilization, which is around 200,000 IOPS, and incidentally this is only using 8 cores right now, which is a relatively small amount of compute. We can elastically scale compute when we need to process a large amount of data or build an index on a giant table with all the history, and scaling up only takes a few minutes. Also in Hyperscale, backups are near instantaneous. It’s constantly taking snapshots of your data, which helps significantly with data restore. Through PowerShell, we’ve automated restore when we do full-scale load testing, and that only takes about an hour, even though it’s a 28TB database.
– Okay, so how else then have you configured your Hyperscale implementation?
– Secondary replicas. Not just read-only replicas, but named replicas have been game changing for us. This allowed us to keep our primary instance for our compute-intensive ETL and data processing. We can then assign named replicas for specific purposes like analytics and reporting. These tap into a copy of the same data, but have their own dedicated compute. If you recall, when I showed you the slider screen, we had a secondary replica configured for customer traffic, which is part of a scalable pool of replicas for load balancing. If I switch tabs, you can see that we also have a named replica configured to handle analytics traffic. Routing traffic to these replicas is as simple as changing the connection string. This means that critical customer workloads are not interrupted by our compute-intensive analytic operations.
– So with such massive amounts of data that are coming in, beyond assigning the right amount of compute, are you doing anything special then to speed up query operations?
– Yes, we do a few things. For example, one of the things we’ve really taken advantage of is clustered columnstore index for data aggregation. A big advantage of Azure SQL Hyperscale is that it’s very flexible. It can handle a variety of different types of traffic and does really rapid aggregation. Here, we have an example of an analytical workload where we are going to take a very large number of records and produce some aggregates. In this particular table, we have a little over 3 billion rows of data. To see some trending, I have a query that will produce a time series by collecting data from different months, which in this case, are in different partitions in a clustered columnstore index. When I run it, you can see it is able to aggregate records across three months in 2021, and it’s almost instantaneous. Another way to look at the data is to look at a cross-section using dimensional slices, like a state. And here, we see the total number of transactions in New York and Washington. So we’re able to look across billions of rows of data and pull these aggregates together in fractions of a second.
– That’s super impressive. And now you’ve transformed then your app and data stack for reporting with Azure SQL Database Hyperscale, but what’s next then for Clearent by Xplor?
– So given our recent merger with the transaction services group to form Xplor, we are taking our Clearent platform international to deliver a service called Xplor Pay with Azure SQL Database Hyperscale as our reporting back-end. And now that we’ve got our data infrastructure to this level of maturity, the next logical step is to leverage Azure Machine Learning. This will help offer additional value-added services to our customers, such as helping them to understand when to run specific promotions based on existing sales.
– Thanks so much, Zach, for joining us today and also giving us the deep dive on your Hyperscale implementation. In fact, to learn more about Clearent by Xplor and what they’re doing with Hyperscale, check out, aka.ms/ClearentMechanics. And for more guidance on implementing Azure SQL Database Hyperscale, check out aka.ms/MechanicsHyperscale. Of course, don’t forget to subscribe to our channel if you haven’t already. Thanks so much for watching. We’ll see you next time.