Upload Files to Azure Blob Storage using Power Automate Desktop


This article is contributed. See the original author and article here.

In this blog post, we are going to look at how you can automatically upload files to an Azure Blob Storage account using Power Automate Desktop. Power Automate Desktop is a great tool for building automation on your desktop. You can create flows, interact with everyday tools such as email and Excel, and work with modern and legacy applications.


 


For example, you can automate tasks like:



  • Quickly organize your documents using dedicated files and folders actions

  • Accurately extract data from websites and store it in Excel files using web and Excel automation

  • Apply desktop automation capabilities to put your work on autopilot.


One task I wanted to automate is uploading files to an Azure Blob Storage account for long-term storage. These can be small or large files; in my case, I wanted to back up all my large video files to an Azure Blob Storage account.


To learn more about Power Automate check out Microsoft Docs.


 


Preparation


 


Install Power Automate Desktop (it is free)


You can download Power Automate Desktop from here.


Sign in to the Power Automate Desktop Windows application using one of the following accounts and automate your tedious tasks.



A full comparison of the features included in each account can be found here.


 


Create an Azure Storage account


Next, create a storage account in Azure. An Azure storage account lets you host all of your Azure Storage data objects: blobs, files, queues, and tables. For more information about Azure storage accounts, see Storage account overview.


To create an Azure storage account just follow these steps on Microsoft Docs: Create a storage account.


 


Download AzCopy


Since I am dealing with large files, I decided to use the AzCopy utility. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. Get started with AzCopy.


 


Create Power Automate Desktop Flow


After we prepared everything, we can now start to create the flow in Power Automate Desktop.


Power Automate Desktop Flow – Upload to Azure Blob Storage using AzCopy


 


First, I create the following variables within the flow.



  • UploadFolder – This is the folder where I place my files, which I want to be uploaded

  • UploadedFolder – This is the folder where the file gets moved after it has been uploaded

  • AzCopy – This is the path where I saved the azcopy.exe

  • AzureBlobSAS – This is the URI for the Azure Blob Storage account including the Shared access signature (SAS) token


 


To generate the URI with the SAS (shared access signature) token, go to your Azure storage account in the Azure portal. Go to Containers and create a new container. Open the container and navigate to Shared access signature. Select the Add, Create, and Write permissions, adjust the expiry time if needed, and press Generate SAS token and URL. Copy the Blob SAS URL and save it as the variable in the flow.


 


Azure Storage Account SAS Token


 


IMPORTANT: When you add the SAS URL to the variable, you will need to change every % to %% because of how Power Automate Desktop names variables (it treats %…% as a variable reference).
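Because this escaping is easy to get wrong by hand, here is a minimal sketch of the substitution; the account name and token below are made up:

```python
# Sketch: escape a Blob SAS URL before pasting it into a Power Automate Desktop
# variable, where a literal percent sign must be written as %%.
def escape_for_pad(sas_url: str) -> str:
    return sas_url.replace("%", "%%")

# URL-encoded characters such as %3A become %%3A (account and token are made up):
print(escape_for_pad("https://myaccount.blob.core.windows.net/backup?se=2021-06-01T00%3A00%3A00Z&sp=acw"))
```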


 


Since we want to use the AzCopy utility to copy the files to Azure Blob storage, you can now add the “Run PowerShell script” action with the following PowerShell code:


 


 

%AzCopy% copy "%UploadFolder%" "%AzureBlobSAS%" --recursive=true
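For reference, the call this action makes can be sketched as a plain script outside the flow; the executable path, folder, and SAS URL below are placeholders standing in for the %AzCopy%, %UploadFolder%, and %AzureBlobSAS% variables:

```python
import subprocess  # only needed if you actually launch AzCopy

# Sketch of the command the flow runs: azcopy copy <source> <dest-with-SAS> --recursive=true
def build_azcopy_cmd(azcopy_path: str, upload_folder: str, blob_sas_url: str) -> list:
    return [azcopy_path, "copy", upload_folder, blob_sas_url, "--recursive=true"]

cmd = build_azcopy_cmd(r"C:\Tools\azcopy.exe", r"C:\Upload", "https://myaccount.blob.core.windows.net/backup?sv=...")
# subprocess.run(cmd, check=True)  # uncomment to perform the upload for real
```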

 


 


Run PowerShell script


 


With the last step, we are going to move the uploaded files to another folder.


Move Files


 


Conclusion


I hope this blog post provides you with a quick overview of how you can upload files to an Azure Blob storage account using Power Automate. There are of course other ways on how to do this but the great thing here is that you can easily upload large files and add more actions to your Power Automate Desktop Flow. If you have any questions, feel free to leave a comment below.

Community sample: Engage your users with SharePoint stories/reels



Wouldn’t it be cool to engage your Modern Workplace users with content appearing like in your favourite social network? In my latest community sample, I built an SPFx webpart to do exactly that. Here is how I did it, but first, this is how it looks:


 

SharePoint stories webpart


What we need first is a SharePoint list that will contain all the “story images”, with the author of each story, and some text if we want to show the “see more” option. The list will look something like this:


SP List


 

Now it is time to code our SPFx webpart.


Before starting: for all the UI work, I am using an existing open-source React component called “react-insta-stories”, which you can find in its GitHub repository. This component does most of the hard work with the image slides and so on. In its simplest form, the component just needs an array of images:


 

react package


 


But you can also specify an array of Story objects, where a Story can have the following properties:


 

Story object properties


 


Now that we know how to use the Stories component, the webpart functionality is quite easy. We just need to get the Stories information from the SharePoint list, and compose the proper Stories array.


As usual when developing SPFx webparts, the webpart itself just loads a React component, passing the information that we need. In this case, for simplicity, I am passing the entire WebPartContext object, but try to avoid this practice and only pass what you need.


This is the main code in the Render webpart method:


 

SPFx Webpart render


 


Once in the main React component, we call the SharePoint REST API to get the stories from the list. To do so, I am using the endpoint:


 

/_api/web/lists/GetByTitle('Stories')/RenderListDataAsStream

 


This endpoint gives me the image URL in the format that I need (but you can likely do the same with other endpoints, or by using the PnP JS library). The code to do so is:


 

componentDidMount
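As an aside, the endpoint shape generalizes to any site and list; a minimal, language-neutral sketch (the site URL is made up, and inside the webpart the call actually goes through SPHttpClient):

```python
# Sketch: build the RenderListDataAsStream endpoint for an arbitrary site and list.
# The site URL here is an assumption for illustration; 'Stories' matches the list above.
def render_list_data_endpoint(site_url: str, list_title: str) -> str:
    return f"{site_url}/_api/web/lists/GetByTitle('{list_title}')/RenderListDataAsStream"

print(render_list_data_endpoint("https://contoso.sharepoint.com/sites/intranet", "Stories"))
```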


 


The method “_getStoryFromListItem” creates a Story object for the “react-insta-stories” component, and here we have an interesting challenge. The Story object has a Header property aimed at rendering the Story author information: you just provide the profile image and a couple of texts for the heading and subheading. Although we could get the author’s profile image, username, and email using the Graph API, it is much easier to make use of the MS Graph Toolkit (MGT) library and its Person component. In order to render the MGT Person component, we cannot use the Story Header property; however, the Story object allows us to specify a custom render function for the entire Story, and in that function we can use the Person component. This is the relevant code to achieve it:


 

Story custom render function


 


The storyRenderer function is the one responsible for rendering the Story, and there we use the MGT Person component. As you can see in the code above, we also use a React higher-order component called WithSeeMore; this component is from the react-insta-stories library and is the way to load a specific text when the “See more” link is clicked in the Story. So, if the list item has the Content field filled, we set the “seeMore” property of the Story object. This property is again a function, so you can customize how the content is rendered.


 


And that’s all! You can get the full code sample in the PnP GitHub repository.


 


Cheers!

Troubleshooting Node down Scenarios in Azure Service Fabric – Part I



A node may go down for several reasons. Below are the probable causes of nodes going down in a Service Fabric cluster.


 


Scenario#1:


Check whether the virtual machine associated with the node exists, or has been deleted or deallocated.


Azure Portal-> VMSS Resource -> Instances




If the virtual machine doesn’t exist, then perform either of the steps below to remove the node state from the Service Fabric cluster.


From SFX:



  • Go to the service fabric explorer of the cluster.

  • Check the Advanced mode setting check box on the cluster:





  • Then click on the ellipsis (…) of the down node to see the “Remove node state” option and click on it. This should remove the node state from the cluster.


 


From PS Command:


PS cmd: Remove-ServiceFabricNodeState -NodeName _node_5 -Force


Reference: https://docs.microsoft.com/en-us/powershell/module/servicefabric/remove-servicefabricnodestate?view=azureservicefabricps


 


Scenario#2:


Check if the virtual machine associated with the node is healthy in VMSS.


Go to Azure Portal-> VMSS Resource -> Instances -> Click on the Instance -> Properties




If the Virtual Machine Guest Agent is “Not Ready”, then reach out to the Azure VM team for the RCA.


 


Possible Mitigation:



  • Restart the Virtual machine from VMSS blade.

  • Re-image the Virtual Machine.


 


Scenario#3:


Check the performance of the virtual machine, such as CPU and memory usage.




 


If CPU or memory usage is high, Fabric-related processes will not be able to start instances, causing the node to go down.


 


Mitigation:



  • Check which process is consuming high CPU/Memory from the Task Manager to investigate the root cause and fix the issue permanently.


Collect dumps using one of the tools below to determine the root cause:


DebugDiag:


Download Debug Diagnostic Tool v2 Update 3 from Official Microsoft Download Center


 


(or) Procdump:


ProcDump – Windows Sysinternals | Microsoft Docs



  • Restart the Virtual machine from VMSS blade.


 


Scenario#4:


Check the disk usage of the virtual machine; running out of disk space can lead to node-down issues.


For disk space related issues, we recommend using the ‘WinDirStat’ tool mentioned in this article: https://github.com/Azure/Service-Fabric-Troubleshooting-Guides/blob/master/Cluster/Out%20of%20Diskspace.md to understand which folders are consuming the most space.
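If installing a tool on the node is not an option, a rough cross-platform stand-in can be scripted; the Service Fabric data root path in the comment below is only a typical example, not a guaranteed location:

```python
import os

# Rough WinDirStat stand-in: sum file sizes under a folder tree.
def folder_size(path: str) -> int:
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file removed or inaccessible while scanning
    return total

# for entry in os.scandir(r"D:\SvcFab"):  # typical Service Fabric data root
#     if entry.is_dir():
#         print(entry.path, folder_size(entry.path))
```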


 


Mitigation:


Free up the space to bring the Node Up.


 

Modern landing page for Visio on Office.com



We are pleased to announce the launch of the new Visio start page on Office.com, providing Visio users with a familiar start experience that is similar to other Office 365 apps and powerful Office features, like improved file discovery, improved collaboration features, and better template categorization.


 


From the new landing page, you will experience the benefits of the Office.com ecosystem combined with unique features of the existing Visio landing page. Below are additional details on the latest enhancements:


 



  • Improved collaboration: The new design will allow you to see recommended files with actions from your teammates like, “Anne edited this Thursday at 9:10 pm.” You can easily open files that have been recently edited, accessed, and reviewed by your colleagues and quickly pick up where your colleagues left off.




 


 



  • Better discovery of existing files: With the addition of My recent and Shared files, you can access your frequently used Visio files easily. There is also an option to add your files to the Favorites section for quick access by clicking on the star icon adjacent to the file.




  



  • Better categorization of templates and sample diagrams: The new experience provides a set of templates at the top of the start page to quickly create new Visio diagrams.




 


To access the full set of templates and sample diagrams, click on More templates, which will direct you to the “More templates” page. Here, you will see the vast repertoire of Visio’s templates and sample diagrams. Navigate to the desired template and click on the Create button to design your new Visio diagram quickly.




  



  • New search experience: The new search experience will enable you to search quickly through Visio’s rich library of templates and sample diagrams using relevant search terms to help facilitate quicker file creation.


 


 




 


 



  • Overall performance improvements: With the new landing page, you will also experience vastly improved app performance when opening existing or creating new Visio files, reduced time to create new files and so on.


 


How to access the new landing page


If you have a Visio Plan 1 or Visio Plan 2 subscription, you can access the new landing page via any of the below entry points:


 



  • Click on the waffle menu in the top left. Then, click on All apps and search for “Visio.”




 


 


 



  • Search for “Visio” in the universal search box at the top of the page and click on the Visio icon under the Apps header of the search results dropdown.




 


 


 



  • Click on the All apps icon in the left navigation bar. Then, click on the Visio app tile under the Office 365 tab.




 


This experience will be rolling out gradually to our users, so stay tuned to experience the new start page soon!


 


New announcements are coming your way shortly, so keep checking the Visio Tech Community blog and please continue to send us your product feedback and ideas through UserVoice.

What's up with Markdown?



What’s Up with Markdown?




Perhaps you’ve noticed a technology called Markdown that’s been showing up in a lot of web sites and apps lately. This article will explain Markdown and help you get started reading and writing it.


 


Markdown is a simple way to format text using ordinary punctuation marks, and it’s very useful in Microsoft 365. For example, Microsoft Teams supports Markdown formatting in chat messages, and SharePoint has a Markdown web part. Adaptive Cards support Markdown as well, as do Power Automate approvals. For the bot builders among us, Bot Composer language generation and QnA Maker both support it too. And what’s at the top level of nearly every GitHub repo? You guessed it: a Markdown file called README.md.


 


Designed to be intuitive


Imagine you’re texting someone and all you have to work with is letters, numbers, and a few punctuation marks. If you want to get their attention, you might use **asterisks**, right? If you’ve ever done that, then you were already using Markdown! Double asterisks make the text bold.


 


Now imagine you’re replying to an email and want to quote what someone said earlier in the thread. Many people use a little greater-than sign like this:


 



Parker said,
> Sharing is caring


Guess what, that’s Markdown too! When it’s displayed, it looks like this:


 


Parker said,



Sharing is caring



Did you ever make a little table with just text characters, like this?


 



Alpha | Beta | Gamma
------|------|------
1 | 2 | 3


If so, you already know how to make a table in Markdown!


 















Alpha Beta Gamma
1 2 3


Markdown was designed to be intuitive. Where possible, it uses the formatting clues people type naturally. So you can type something _in italics_ on the screen and it actually appears in italics.


In all cases you’re turning plain text – the stuff that comes out of your keyboard and is edited with Notepad or Visual Studio Code – into something richer. (Spoiler alert: it’s HTML.)


 



What about emojis? :smile: Markdown neither helps nor blocks emojis, they’re just characters. If your application can handle emojis, you can certainly include them in your markdown.



Commonly used Markdown


Markdown isn’t a formal standard, and a lot of variations have emerged. It all started at Daring Fireball; most implementations are faithful to the original, but many have added their own features. For example, the SharePoint Markdown web part uses the “Marked” syntax; if you’re creating a README.md file for use on GitHub, you’ll want to use GitHub Flavored Markdown (GFM).


 


This article will stick to the most commonly used features that are likely to be widely supported. Each section will show an example of some markdown and then the finished rendering (which, again, may vary depending on what application you’re using).


 


Each of the following sections shows an example of some simple Markdown, followed by the formatted result.


 


1. Emphasizing Text


 


Markdown:


You can surround text with *single asterisks* or _single underscores_ to emphasize it a little bit;
this is usually formatted using italics.

You can surround text with **double asterisks** or __double underscores__ to emphasize it more strongly;
this is usually formatted using bold text.



 

Result:

You can surround text with single asterisks or single underscores to emphasize it a little bit; this is usually formatted using italics.


You can surround text with double asterisks or double underscores to emphasize it more strongly; this is usually formatted using bold text.


 


2. Headings


You can make headings by putting several = signs (for a level 1 heading) or - signs (for a level 2 heading) on the line below your heading text.


 


Markdown:


My Heading
==========


 

Result:

My Heading


You can also make headings with one or more hash marks in column 1. The number of hash marks controls the level of the heading.


Markdown:


# First level heading
## Second level heading
### Third level heading
etc.


 

Result:

First level heading


Second level heading


Third level heading


etc.


 


3. Hyperlinks


 


Markdown:


To make a hyperlink, surround the text in square brackets
immediately followed by the URL in parentheses (with no space in
between!) For example:
[Microsoft](https://www.microsoft.com).


 

Result:

To make a hyperlink, surround the text in square brackets immediately followed by the URL in parentheses (with no space in between!) For example: Microsoft.


 


4. Images


Images use almost the same syntax as hyperlinks except they begin with an exclamation point. In this case the “alt” text is in square brackets and the image URL is in parentheses, with no spaces in between.


 


Markdown:


![Parker the Porcupine](https://pnp.github.io/images/hero-parker-p-800.png)


Result:

 


In case you were wondering, you can combine this with the hyperlink like this:


 


Markdown:


[![Parker the Porcupine](https://pnp.github.io/images/hero-parker-p-800.png)](http://pnp.github.io)


 

Result:



 


5. Paragraphs and line breaks


 


Markdown:


Markdown will
automatically
remove
single line breaks.

Two line breaks start a new paragraph.



 

Result:

Markdown will automatically remove single line breaks.


Two line breaks start a new paragraph.


 


6. Block quotes


Markdown:


Use a greater than sign in column 1 to make block quotes like this:

> Line 1
> Line 2



 

Result:

Use a greater than sign in column 1 to make block quotes like this:



Line 1 Line 2



 


7. Bullet lists


Markdown:


Just put an asterisk or dash in front of a line that should be bulleted.

* Here is an item starting with an asterisk
* Here is another item starting with an asterisk
  * Indent to make sub-bullets
    * Like this
- Here is an item with a dash
  - Changing characters makes a new list.



 

Result:

Just put an asterisk or dash in front of a line that should be bulleted.



  • Here is an item starting with an asterisk

  • Here is another item starting with an asterisk

    • Indent to make sub-bullets

      • Like this







  • Here is an item with a dash

    • Changing characters makes a new list.




 


8. Numbered lists


Markdown:


1. Beginning a line with a number makes it a list item.
1. You don’t need to put a specific number; Markdown will renumber for you
8. This is handy if you move items around
   1. Don’t forget you can indent to get sub-items
      1. Or sub-sub-items
1. Another item


 

Result:


  1. Beginning a line with a number makes it a list item.

  2. You don’t need to put a specific number; Markdown will renumber for you

  3. This is handy if you move items around

    1. Don’t forget you can indent to get sub-items

      1. Or sub-sub-items





  4. Another item


 


9. Code samples


Many markdown implementations know how to format code by language. (This article was written in Markdown and made extensive use of this feature using “markdown” as the language!) For example to show some HTML:


 


Markdown:


~~~html
<button type=button>Do not push this button</button>
~~~


 

Result:

<button type=button>Do not push this button</button>


 


10. Tables


Tables are not universally supported but they’re so useful they had to be part of this article. Here is a simple table. Separate columns with pipe characters, and don’t worry about making things line up; Markdown will handle that part for you.


Markdown:


Column 1 | Column 2 | Column 3
---|---|---
Value 1a | Value 2a | Value 3a
Value 1b | Value 2b | Value 3b


 

Result:





















Column 1 Column 2 Column 3
Value 1a Value 2a Value 3a
Value 1b Value 2b Value 3b

HTML and Markdown


Markdown doesn’t create any old formatted text – it specifically creates HTML. In fact, it was designed as a shorthand for HTML that is easier for humans to read and write.


Many Markdown implementations allow you to insert HTML directly into the middle of your Markdown; this may be limited to certain HTML tags depending on the application. So if you know HTML and you’re not sure how to format something in Markdown, try including the HTML directly!


Editing Markdown


If you’d like to play with Markdown right now, you might like to try the Markdown Previewer where you can type and preview Markdown using any web browser.


For more serious editing, Visual Studio Code does a great job, and has a built-in preview facility. Check the VS Code Markdown documentation for details.


 


There’s a whole ecosystem of tools around Markdown including converters for Microsoft Word and stand-alone editing apps; these are really too numerous to list but are easy to find by searching the web.


Legacy


From vinyl records to 8-bit games and static web sites, there’s a trend these days to rediscover simpler technologies from the past. Markdown definitely falls into this category.


Back before “WYSIWYG” (What You See Is What You Get) word processors were cheap and pervasive, there were “runoff” utilities that were very much like Markdown. They turned text files into nicely formatted printed documents (usually Postscript). Markdown harkens back to these legacy tools, but adds HTML compatibility and an intuitive syntax.


Conclusion


While it may seem unfamiliar at first, Markdown is intended to make it easy for people to read and write HTML. Whether you’re a power user, IT admin, or developer, you’re bound to run into Markdown sooner or later. Here’s hoping this article makes it a little easier to get started!

Azure Cognitive Search performance: Setting yourself up for success



Performance tuning is often harder than it should be. To help make this task a little easier, the Azure Cognitive Search team recently released new benchmarks, documentation, and a solution that you can use to bootstrap your own performance tests. Together, these additions will give you a deeper understanding of performance factors, how you can meet your scalability and latency requirements, and help set you up for success in the long term.


 


The goal of this blog post is to give you an overview of performance in Azure Cognitive Search and to point you to resources so you can explore the concept more deeply. We’ll walk through some of the key factors that determine performance in Azure Cognitive Search, show you some performance benchmarks and how you can run your own performance tests, and ultimately provide some tips on how you can diagnose and fix performance issues you might be experiencing.


 


Key Performance Factors in Azure Cognitive Search


First, it’s important to understand the factors that impact performance. We outline these factors in more depth in this article but at a high level, these factors can be broken down into three categories:



It’s also important to know that both queries and indexing operations compete for the same resources on your search service. Search services are heavily read-optimized to enable fast retrieval of documents. The bias towards query workloads makes indexing more computationally expensive. As a result, a high indexing load will limit the query capacity of your service.


 


Performance benchmarks


While every scenario is different and we always recommend running your own performance tests (see the next section), it’s helpful to have a benchmark for the performance you can expect. We have created two sets of performance benchmarks that represent realistic workloads that can help you understand how Cognitive Search might work in your scenario.


 


These benchmarks cover two common scenarios we see from our customers:



  • E-commerce search – this benchmark is based on a real customer, CDON, the Nordic region’s largest online marketplace

  • Document search – this benchmark is based on queries against the Semantic Scholar dataset


The benchmarks will show you the range of performance you might expect based on your scenario, search service tier, and the number of replicas/partitions you have. For example, in the document search scenario which included 22 GB of documents, the maximum queries per second (QPS) we saw for different configurations of an S1 can be seen in the graph below:




 


 


As you can see, the maximum QPS achieved tends to scale linearly with the number of replicas. In this case, there was enough data that adding an additional partition significantly improved the maximum QPS as well.
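As a back-of-envelope illustration of that linear relationship (the per-replica figure below is entirely hypothetical; measure your own single-replica baseline before relying on numbers like this):

```python
# Illustrative only: the benchmark observation "max QPS scales roughly linearly
# with replica count" as a simple model. The per-replica QPS is a made-up placeholder.
def estimated_max_qps(replicas: int, per_replica_qps: float) -> float:
    return replicas * per_replica_qps

print(estimated_max_qps(3, 100.0))  # with a hypothetical 100 QPS per replica
```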


 


You can see more details on this and other tests in the performance benchmarks document.


 


Running your own performance tests


Above all, it’s important to run your own performance tests to validate that your current setup meets your performance requirements. To make it easier to run your own tests, we created a solution containing all the assets needed for you to run scalable load tests. You can find those assets here: Azure-Samples/azure-search-performance-testing.



The solution assumes you have a search service with data already loaded into the search index. We provide a couple of default test strategies that you can use to run the performance test as well as instructions to help you tailor the test to your needs. The test will send a variety of queries to your search service based on a CSV file containing sample queries and you can tune the query volume based on your production requirements.
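As a rough sketch of how such a CSV of sample queries expands into query traffic (the service name, index name, and API version below are assumptions, and a real request would also need an api-key header):

```python
import csv
import io
import urllib.parse

# Sketch: expand a CSV of sample queries into search request URLs, the way a
# load test might. Names and API version are placeholders for illustration.
def query_urls(csv_text, service, index, api_version="2020-06-30"):
    base = f"https://{service}.search.windows.net/indexes/{index}/docs"
    for row in csv.reader(io.StringIO(csv_text)):
        if row:
            yield f"{base}?api-version={api_version}&search={urllib.parse.quote(row[0])}"

for url in query_urls("red shoes\nlaptop bag\n", "my-service", "products"):
    print(url)
```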



Apache JMeter is used to run the tests giving you access to industry standard tooling and a rich ecosystem of plugins. The solution also leverages Azure DevOps build pipelines and Terraform to run the tests and deploy the necessary infrastructure on demand. With this, you can scale to as many worker nodes as you need so you won’t be limited by the throughput of the performance testing solution.


 


 




 


After running the tests, you’ll have access to rich telemetry on the results. The test results are integrated with Azure DevOps and you can also download a dashboard from JMeter that allows you to see a range of statistics and graphs on the test results:




 


 


Improving performance


If you find your current levels of performance aren’t meeting your needs, there are several ways to improve them. The first step is understanding why your service isn’t performing as you expect. By turning on diagnostic logging, you can gain access to a rich set of telemetry about your search service—this is the same telemetry that Microsoft Azure engineers use to diagnose performance issues. Once you have diagnostic logs available, there is step-by-step documentation on how to analyze your performance.


 


Finally, you can check out the tips for better performance to see if there are any areas you can improve on.


 


If you’re still not seeing the performance you expect, feel free to reach out to us at azuresearch_contact@microsoft.com.

Build fast, scalable data system on Azure SQL Database Hyperscale | Clearent



How to build a fast, scalable data system on Azure SQL Database Hyperscale. Hyperscale’s flexible architecture scales with the pace of your business to process large amounts of data with a small amount of compute in just minutes, and allows you to back up data almost instantaneously.


 




 


Zach Fransen, VP of data and AI at Xplor, joins Jeremy Chapman to share how credit card processing firm Clearent by Xplor built a fast, scalable merchant transaction reporting system on Azure SQL Database Hyperscale. Take a deep dive into their Hyperscale implementation, from their micro-batching approach for continuously bringing in billions of rows of transactional data from their on-premises payment fulfillment system at scale, to their optimizations for near real-time query performance using clustered columnstore indexing for data aggregation.


 


 


Consolidate data to one source with Azure SQL Managed Instance | Komatsu



Modernize your existing data at scale, and solve for operational efficiency with Azure SQL Managed Instance. Azure SQL MI is an intelligent, scalable cloud database service and a fully managed SQL Server.


 




 


Nipun Sharma, lead data architect, joins Jeremy Chapman to share how the Australian subsidiary of large equipment manufacturer, Komatsu, built a scalable and proactive sales and inventory management and customer servicing model on top of Azure SQL Managed Instance to consolidate their legacy data estate on-premises. See what they did to expand their operational visibility and time to insights, including self-service reporting through integration with Power BI.


 



 







New LTS release for Azure IoT Hub SDK for .NET


The .NET Azure IoT Hub SDK team has released the latest LTS (Long-Term Support) version of the device and service SDKs for .NET. This LTS version, tagged lts_2021-3-18, adds bug fixes, improvements, and new features over the previous LTS (lts_2020-9-23), such as:


– Handle twin failures using AMQP.

– Make DeviceClient and ModuleClient extensible.

– Install the device chain certificates using the SDK.

– Make the DPS class ClientWebSocketChannel disposable.

– Use CultureInvariant when validating device connection string values.

– Reduce the memory footprint of CertificateInstaller.

– Add an API to set a callback for receiving cloud-to-device (C2D) messages.

– Make the desired-property update callback setter thread safe.

– Add support for disabling callbacks for properties and methods.

– Expose the DTDL model ID property for IoT Plug and Play (PnP) devices.

– Make the payload in the invoke-command API optional.

– Add APIs to get the attestation mechanism.

– Improve logging to note when the no-retry policy is enabled in the MQTT/AMQP/HTTP transport layers, in the HttpRegistryManager, and in the AmqpServiceClient.


 


For a detailed list of features and bug fixes, see the comparison with the previous LTS: Comparing lts_2020-9-23…lts_2021-3-18 · Azure/azure-iot-sdk-csharp (github.com)


The following NuGet versions have been marked as LTS.



  • Microsoft.Azure.Devices: 1.31.0

  • Microsoft.Azure.Devices.Client: 1.36.0

  • Microsoft.Azure.Devices.Shared: 1.27.0

  • Microsoft.Azure.Devices.Provisioning.Client: 1.16.3

  • Microsoft.Azure.Devices.Provisioning.Transport.Amqp: 1.13.4

  • Microsoft.Azure.Devices.Provisioning.Transport.Http: 1.12.3

  • Microsoft.Azure.Devices.Provisioning.Transport.Mqtt: 1.14.0

  • Microsoft.Azure.Devices.Provisioning.Security.Tpm: 1.12.3

  • Microsoft.Azure.Devices.Provisioning.Service: 1.16.3
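
To stay on the LTS train, it helps to pin these exact versions in your project file rather than floating to the latest release. As a sketch, the package references in a .csproj might look like the following, using a few of the versions listed above (include only the packages your project actually uses):

```xml
<ItemGroup>
  <!-- Pinned to the LTS versions listed above -->
  <PackageReference Include="Microsoft.Azure.Devices.Client" Version="1.36.0" />
  <PackageReference Include="Microsoft.Azure.Devices" Version="1.31.0" />
  <PackageReference Include="Microsoft.Azure.Devices.Shared" Version="1.27.0" />
</ItemGroup>
```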


 


More detail on the LTS 2021-03-18 version can be found here.


 


Enjoy this new LTS version.


Eric for the Azure IoT .NET Managed SDK team

How to use Power Platform’s advanced data backend for all your apps | Dataverse


Take a closer look at Microsoft Dataverse, a managed service that securely shapes, stores, and manages any data across your business apps, from ERP systems to user-generated Power Apps. Dataverse empowers citizen developers to quickly develop apps at scale, and pro developers to easily create apps that interoperate across multiple systems. Marc Mercuri, engineering lead, joins Jeremy Chapman to show how Dataverse is a great data backend for any app and any developer, without the need to architect, implement, or manage it.


 




 


Dataverse for Microsoft Teams:



  • Provides the data layer behind the Power Apps integration with Teams.

  • As you build data-rich apps, it places text and file-based data in the right data store.

  • Great for everyday, no-code apps built in Teams.

  • Scales to a million rows, or 2GB.


 


Dataverse in Power Apps Studio:



  • Works as the backend service that powers Power Apps, Power Automate, Power Virtual Agents and Power BI.

  • More capacity, control and capabilities.

  • Built on additional enterprise-grade Azure services for larger scale, data integration, relevance search, offline support and more granular security.

  • Scales to 4TB or more.


 


Dataverse Pro-Dev:



  • Easily bring in existing data for an app.

  • Use virtual tables to directly call your remote data without importing or moving it.

  • Greater control and security options.

  • Build search into your apps and even into your processes or bots.


 


 






QUICK LINKS:


 


00:45 — What is Dataverse?


02:17 — How to get Dataverse for Teams


03:57 — How it’s built: Behind the scenes


05:05 — Dataverse in Power Apps Studio


06:17 — Additional options: Business rules


07:15 — Bring in existing data


08:45 — How to integrate Search


10:42 — Dataverse Pro-Dev


12:18 — Wrap up: How to get started


 


Link References:


For guidance and tutorials, go to https://aka.ms/DataverseMechanics


See controls for governance and compliance in action at https://aka.ms/PowerPlatformGovernance


Learn more about virtual tables at https://aka.ms/virtualtablemechanics


Get plugins and code samples at https://aka.ms/DataversePlugIns


 


Unfamiliar with Microsoft Mechanics?


We are Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.



 


Keep getting this insider knowledge, join us on social:











– Up next, we’re joined by engineering lead Marc Mercuri to take a closer look at Microsoft’s Dataverse service, which provides a scalable and secure managed data layer, from user-generated Power Apps built in Teams all the way through to business systems built on the Power Platform. So Marc, welcome back to the show.


 


– Thanks, it’s great to be back.


 


– Thanks for joining us today. So Charles Lamanna was on a few months back, and he recently introduced Dataverse for Teams, which greatly simplifies the maker experience by taking care of the data backend for your apps. But that said, I think a lot of people might be surprised to know that Dataverse is a broader service. It’s a full service in its own right; you can upgrade to it from Dataverse for Teams, and it can also be used independent of Teams with the Power Platform as your apps scale out.


 


– That’s right. Dataverse is a managed service for your data that supports any app and any developer, whether you’re a citizen developer, a little bit more seasoned, or a pro dev. It removes the complexity of managing your data backend, so you can focus just on building your app. Now as you pointed out, there really are two flavors of Dataverse. First, Dataverse for Teams. Here, as you build data-rich apps, it knows to place your text and file-based data in the right store. You don’t need to think about it. It scales to a million rows or two GB, and it’s great for no-code apps that might be built in Teams. Second, if you want more capacity, control or capabilities (what we call the three C’s), we have the Dataverse service that works as the backend and sits under the Power Platform for Power Apps, Power Automate and Power Virtual Agents. This is built on additional enterprise-grade Azure services under the covers for data integration, relevance search, offline support, and more granular security. And that scales to four TB or more. Now, let me tell you a story where this is really powerful. We recently had a customer who wanted to have a mobile app extended to pull data in from a database and also include images. They wanted to add relevance search and have the app call a single API, which they planned to create themselves. Now this required them to work with four different services and different APIs, each with their own scalability, resiliency considerations, and metering. All this extra work was eliminated by using Dataverse, because it uses the same great Azure services behind the scenes and exposes them via a single API. And so they’re only using a single service.


 


– Got it, so this is pretty turnkey data management. Instead of piecing together all the different à la carte components, it’s one cohesive managed service. But can you show us how to get it and the types of things that you can do once you’re using Dataverse?


 


– Sure. You can easily get started with Dataverse by using Dataverse for Teams, which is included as part of Teams enterprise licenses. I’m going to go ahead and build a simple data entry app for invoicing in just a few minutes. All you need to do to get to it in Teams is build a Power App. Now in my case, I already have Power Apps installed in Teams and have it pinned on the left rail. Now earlier I created the Distributors table, and we’ll be using this in a minute. But I’ll create a brand new app by clicking new, and then I’ll click app. And we’ll call it Northwind Invoices. And I’m going to click this button to create a new table. And then we’ll name it the same as the app. This is the visual table editor. You can see it starts out with the default column. I’m going to rename that to Invoice ID. Now there’s a distributor that will be assigned to this invoice. You can see I have all these different data types to choose from. I’ll go ahead and add a lookup to the table named Distributors. Behind the scenes it’s also creating a relationship between my new table and Distributors. So when I click into this column you’ll see the values from the other table. Now I’ll add a column for amount, and we’ll use a decimal data type for that. And finally, we want to track the status. I’ll name this Invoice Status, and I’ll select the field type choice and enter three options. First I’ll do paid, and I’ll associate a green color with it. Then outstanding with yellow, and finally overdue with red. Now let me add a couple of rows of data. I’ll add an invoice ID. Distributor is Woodgrove. Amount of 25,000, and the invoice status equals paid. I’ll add another for Contoso for 100,000 and overdue. I’m going to close my table. Boom, look at that. Not only do I have my table, but it’s automatically bound to the controls in the app. I can see what I just entered into the table and easily create a new record with the new record button. So in just a few minutes I created an app that can handle my invoices right within Teams.


 


– Now that seemed all pretty simple to set up and get all the data entry parts working. But what’s happening then under the covers?


 


– Yeah, behind the scenes when we created the table, it’s building out a table in SQL. Now when I did a lookup, it created a relationship for me between my Northwind Invoices table and the Distributors table automatically, and behind each field there is also an entire security layer for authentication, with identity and access management. Now, if we go back into Teams and click Build, then See all, it’ll take me to my solution explorer. Now I’ll click into the tables and select the Northwind Invoices table I just built. You saw me build four columns, but if I change the view to All, you’ll see both the ones I created and a set of system-generated columns. With these additional columns you can see who created each record, when, when it was last modified, and much more. These columns are very useful when you want to build reports based on activities, like how many records were entered in a time period, or how many did a specific person enter? It also automatically built relationships between different tables. If I wanted to create additional ones, I can do that here: many to one, one to many, or many to many.


 


– All this unlocks all the querying potential in your data, adding metadata that you might not have thought about all automatically. Of course you might be building the next killer app though that gets super popular in your company. So can we take a look at what the broader Dataverse service then would give you?


 


– Sure, and you can easily upgrade to the Dataverse service if that’s the case. To give you an idea of what you can gain, I’ll show you another scenario that’s more advanced than our last one. Here I have an order management system. You can see our app front-end for orders, and behind it there is data for invoices, customers, employees, products, shippers, suppliers, orders, purchase orders, inventory transactions, and more. We also have more relationships behind the scenes. In fact, there are 152. For what we’re trying to do, you can see that the Dataverse service in this case is the right choice. This is because, going back to our three C’s, from a capacity perspective, we have a lot of images and we’re targeting more than a million orders, and so this exceeds the capacity of Dataverse for Teams. Then from a capability perspective, we want to add things like business rules and search capabilities for a bot that we’re building. And we also want to be able to run the app outside of Teams. And for control, we’re taking advantage of fine-grained security controls like custom role-based access to manage data.


 


– On that last point, if you want to learn more about what controls are available for security and governance with Dataverse and also the broader Power Platform, we’ve covered that on our recent shows with Charles Lamanna and also Julie Strauss that you can check out at aka.ms/PowerPlatformGovernance. So can you walk us through though the additional things that you can do with the Dataverse service?


 


– Sure. First thing, going back to the table, you can see we’ve got a lot more options in Dataverse: business rules, forms, dashboards, charts, keys, and data. Before, I showed you columns and relationships, but let’s take a look at business rules. You also want to be able to execute logic against your data, and we have a number of options in Dataverse, with business rules for low-code/no-code, and capabilities like plugins for pro devs. But here let’s build a new business rule. In this case, we’ll create a simple one: if the amount is over 25,000, we’ll make shipping free. Now in the business rules designer, you can see that there is already a condition here. We’ll create a rule called Check Order Amount, and here we’re checking the value of amount due to see if it is greater than 25,000. If it is, we’ll take an action to apply a discount for our shipping, and we’ll set the amount to zero. Then we’ll hit apply, and now there’s a business rule.
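
Business rules run inside Dataverse itself, but the logic of this particular rule is simple enough to express in ordinary code. The Python sketch below only illustrates the rule's behavior for readers who think in code; it is not how Dataverse executes business rules, and the function and field names are made up for the example:

```python
def apply_shipping_rule(amount_due: float, shipping: float) -> float:
    """Illustrative stand-in for the 'Check Order Amount' business rule:
    if the amount due is over 25,000, shipping becomes free."""
    if amount_due > 25_000:
        return 0.0  # rule condition met: apply the discount, shipping is free
    return shipping  # otherwise the original shipping amount stands

# An order over the threshold gets free shipping; a smaller one does not.
print(apply_shipping_rule(30_000, 150.0))  # 0.0
print(apply_shipping_rule(10_000, 150.0))  # 150.0
```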


 


– Now that logic by the way will be automated and keep running in the future. And you mentioned though that the app has more than a million rows or will have more than a million rows. So how easy is it then to bring in existing data that you might have already into Dataverse?


 


– Well, there are a lot of ways to bring data in, and we’ll start with the most popular way, which is using dataflows. I’ll create a new one and start from blank. I’ll call it Order Import. And there are a ton of options here for both structured and unstructured data from common sources and types. You see file, database, Power Platform, Azure, online services, and others, including things like SharePoint lists. In my case though, I’ll use a CSV. I’ll paste in the path of the file. You’ll see it parse the columns, and everything looks good, so I’ll go ahead and hit transform data. From here, I can edit the table further, but I’ll keep it as is. And if you notice, this is the original Access data set from Northwind Traders from 25 years ago. It still works in our case, so we’ll use it. We’ll go ahead and hit next. If I had an existing table I wanted to import the data into, I could specify it here. In my case, I’ll load it to a new table, give it the name Orders, use the same display name, and then type in a description. It’s found the data types for each column, and for anything it can’t match it will choose text, which can be changed if you want. I’ll keep what’s here and hit next. Here, if the data is continuously refreshed, I can set up polling intervals and define when to start, or I can go ahead and choose specific days and times. In my case, I’ll choose daily at 5:00 PM and hit create. So now the data’s imported, and it will continue refreshing on my defined schedule. And while I showed options for importing, it’s important to know you can also use what’s called virtual tables to directly call your remote data without importing it or moving it. You can learn more about them at aka.ms/virtualtablemechanics.


 


– Okay, so it looks pretty easy then to bring data in and create business rules. But you also mentioned there’s built-in search as part of Dataverse as well. How do you integrate then search as part of an app experience?


 


– So yes, Dataverse lets you tap into Azure Search under the covers, and you can use that to build search directly into your apps, even into your processes or bots. In fact, I’m going to show you search in the context of building a chatbot. To do that, we’ll use Power Virtual Agents. Now in our case, I’m building a simple bot to find people and contact them. The logic behind the bot in this case uses Power Automate, and the data and search will use Dataverse. Let’s hop over to our cloud flow in Power Automate. This starts with our bot input as a trigger, and it’s passing the value into a result variable, which will be used to store our results. And here in the search rows action, I’m passing in my search input, which would be a person’s name. And in our case, we can leave the defaults until we get to the table filter items. We’ll only focus on tables where we can find people, like customer, account, employee and contact. Then we’ll loop through the list of rows that come back from the search, parse each of them, and add them to the value that we’ll be returning to the bot. Now because we’re building a results table, we need to define the column headers for that response. If I add the current row from the search results, these are all the values we want to display in our table: you’ll see search score, full name, email, and telephone. Finally, we’ll pass those values back to our bot. Let’s try it out. I’ll initiate the conversation with help me find someone. It’s going to ask me, what’s the name of the person you’re looking for? And I’m going to type Helen Wilcox. Now it’s running our cloud flow, and it returns two results: Helena Wilcox and Helen Garrett. The score is higher, as you’ll see, for Helena, even though I spelled her name wrong. And from here, I can even use the fields in the table to email or call her. See, if I click here, it’ll even start a Teams call. And that’s how easy it is to include search in chatbots, Power Apps, and even your custom code using a flow.
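
Dataverse’s relevance search does the fuzzy matching and scoring server-side, so the flow never implements any of it. Purely to illustrate why a misspelled query like “Helen Wilcox” can still rank “Helena Wilcox” first, here is a toy scorer using Python’s standard difflib; this is not the actual Azure search algorithm, just a sketch of score-based ranking:

```python
from difflib import SequenceMatcher

def score(query: str, candidate: str) -> float:
    """Toy relevance score: character-level similarity between query and candidate."""
    return SequenceMatcher(None, query.lower(), candidate.lower()).ratio()

people = ["Helen Garrett", "Helena Wilcox"]

# Rank candidates for the slightly misspelled query, best match first.
ranked = sorted(people, key=lambda name: score("Helen Wilcox", name), reverse=True)
print(ranked[0])  # Helena Wilcox scores highest despite the misspelling
```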


 


– Great, so now you’ve shown us how you can actually progress from an app that’s built in Teams to a robust business system and have Dataverse manage the entire backend, in addition to supporting some enterprise capabilities like search, all with low- to zero-code. But how would you then use Dataverse if you’re a professional developer?


 


– So as a professional developer, Dataverse, as I mentioned earlier, gives you a single API to connect to the service. This API can also be extended with your own methods. This is useful, for example, if you’re in the retail industry and you might want to expose a reusable function for something like calculating tax, and you can do this without having to create or host your own API. Dataverse also gives you an event pipeline. You can use what we call plugins to insert custom C# code before and after an operation is performed. And we’ve got some great code samples to help you get started. You can find those at aka.ms/DataversePlugins. Now there are four places in the pipeline where you can use plugins. First, when we receive an API request, validation handlers can throw custom exceptions to reject specific operations, such as rejecting incorrectly formatted information like an email address. Then, before the data operation is executed, pre-operation handlers can apply logic, such as applying a discount based on the properties of an order. Post-operation handlers can modify responses or take action after the operation, such as communicating with the web service for our fulfillment system. And then you have async handlers, which can perform automation after the response is returned; for example, sending an order confirmation to the customer. Everything we talked about happens in the context of a pipeline, but you may want to do some things asynchronously. Dataverse also supports event integration with Service Bus, Event Hubs, and webhooks to integrate with any app you may have. You can trigger Azure Functions, which supports many of the most popular enterprise languages, so you could do that same web service call to our fulfillment system using Java.
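
Dataverse plugins are written in C# against the Dataverse event framework; the Python sketch below is only a schematic of the pipeline ordering described above (validation, pre-operation, the data operation, then post-operation), with made-up handler and field names. Async handlers, which run after the response is returned, are omitted for brevity:

```python
def validate(request):
    # Validation handlers can throw custom exceptions to reject an operation,
    # e.g. an incorrectly formatted email address.
    if "@" not in request.get("email", ""):
        raise ValueError("rejected: incorrectly formatted email address")

def pre_operation(request):
    # Pre-operation handlers apply logic before the data operation runs,
    # e.g. a discount based on properties of the order.
    if request["amount"] > 25_000:
        request["discount"] = 0.10
    return request

def execute(request):
    # Stand-in for the actual data operation Dataverse performs.
    return {"status": "created", **request}

def post_operation(response):
    # Post-operation handlers can modify the response or call another system,
    # e.g. the web service for a fulfillment system.
    response["fulfillment"] = "queued"
    return response

def run_pipeline(request):
    # The four stages run in the order described in the transcript.
    validate(request)
    response = execute(pre_operation(request))
    return post_operation(response)

result = run_pipeline({"email": "buyer@contoso.com", "amount": 30_000})
print(result["status"], result["discount"], result["fulfillment"])  # created 0.1 queued
```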


 


– Awesome stuff. And it really looks like a great data backend really for any app and any developer without the need to architect and implement and manage it. So if you’re new to Dataverse though, what’s the best way to get started?


 


– I would recommend that you start building Power Apps, whether that’s in Teams or right from Power Apps Studio. If your app grows and you started in Teams you can easily upgrade to Dataverse. We’ve also got a ton of guidance and tutorials available for you. You can find those at aka.ms/DataverseMechanics.


 


– Thanks so much for joining us today, Marc. And by the way, keep watching Microsoft Mechanics for all the latest updates. Subscribe, if you haven’t yet. And we’ll see you soon.