Failed to subscribe to storage events for event trigger. Which permission is missing?

This article is contributed. See the original author and article here.

A quick post about an error and its fix. A customer tried to create a trigger activation and it failed.


Error:


Trigger activation failed for Trigger 1.
Failed to subscribe to storage events for event trigger: Trigger 1


 


The customer appeared to have the correct permissions: he was the owner of the workspace and, as a matter of fact, an admin.


He also had the Storage Blob Data Contributor role, so it seemed everything was in place. Even so, when I checked the logs for this failure, I saw a permission error.


 


Here is why:


https://docs.microsoft.com/en-us/azure/data-factory/how-to-create-event-trigger


 


“The integration described in this article depends on Azure Event Grid. Make sure that your subscription is registered with the Event Grid resource provider. For more info, see Resource providers and types. You must be able to do the Microsoft.EventGrid/eventSubscriptions/* action. This action is part of the EventGrid EventSubscription Contributor built-in role.”


Options are:



  • Owner permission on the storage account

  • EventGrid EventSubscription Contributor and Reader on the subscription
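As a sketch of the fix with the Azure CLI (the subscription, resource group, storage account, and assignee below are placeholders, not values from the original case):

```shell
# Make sure the subscription is registered with the Event Grid resource provider
az provider register --namespace Microsoft.EventGrid
az provider show --namespace Microsoft.EventGrid --query registrationState

# Grant the EventGrid EventSubscription Contributor built-in role
# scoped to the storage account (all identifiers are placeholders)
az role assignment create \
  --assignee "user@contoso.com" \
  --role "EventGrid EventSubscription Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```

Once the role assignment propagates, re-activating the trigger should succeed.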


 


That is it!


Liliam, UK Engineer


 


 


 

Dodo Pizza – Let's data take the lead


Dodo Pizza infographic.jpg



Founded in 2011, Russian pizza franchise Dodo Pizza is one of Europe’s fastest-growing restaurant chains. “We are digitally transforming global pizza delivery,” says 
Gleb Lesnikov, Cloud Architect at Dodo Pizza. “From our 600-plus locations, we generate a lot of data every single day. And we need to be able to interactively query that data, to explore it and gather insights. That’s what Azure Data Explorer has allowed us to do—for any kind of data, unstructured or structured. We’re ingesting 1TB per day now. We also spend less time operating the cluster because it’s automatic. It’s a night and day difference.” 


Read more 


 


 

VM Subscription Limits and Regional Capacity


Understanding your subscription’s VM limits and regional VM capacity is important in three main scenarios: 



  1. You are planning to set up a large number of VMs across your labs.

  2. You are planning to use GPUs. 

  3. You need to peer your lab account to a virtual network (VNet), for example, to access a licensing server.


If one of the above scenarios applies to you, we recommend that you open a support ticket to pre-request capacity.  By pre-requesting capacity, you can:



  • Ensure that your Azure subscription’s capacity limit for Azure Lab Services allows for the number of VMs and the VM size that you plan to use in your labs.  All Azure subscriptions have an initial capacity limit that restricts how many VMs you can create inside your labs before you need to request a limit increase.  Read the following article to learn more: Capacity limits in Azure Lab Services. 

  • Create your lab account within a region that has sufficient VM capacity based on the number of VMs and the VM size you plan to use in your labs.  This is especially important if you need to peer your lab account to a VNet because both your lab account and VNet must be located in the same region.  It’s important to pick a region that has sufficient capacity before you set this up. 


In this post, we’ll look closer at the process for ensuring that there is sufficient regional capacity for your labs. 


Problem


When your lab account is peered to a VNet, the location of your lab account and VNet determines the region where your labs are created.  In the lab creation wizard, only VM sizes that have capacity in this region are shown in the list of available sizes.  You may notice that you have the option to show unavailable sizes, but you are prevented from choosing these. 


AvailableSizes.png


You have more flexibility to find available capacity when your lab account is not peered to a VNet.  In this case, Azure Lab Services automatically looks for available VM capacity across all regions in the same geography as the lab account.  However, you still may not be able to choose a VM size if none of the regions have available capacity.  For example, currently in Canada, GPU sizes (e.g. the NV series) are not offered in any region.  As a result, you must create your lab account in a geography that does have GPUs available. 


 


You also can configure a setting called enable location selection (this setting is only available when your lab account is not peered to a VNet).  This setting allows lab creators to choose a different geography from the lab account when they create a lab.  Enabling this option gives lab creators the greatest flexibility to find a region that has available capacity for a VM size. 


 


Regardless of whether you are using VNet peering, you can still run into unexpected capacity issues later, for example when creating additional labs or increasing your lab’s VM pool size. 


Solution


We recommend the following process to ensure that you pick a location that has sufficient capacity before you create your lab account and peer to a VNet: 



  1. Refer to the link below, which shows the VM sizes supported in each region. 


  2. Refer to the following link, which shows the VM series that corresponds to each VM size: 


  3. Open a support ticket to request and reserve VM capacity for your labs.  When you log a support ticket, please include the following information: 

    • Subscription ID 

    • Location/Region 

    • Estimated number of labs 

    • VM size for each lab 

    • Estimated number of VMs in each lab 

    • Brief class description for each lab 
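Before logging the ticket, you can get a rough picture of your current quota and regional availability with the Azure CLI (the region and size below are examples for illustration, not recommendations):

```shell
# Show current vCPU usage against your subscription's quota in a candidate region
az vm list-usage --location eastus --output table

# Check whether a given VM size/series (e.g. NV for GPUs) is offered in that region
az vm list-skus --location eastus --size Standard_NV --output table
```

If the size you need does not appear for a region, pick a different region before creating the lab account and setting up VNet peering.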




If you have any questions on this process, please reach out to us on the forums.


Additional Resources


Refer to the following help topics, which provide more details on how regions/locations are configured for a lab:



Thanks,


Your Azure Lab Services Team

Microsoft 365 & SharePoint PnP Weekly – Episode 103


pnp-weekly-103-promo.png


In this installment of the weekly discussion revolving around the latest news and topics on Microsoft 365, hosts Vesa Juvonen (Microsoft) | @vesajuvonen and Waldek Mastykarz (Microsoft) | @waldekm are joined by Darrel Miller (Microsoft) | @darrel_miller, developer, evangelist, and API architect on the Microsoft Graph (developer experience) team: the team that creates developer tooling (Graph Explorer, the Graph SDK, documentation) and runs the API Review Board that helps other Microsoft 365 teams (approximately 50) expose their APIs in Microsoft Graph consistently.


 


The discussion covers challenges in getting developers to use the APIs, the _v2 property, the evolution of the SDK, Microsoft.Identity.Web.MicrosoftGraph, auto-generated code, API surface quality control, and the Graph “no breaking change” policy.  Microsoft Graph’s fundamental mission is making life easier for developers by rigorously coordinating consistency, non-duplication, and usage of the API surface by both Microsoft and partner developers.  Seventeen recently released articles and videos from Microsoft and the PnP Community are highlighted as well. 


 


This episode was recorded on Monday, November 2, 2020.


 



 


Did we miss your article? Please use the #PnPWeekly hashtag on Twitter to let us know about content you have created. 


 


As always, if you need help on an issue, want to share a discovery, or just want to say “Job well done,” please reach out to Vesa, Waldek, or your PnP Community.


 


Sharing is caring!

How to Create a No Code AI App with Azure Cognitive Services and Power Apps


 


This article explains what Power Platform is and walks through a step-by-step process for creating an application that detects objects in photos using Power Apps and AI Builder. Check out the video below to see the app we will build, which detects different mixed reality headsets, such as HoloLens 1 and 2, augmented reality and virtual reality headsets, and their hand controllers.


 


 



 


 


What is Power Platform?


 


Power Platform is a set of tools, APIs, and SDKs that helps you analyze your data and build automations, applications, and virtual agents, with or without having to write any code.


 


powerPlatform.png


 


 


What are Power Apps?


 


Power Apps is a set of tools that allows you to create applications with a drag and drop UI and easy integration of your data and 3rd party APIs through connectors.


 


A connector is a proxy or a wrapper around an API that allows the underlying service to talk to Microsoft Power Automate, Microsoft Power Apps, and Azure Logic Apps. It provides a way for users to connect their accounts and leverage a set of pre-built actions and triggers to build their apps and workflows. For example, you can use the Twitter connector to get tweet data and visualize it in a dashboard, or use the Twilio connector to send your users text messages, without having to be an expert in the Twitter or Twilio APIs or having to write a line of code.


 


Check out the list of connectors for Power Apps to see all the APIs that are available. Note that the connectors available for Power Automate or Logic Apps might not be the same.
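Under the hood, a custom connector is described by an OpenAPI (Swagger 2.0) definition that tells Power Apps which operations the underlying API exposes. A minimal hypothetical sketch (the host, path, and operation here are invented for illustration):

```json
{
  "swagger": "2.0",
  "info": { "title": "Sample Messaging Connector", "version": "1.0" },
  "host": "api.example.com",
  "basePath": "/",
  "schemes": [ "https" ],
  "paths": {
    "/messages": {
      "post": {
        "operationId": "SendMessage",
        "summary": "Send a text message",
        "parameters": [
          {
            "name": "body",
            "in": "body",
            "schema": {
              "type": "object",
              "properties": { "text": { "type": "string" } }
            }
          }
        ],
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}
```

Each operation in the definition surfaces in Power Apps as an action that can be wired into your app without writing code.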

 


 


What is AI Builder?


 


AI Builder is one of the additional features of Power Apps. With AI Builder, you can add intelligence to your apps even if you have no coding or data science skills.


 


aiBuilderAppView.png


 


 


 


Is AI Builder the right choice?


 
 


Can I use Power Apps and AI Builder for production?


 


Yes, you can. Like any tool that does things magically, AI Builder in Power Apps comes at a cost. That does not mean you can’t try your ideas out for free.


 


 


What will my production app cost?


 


If you want to go to production with Power Apps, it is a good idea to consider the costs. Thankfully, there is a tool for that: the AI Builder calculator lets you input which AI tools you will need and how many users will be accessing your app’s AI features, and gives you an estimated price.


 


aiBuilderCalculate.png


 


What are preview features?


 


 


AIBuilderPreview.png


 


What is Object Detection?


 


AI Builder Object detection is an AI model that you can train to detect objects in pictures. AI models usually require that you provide samples of data to train before you are able to perform predictions. Prebuilt models are pre-trained by using a set of samples that are provided by Microsoft, so they are instantly ready to be used in predictions.


 


testResultSmall.gif


 


 Object detection can detect up to 500 different objects in a single model and supports JPG, PNG, and BMP image formats, as well as photos taken through the Power Apps control.


 


How to try out Object Detection capabilities?


 


You can try out and see how object detection works, before creating any accounts or apps yourself, on the Azure Computer Vision page.


 


seeItinAction.png


 


 


What can you do with Object Detection?


 


  • Object counting and inventory management

  • Brand logo recognition

  • Wildlife animal recognition

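For the object-counting scenario, once a model returns its detections your app simply tallies them. A minimal sketch in Python of that counting step, assuming a hypothetical result shape (a list of tagged bounding boxes, modeled loosely on typical object-detection output; AI Builder's actual schema may differ):

```python
# Sketch: tallying detected objects per tag from a detection result.
# The JSON-like shape of `sample` is an assumption for illustration.
from collections import Counter

def count_objects(detections):
    """Return how many times each tag was detected."""
    return Counter(d["tag"] for d in detections)

sample = [
    {"tag": "HoloLens 2", "box": {"left": 0.1, "top": 0.2, "width": 0.3, "height": 0.3}},
    {"tag": "HoloLens 2", "box": {"left": 0.5, "top": 0.1, "width": 0.2, "height": 0.4}},
    {"tag": "hand controller", "box": {"left": 0.7, "top": 0.6, "width": 0.1, "height": 0.2}},
]

print(count_objects(sample))  # Counter({'HoloLens 2': 2, 'hand controller': 1})
```

In a real Power Apps solution this tallying happens with no-code formulas over the detection results; the sketch just makes the logic explicit.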
 


How to detect objects from images?


 



  • To start creating the AI model for your app, sign in to Power Apps and click AI Builder in the left-hand menu. Select Object Detection from the “Refine Model for your business needs” option.


 


buildAI.png


 


 




  •  Give your new AI model a unique name. Select Common Objects and proceed to the next section.




 


commonObj.png


 


 



  • Name the objects that you are going to detect.


 


 


namedObjects.png


 



  • Upload images that contain the objects you want to detect. To start, you can upload 15 images for each object.


 


imageDetectionFormat.png


 


 



  • Make sure each object has approximately the same number of images tagged. If you have many more examples of one object, the trained model will be more likely to detect that object even when it is not present.

  • Tag your objects by selecting the area each object is in and choosing the name of the object.


 


 


tagging.png


 


 



  • Once you are done, choose Done Tagging, and then Train. The training process will take some time.

  • If you decide not to use an image, or want to clear any tags, you can do so at any time: go back to AI Builder in the left-hand menu, choose your model, and choose Edit.


 


dontUseImage.png


 


 



  • AI Builder will give you a performance score out of 100 and a way to quickly test your model before publishing. You can edit your model and retrain it to improve performance. The next section gives some best practices for improving performance.


 


performance.png


 


How to improve Model performance?