Experiencing Data Access issue in Azure Portal for Many Data Types – 03/21 – Resolved

Final Update: Monday, 21 March 2022 17:00 UTC

We’ve confirmed that all systems are back to normal with no customer impact as of 3/21, 16:30 UTC. Our logs show the incident started on 3/21, 15:40 UTC, and that during the 50 minutes it took to resolve the issue, customers with Application Insights resources in the Australia Southeast region may have experienced intermittent metric data gaps and incorrect alert activation.
  • Root Cause: The failure was due to a backend dependency.
  • Incident Timeline: 50 minutes – 3/21, 15:40 UTC through 3/21, 16:30 UTC
We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Eric Singleton

Get unparalleled value with Dynamics 365 Marketing 2022 release wave 1

I am thrilled to share an overview of the exciting new capabilities coming in the 2022 release wave 1 for Microsoft Dynamics 365 Marketing and, more importantly, the value these capabilities will bring you. We’ve tailored this release to meet, and exceed, the needs of today’s marketers and their customers.

Marketing has always been a team sport and is even more so in today’s landscape.

Marketers are being charged with thinking about the entire customer journey and championing efforts across the organization to elevate end-to-end customer experiences. They’re asked to be more agile in responding to market conditions, to be more efficient in content creation, and to deliver it all with higher quality, greater impact, and proof that they’re delivering business impact. With the upcoming release wave, Dynamics 365 Marketing can help you meet every one of these challenges head-on.

It will give you flexibility like never before: to easily collaborate across people, departments, and applications; to personalize customer interactions across the entire customer journey; and to design custom experiences that happen in the moments that matter, thanks to real-time customer journey orchestration.

Here are a few highlights of some of the exciting new innovations that will be released for general availability over the next few months and what they can mean for business users and marketers alike.

Easily collaborate to leverage the full power of your team

Marketing teams need to respond to market conditions quickly in order to create impactful moments. This has gotten more complicated with hybrid workplace scenarios and the fact that more and more marketers are engaging with colleagues in other departments to land the right engagement across the end-to-end customer journey. To keep these cross-functional teams (and frankly all of us) productive, we must avoid context switching, and instead, keep people in the flow of work.

With this release, we bring Microsoft Teams chat natively into Dynamics 365 Marketing to allow you to easily collaborate with your team without leaving the flow of work. Because each Teams chat is tied to the individual work item, it enables everyone involved with the project to have the same context and increases focus so they can create the highest-impact journeys, emails, and other content more efficiently.

Dynamics 365 Marketing in the email design mode showing the integrated Teams chat window while collaborating on an email banner

For Dynamics 365 Sales and Dynamics 365 Customer Service users, the new unified customer activity timeline will bring cohesiveness across the end-to-end customer journey. The timeline is interwoven with activities from these Dynamics 365 customer engagement apps, so you don’t need to leave your flow of work to view your Dynamics 365 Marketing real-time marketing interactions. With the full picture of engagement history at their fingertips, customer-facing teams can better create impactful journeys, plus easily search or filter to find the relevant interaction history.

And to offer the ultimate in flexibility, in addition to the comprehensive list of out-of-the-box triggers, you can now trigger journeys or measure goals from common customer data changes across any Dynamics 365 customer data. Each interaction, such as a completed application, a renewed contract, or a closed service ticket, represents an opportunity to engage more deeply with your customers. Those interactions can now be used to kick off, drive, and measure customer journeys in just a few clicks, without requiring developers or customizations.

Hyper-personalize experiences with ease

Personalization is no longer a luxury. It is critical to increasing the likelihood that customers will make an initial purchase, repurchase, and recommend your products or services to others. Personalization drives impact: companies with significant growth generally make heavier use of personalized marketing.

One of the first steps in personalization is to create compelling content that will resonate with your customers. To make this easier, we are introducing reusable content fragments to allow your team to quickly assemble beautiful emails with just a few clicks. These content fragments can include layout, advanced elements, and personalization, so there is no limit on the content you can create and save for reuse. With a library of reusable content, you can simply mix and match different pieces to create your messages.

Next, personalize your emails to delight your customers. Personalization within the email editor is also faster and easier than ever before. You can use pre-defined dynamic text to personalize your emails without needing to know the underlying data model.

Dynamics 365 Marketing screen showing easy email personalization with dynamic text options

You can also personalize and customize your emails by creating conditional content with an easy-to-use, no-code experience. Use any mix of personalized text, images, and layouts, and create rules for selecting the right content, using segment membership, attributes, or triggers. For instance, members of the Contoso Winery Red Wine Club will receive the images/text/layout for red wines, while members of the White Wine Club will receive those pertaining to white wines instead. You will also get detailed analytics to help monitor how each variation is performing.

Build personalization into customer-centric journeys that can boost your return on investment (ROI) with the help of AI-powered next best content. In addition to allowing you to specify manual rules for content/offer selections, you can leverage AI-based optimization to tailor content/offers to match the preferences, interests, and motivations of each of your unique customers. After all, compelling content and offers are powerful tools in getting customers to engage or keeping them from churning.

And for Dynamics 365 Customer Insights users, when it comes to getting your message out to the right customers, you can choose to use customer insights data from your own Azure Data Lake Storage to build segments, specify conditions, and personalize messages to build rich, real-time journeys while maintaining full control over the underlying data.

Design custom experiences with flexibility and agility

Today’s customer expectations demand that businesses rise to a new level of customer obsession. Winning and keeping customers is becoming increasingly competitive, and brands must provide the best possible experience when customers engage. This is why we continue to invest in building more and more real-time engagement capabilities into our Dynamics 365 applications.

In the last Dynamics 365 Marketing release, we introduced native support for the SMS providers Twilio and Telesign to enable organizations to connect around the world via SMS. Capabilities in this release let you continue the conversation with your customers by taking action on SMS keyword replies. You will be able to create custom keywords for use in SMS replies that guide the journey based on the response, ensuring flexibility and agility are built into your customer experiences.

Additional capabilities allow you to trigger journeys more selectively based on qualifying conditions, to better drive customers to action, and eliminate the need to create cascading branches or to specify conditions with more than two possibilities. For instance, to re-engage buyers who have abandoned carts, you may want to qualify buyers who have more than $100 in their cart for a journey that offers a $10 coupon incentive to complete their purchase. This empowers your marketing team to engage the right audience in the most impactful way, and to gain flexibility by reusing the same event trigger for a variety of scenarios. These capabilities increase a marketer’s productivity by simplifying not only the journey logic and creation, but also the analysis when journeys are live, and the readability and maintenance of more complex journeys.

And finally, if you can dream it, you can do it. With seamless integration between real-time marketing journey orchestration and Power Automate, you get limitless customization possibilities and fine-grained control of your customer experience by leveraging a wealth of connectors and actions. This allows semi-technical marketers to automate journey steps quickly and easily, leaving deep technical resources free to focus on complex needs.

Leading the next generation of business applications

We are excited to be delivering so much value in the 2022 release wave 1 for Dynamics 365 Marketing: value that will enable you to elevate end-to-end customer experiences, be more agile in responding to market conditions, be more efficient in content creation, and deliver it all with higher quality and greater impact.

The capabilities highlighted above are planned to be released from April 2022 through September 2022. Consult the release notes for the most up-to-date details.  

I’m excited to take you on this journey with us. As always, there will be more to come.

Learn more about Dynamics 365 Marketing

To learn more about how your organization can elevate your customer experiences, visit the Dynamics 365 Marketing webpage and sign up for a free Dynamics 365 Marketing trial to explore real-time customer journey orchestration and the other rich capabilities offered in Dynamics 365 Marketing.

And join us at the Microsoft Business Applications Launch Event on April 6, 2022, for an inside look at what’s new across Dynamics 365 and Microsoft Power Platform.


IE 11 Retirement: What Does This Mean for Microsoft Access Apps?

Greetings, Access community! I had a great question from Klaus Oberdalhoff in the Denver User Group last month, as well as numerous questions from the community about IE 11 and its impending retirement. I wanted to post this to help clear the air on these concerns.


Here is a quick set of Q&A we have put together covering many of the questions we have seen from our MVPs and the community.

Q&A on IE 11 retirement

Q: Is it true that, according to Microsoft, the old Internet Explorer browser, and thus the web browser control, will soon be disabled or disconnected?

A: No, this is not the case. The MSHTML (Trident) engine is the underlying platform for Internet Explorer 11 and is still supported as part of Windows. It is the same engine used by IE mode, and it will continue to be supported (in other words, it is unaffected by this announcement). Both the WebOC and the MSHTA app will continue to be supported, as they rely on the MSHTML engine. If you have a custom or third-party app that relies on the MSHTML platform, you can expect it to continue to work. For future app development, we recommend using WebView2.

We recommend you review this detailed Q&A on the IE 11 Desktop App Retirement.

Q: Does that mean that existing Access applications that use the IE 11 (Trident) web browser control will no longer work?

A: No, this does not mean existing applications will stop working. Existing applications will continue to run using the MSHTML engine, which is still supported.

Q: Does this mean Access applications will be stranded without a new web browser control for Access that uses WebView2 (aka Anaheim)?

A: No. The MSHTML (Trident) engine is still supported until 2029, and eventually we will have a new browser control in place that supports WebView2.

Q: How long do I have before I need to worry about updating my Access applications to support WebView2?

A: You have until 2029, when the MSHTML (Trident) engine is no longer supported. But once we have a new forms control supporting the new WebView2 browser control for Access, we recommend all developers move to it, in line with Microsoft guidelines, to ensure your experience complies with modern web standards and security.

Q: When are you going to have a new Access browser control that supports Edge and WebView2?

A: We are planning and scoping this work now, working with teams across Office Platforms to enable a totally new browser control with WebView2 support. Our hope is to have this done by the end of the 2022 calendar year or the beginning of 2023. When this work is completed, custom and third-party apps will be able to use either the legacy or the new control. Once we release it, we will recommend all apps move to the WebView2 control to ensure the most up-to-date technology and security.

We would love your feedback on what you need and the use cases we should address in our new forms control, as we are refining the specs for this work now.

Please send your feedback with the title “New Browser Control Feature Request” in the description at Microsoft Q&A using the office-access-dev tag.

The Data Lakehouse, the Data Warehouse and a Modern Data platform architecture

I am encountering two overriding themes when talking to data architects today about their data and analytics strategy, themes that sit at practically opposite ends of the discussion about the future design of the data platform.



  1. The Data Lakehouse. The focus here is on how traditional Data Lakes have advanced so that the capabilities previously provided by the Data Warehouse can now be replicated within the Data Lake. The Data Lakehouse approach proposes using data structures and data management features in a data lake that are similar to those previously found in a data warehouse:


Databricks – What is a data lakehouse



  2. Snowflake as your data platform. Snowflake has quickly become a major player in the data warehousing market, using its cloud-native architecture to drive market share. They have now taken this a step further and are pushing the concept of “Make Snowflake Your Data Lake”:


Snowflake for Data Lakes


 


So on one hand, the Data Lakehouse advocates say “There is no longer a need for a relational database, do it all in the data lake”, while Snowflake is saying “Build your data lake in a relational database”. Is there really such a stark divergence of views about how to architect a modern data platform?


 


While both of these architectures have some merit, a number of questions immediately spring to mind. Both are driven by a focus on a single technology, which should immediately ring alarm bells for any architect. Both concepts also bring baggage from the past:



  • the Data Lakehouse pitch feels uncomfortably close to the “Hadoop can do it all” hype of 10 years ago, which led to vast sums being spent by organisations jumping onto the big data bandwagon; they believed the hype and invested huge amounts of money in this wonder platform, only to find that it wasn’t as effective as promised and that many of the problems with the “data warehouse” were actually due to their processes and governance, which were simply replicated in the new technology.

  • some of the Snowflake marketing seems to be morphing into the concepts of the Enterprise Data Warehouse vendors of 20-30 years ago – a single data repository and technology being all you need for all your enterprise data needs – which follows a very legacy logical architecture for a product that so heavily hypes its modern physical architecture.


So how do we make sense of these competing patterns? Why is there such a big disparity between the two approaches, and is there really such a major decision needed between open (Spark/Delta) and proprietary (Snowflake/relational) code bases and repositories? I believe that if you drill into the headline propositions, the reality is that any architecture isn’t an “either/or” but a “better together”, and that a pragmatic approach should be taken. As such, whenever starting any conversation today, I tend to lead with three areas of assessment:



  1. What data do you have and what are your big data, BI and advanced analytical requirements? An organisation that mainly requires machine learning and anomaly detection against semi-structured data needs a very different approach from one with more traditional BI and next-best-action needs driven from structured data. Also consider what works well for your data: if it is mostly structured and sourced from relational systems, why not keep it that way rather than putting it into a semi-structured form in a lake and then layering structures back over the top? Alternatively, for semi-structured or constantly changing data, why force it into a relational environment that wasn’t designed for this type of data and which then requires the data to be exported out to the compute?

  2. What skills base do you have in IT and the business? If your workforce are relational experts with great SQL skills, it could be a big shift for them to become Spark developers; alternatively, if your key resources are teams of data scientists used to working in their tools of choice, they are unlikely to embrace a relational engine and will end up exporting all the data back out into their preferred environments.

  3. Azure – and any modern cloud ecosystem – is extremely flexible: it redefines the way modern compute architectures work by completely disconnecting compute and storage, and provides the ability to build processes that use the right tool for the right job on a pay-for-what-you-use basis. The benefits are huge – workloads can run much faster, more effectively and at massively reduced cost compared to “traditional” architectures – but it requires a real paradigm shift from IT architects and developers to think about using the right technology for the job rather than just following their tried and tested approaches in one technology.


The responses to these three areas, especially 1 and 2, should determine the direction of any data platform architecture for your business. The concepts from item 3 should be front and centre for all architects and data platform decision makers, though, as getting the best from your cloud investment requires new ways of thinking. What surprises me most today is that many people seem reluctant to change their thinking to take advantage of these capabilities – often through a combination of not understanding what is possible, harking back to what they know, and certain technology providers pushing the line of “why do you need this complexity when you can do everything in one (our) tool?”. While using multiple tools and technologies may seem like added complexity if they don’t work well together, the capabilities of a well-integrated ecosystem will usually be easier to use and manage than trying to bend a single technology to do everything.


 


Why does Microsoft propose Azure Synapse Analytics in this area? We believe that this hybrid approach is the right way forward: enabling efficient and effective BI, analytics, ML and AI is possible when all your data assets are connected and managed in a cohesive fashion. A true enterprise data platform architecture enables better decisions and transformative processes, enabling a digital feedback loop within your organization and providing the foundation for successful analytics.

One constant piece of feedback we received from customers, though, was that while building a modern data platform was the right strategy, they wanted it to be easier to implement. IT architects and developers wanted to spend less time worrying about the plumbing – integrating the components, getting them to talk to each other – and more time building the solution.

We therefore set out to rearchitect and create the next generation of query processing and data management with Synapse, to meet the needs of the modern, high scale, volume, velocity, and variety of data workloads. As opposed to limiting customers to one engine, Synapse provides SQL, Spark, and Log Analytics engines within a single integrated development environment: a cloud-native analytics service that converges big data and data warehousing to achieve limitless scale on structured, semi-structured, and unstructured data. Purpose-built engines optimized for different scenarios enable customers to yield more insights faster, with fewer resources and less cost.


Azure Synapse Analytics


 


Azure Synapse Analytics is a limitless analytics service with a unified experience to ingest, explore, prepare, manage and serve data for immediate BI and machine-learning needs. So Azure Synapse Analytics isn’t a single technology, but an integrated combination of the different tools and capabilities you need to build your modern data platform, allowing you to choose the right tool for each job, step or process while removing the complexity of integrating those tools.
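
To make this concrete, here is a minimal sketch of standing up a Synapse workspace anchored on an Azure Data Lake Storage Gen2 account using the Az.Synapse PowerShell module; every resource name below is a hypothetical placeholder, not a value from this article.

# Requires the Az.Synapse module: Install-Module Az.Synapse
# Prompt for the SQL administrator credential for the workspace
$cred = Get-Credential

# Create a Synapse workspace anchored on an ADLS Gen2 account (all names are placeholders)
New-AzSynapseWorkspace -ResourceGroupName "my-analytics-rg" `
    -Name "my-synapse-ws" `
    -Location "westeurope" `
    -DefaultDataLakeStorageAccountName "mydatalakeacct" `
    -DefaultDataLakeStorageFilesystem "workspace" `
    -SqlAdministratorLoginCredential $cred

From there, the workspace surfaces the SQL and Spark engines described above against the same underlying lake storage.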


 


While Synapse can provide this flexible modern data platform architecture in a single service, the concept is open. Synapse provides Spark and dedicated SQL pool engines, but Databricks and Snowflake could equally replace these components within the architecture. Alternatively, any combination of Synapse, other first-party, third-party, or open-source components can be used to create the modern data platform, the vast majority of which are supported within Azure.


 


These individual technologies can be combined within a Modern Data platform architecture to give you the ability to build the right modern data platform for your business. Take advantage of the flexibility of Azure and use the best tools and techniques to construct the most effective data platform for your business.

Assign a built-in role to a user at resource and Resource Group scope using ARM template

This article focuses on creating an ARM template that provisions a storage account in a resource group and assigns a role at both the RG (Resource Group) scope and at the created storage account resource level.


This article is divided into the following five sections:



  1. Fetch User Object ID

  2. Fetch Built-in Role ID

  3. Create ARM template to provision storage account

  4. Role assignment in ARM template

  5. Deploying ARM template to Azure Portal


 


Let’s start step by step, as outlined above. First we will fetch the user object ID, which will be used when deploying the ARM template.



  1. First, let’s fetch the user’s object ID.


Use the following PowerShell script to fetch the user’s object ID by their email ID.


PS Script: Get-AzADUser | Where-Object { $_.UserPrincipalName -eq "testuser@testdomain.xyz.com" }


This will show user details such as DisplayName, Id, Mail, and UserPrincipalName. Grab the Id and save it for further use.
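
For example, here is a minimal sketch that captures the Id in a variable for the deployment step; the UPN is the same placeholder used above.

# Look up the user by UPN and keep just the object ID
$user = Get-AzADUser | Where-Object { $_.UserPrincipalName -eq "testuser@testdomain.xyz.com" }
$userObjectId = $user.Id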


You can also fetch the user object ID from the Azure Portal: navigate to Azure Active Directory > Users > select the user whose ID you want to fetch > copy the Object ID.


 



  2. Similarly, we will fetch the built-in role ID using a PowerShell script. For this article I will fetch the “Reader” role ID, but you can fetch whichever role ID you require.


PS Script: Get-AzRoleDefinition -Name Reader


This script will output some of the role’s details; grab the Id from the output and save it for further use.
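
As a one-line sketch, you can capture the role definition ID directly:

# Fetch the built-in Reader role and keep just its Id
$roleDefinitionId = (Get-AzRoleDefinition -Name Reader).Id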


 



  3. Now it’s time to create the ARM template, which will create the storage account and assign the user the Reader role on the created storage account. We will also assign the user the Reader role on the Resource Group, using scope.


 


Follow the template below for creating the storage account and the role assignments.


Refer to the Microsoft documentation to learn more about ARM template syntax, and for further details on role assignment.


 


{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "AAD_Object_ID": {
            "metadata": {
                "description": "Object ID of the User, Group or Service Principal"
            },
            "type": "string"
        },
        "Role_Definition_ID": {
            "metadata": {
                "description": "Identifier (GUID) of the role definition to map to the principal"
            },
            "type": "string"
        }
    },
    "variables": {
        "fullRoleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', parameters('Role_Definition_ID'))]",
        "StorageAccountName": "shrstrgacc",
        "StorageAccountAssignmentName": "[concat(variables('StorageAccountName'), '/Microsoft.Authorization/', guid(resourceGroup().id, variables('fullRoleDefinitionId')))]"
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2018-07-01",
            "name": "[variables('StorageAccountName')]",
            "comments": "Storage account used to store VM disks",
            "location": "[resourceGroup().location]",
            "sku": {
                "name": "Standard_LRS"
            },
            "kind": "Storage",
            "properties": {}
        },
        {
            "type": "Microsoft.Authorization/roleAssignments",
            "apiVersion": "2017-09-01",
            "name": "[guid(resourceGroup().id, resourceId('Microsoft.Storage/storageAccounts', variables('StorageAccountName')), variables('fullRoleDefinitionId'))]",
            "dependsOn": [
                "[resourceId('Microsoft.Storage/storageAccounts', variables('StorageAccountName'))]"
            ],
            "properties": {
                "roleDefinitionId": "[variables('fullRoleDefinitionId')]",
                "principalId": "[parameters('AAD_Object_ID')]",
                "scope": "[resourceGroup().id]"
            }
        },
        {
            "type": "Microsoft.Storage/storageAccounts/providers/roleAssignments",
            "apiVersion": "2017-05-01",
            "name": "[variables('StorageAccountAssignmentName')]",
            "dependsOn": [
                "[resourceId('Microsoft.Storage/storageAccounts', variables('StorageAccountName'))]"
            ],
            "properties": {
                "roleDefinitionId": "[variables('fullRoleDefinitionId')]",
                "principalId": "[parameters('AAD_Object_ID')]"
            }
        }
    ],
    "outputs": {}
}


 


As you can see from the ARM template above, we take 2 input parameters, “AAD_Object_ID” and “Role_Definition_ID”. To give a brief overview of what these parameters hold: AAD_Object_ID is the user object ID fetched in step 1, and Role_Definition_ID is the built-in Reader role ID fetched in step 2.


 


  4. Role assignment in the ARM template. To drill further down into the ARM template resources, we will be using:


Type: Microsoft.Storage/storageAccounts to provision the storage account with the properties specified in the ARM template

Type: Microsoft.Authorization/roleAssignments to assign the role at the Resource Group scope

Type: Microsoft.Storage/storageAccounts/providers/roleAssignments to assign the role to the storage account resource


Also, save the above template code in a file with a .json extension, for example armtest.json, and copy the file path, as we will need it while deploying to Azure in the final step.
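
Before deploying, you can optionally validate the template first; a quick sketch, assuming the file was saved as armtest.json in the current folder:

# Validate the template and parameter values without actually deploying anything
Test-AzResourceGroupDeployment -ResourceGroupName <your-resource-group-name> `
    -TemplateFile .\armtest.json `
    -AAD_Object_ID <user object Id> `
    -Role_Definition_ID <built-in Reader role Id>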


 



  5. Now it’s time to deploy the ARM template to Azure, using the following script.


# Connect to your Azure account
Connect-AzAccount

# Use the PowerShell command New-AzResourceGroupDeployment; this command deploys Azure resources to the resource group.
# Refer to the Microsoft documentation on deploying using New-AzResourceGroupDeployment.

New-AzResourceGroupDeployment -ResourceGroupName <your-resource-group-name> `
    -TemplateFile <ARMTemplateFilePath> `
    -AAD_Object_ID <user object Id> `
    -Role_Definition_ID <built-in Reader role Id>


 


Note – Pass the copied path of the saved ARM template file to the TemplateFile parameter in the script.


 


Now it’s time to verify the outcome in the Azure Portal.

Woohoo! The storage account is created in the Resource Group specified in New-AzResourceGroupDeployment.
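
If you prefer PowerShell over the portal, here is a quick sketch to confirm the account exists; the resource group placeholder matches the one used in the deployment, and the account name comes from the template’s StorageAccountName variable.

# Confirm the storage account defined in the template now exists
Get-AzStorageAccount -ResourceGroupName <your-resource-group-name> -Name "shrstrgacc"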


 




Fig 1.1: Storage Account created using ARM Template


 


Now, let’s check whether the Reader role for the test user is assigned on the Resource Group.

Navigate to Azure Portal > Resource Group > select the Resource Group you used in the ARM deployment script > Access control (IAM) > Role assignments

Woohoo! We can see the test user has been assigned the Reader role at the Resource Group scope.


 




Fig 1.2: Role Assignment to the Resource Group using ARM Template


 


It’s time to verify the role access at the storage account resource level.

Navigate to Azure Portal > Resource Group > select the Resource Group you used in the ARM deployment script > select the created storage account > Access control (IAM) > Role assignments

Woohoo! At the storage account level we can see the Reader role is assigned to the test user, and the same is inherited from the Resource Group.
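
You can also list the role assignments from PowerShell; a small sketch using the object ID saved in step 1 (placeholders as before):

# List the user's role assignments at Resource Group scope; inherited assignments are shown too
Get-AzRoleAssignment -ObjectId <user object Id> -ResourceGroupName <your-resource-group-name>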


 




Fig 1.3: Role assigned to created storage account using ARM Template


 


I hope this article proves useful to all the Azure enthusiasts looking to assign RBAC roles to users/groups/SPNs/managed identities using an ARM template.


Keep Learning!


Keep Sharing!