This article is contributed. See the original author and article here.
Context
Many ISVs (Independent Software Vendors) exchange information with devices, applications, services, or humans. In many cases the information passed is a file or a blob, and each of these ISVs would need to implement a service to handle that exchange. In the past few months, I discussed this capability with several (more than four) different customers, each with slightly different needs. When I tried to generalize the need, it was clear: they wanted a quick, safe way to exchange files with customers or devices.
So, I tried to translate these asks into user stories:
As a service provider, I need my customers to upload content to a secure, easy-to-maintain microservice exposed as an API (application programming interface) so that I can process the uploaded content and perform an action on it.
As a service provider, I would like to enable the download of specific files for authorized devices or humans so that they can download the content directly from storage.
As a service provider, I would like to offer my customers the ability to see which files they have already uploaded so that they can download them when needed with a time-restricted SAS (shared access signature) token.
Cool, nice start, but if we look at the underlying ask, does it have to be exposed to humans? Why not create a microservice that handles this requirement and delegate the interaction with humans to the application already interacting with users?
The Approach
I decided to use this opportunity to learn Azure Container Apps. For more information on ACA (Azure Container Apps) please review this documentation.
ACA provides significant security benefits (among others) with respect to VNet (Virtual Network) integration. I did consider Azure Functions; however, when comparing the cost of the Azure Functions SKU that supports VNet integration with the potential cost of ACA, ACA would incur lower costs.
While ACA can integrate with a VNet, my initial sample repo does not include it yet. I decided to focus on minimal application and network capabilities to keep things simple.
I also wanted to make sure readers who want to experiment with the code have a quick way to do so. This is why time was spent on creating the Bicep code that spins up the entire solution.
No application settings include secrets; all connection strings and keys are stored in Azure Key Vault, and access to the vault is governed by RBAC (role-based access control), so only specific identities can read it.
I used .NET 6 as the platform, with C# as the language. The secured web API template was my starting point, as it provides most of what is required to create such a service; wrapping it as a container and deploying it to ACA was the additional effort.
The Solution
Here is a diagram of the solution components:
Components
Container App – creates SAS tokens and containers; it also provides a SAS for a given file within a given container.
Storage Account – DMZ (demilitarized zone); all content is considered unsafe.
Container App – verifies content and moves it to verified storage.
Verified storage – content is assumed to be verified and poses minimal or no threat to the organization.
Container Registry – holds the container app images.
Azure Key Vault – holds connection strings and other secured configuration.
App registration for the ACA app.
Azure Active Directory – the initial solution is for single-tenant applications.
With the initial drop, content validation is out-of-scope.
My repo (which will be moved to Azure Samples) also includes a few GitHub Actions that perform the following activities:
Build the image and push to ACR (Azure Container Registry)
Deploy an image to the ACA
Create a release – note that this might not be required by developers using this sample; the action exists so that a prebuilt image is available when the sample environment is provisioned.
Bicep is used to provision all required resources, excluding the AAD (Azure Active Directory) entities and the resource group in which all components would be provisioned.
The best practice is to avoid using the “latest” tag; as a user of ACA, you currently do not have the ability to control the image-pulling trigger, which is the equivalent of “Image Pull Policy” in Kubernetes. Instead, use a unique, autogenerated tag, which can be generated by your CD (Continuous Deployment) pipeline. In the sample repo, the GitHub Action uses the git commit hash as the image tag.
When working locally, you can leverage the settings file, but when working with ACA, I decided to leverage environment variables. My next learning was based on the following question:
How am I going to inject these values into an environment provisioned by the Bicep script?
Well, the answer is to use environment variables. When working locally, you can use the pattern 'AzureAd:Audience'. However, when using ACA, you need to use a slightly different pattern: 'AzureAd__Audience', with the double underscore indicating a section drill-down. (The reason is the operating system: the colon is not a valid character in environment variable names on Linux, so .NET accepts the double underscore as a cross-platform separator.)
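As a quick illustration (a sketch, not code from the repo), the same key is read identically in code regardless of which source supplied it:
// Whether the value comes from appsettings.json ("AzureAd": { "Audience": ... })
// or from an environment variable named AzureAd__Audience,
// it is read with the colon-delimited key.
var audience = builder.Configuration["AzureAd:Audience"];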
Note that it takes a few minutes for changes to be reflected in the GUI (graphical user interface).
.NET 6 : Key Vault integration
Using .NET 6 allows programmers to focus on the application logic they want to create. In some cases this is a double-edged sword, since some of the underlying logging and activity is masked.
For example, when you wish to use a secret from Azure Key Vault, you can access it as if it were part of your configuration, assuming you registered it correctly:
// Program.cs – register Azure Key Vault as an additional configuration source,
// authenticating with the managed identity whose client ID is supplied in configuration.
builder.Configuration.AddAzureKeyVault(
    new Uri($"https://{builder.Configuration["keyvault"]}.vault.azure.net/"),
    new DefaultAzureCredential(new DefaultAzureCredentialOptions
    {
        ManagedIdentityClientId = builder.Configuration["AzureADManagedIdentityClientId"]
    }));
This single statement (split across lines for readability) registers Key Vault as a configuration source, using the managed identity to read it. Note that in many cases a managed identity should require only a subset of the secrets – for further reading and best practices please follow these guidelines.
Once you have done this, accessing a secret from your code looks like the following, where 'storagecs' is a secret configured by the Bicep code.
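A minimal sketch of that access (assuming the secret name 'storagecs' from the Bicep template; the variable name is illustrative):
// The Key Vault secret 'storagecs' surfaces through IConfiguration
// just like any other configuration value.
var storageConnectionString = builder.Configuration["storagecs"];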
Again, it is a single line, and it assumes you have a JSON (JavaScript Object Notation) section in your app settings named 'AzureAd' which contains all the details required to perform authentication and authorization.
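As a sketch, assuming the standard Microsoft.Identity.Web registration produced by the secured web API template, that line looks roughly like this:
// Requires the Microsoft.Identity.Web package.
// Registers JWT bearer authentication for the web API,
// bound to the 'AzureAd' configuration section.
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"));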
Let us unpack the settings in the 'AzureAd' section; it took me some time to understand that without the last two items, the default authentication will fail. The .NET platform checks that the value of the 'audience' claim in the JWT (JSON Web Token) matches the one defined in the registered application.
The last setting tells the platform not to check for any other claims or roles. If you need that type of authorization, it is up to you to implement it; here is an example how-to guide.
GitHub : Releases
One of my initial dilemmas was how to spin up a fully functional environment, which requires an image to be available for pulling when the ACA is provisioned. With the help of Yuval Herziger, I created a GitHub Action that is triggered on a release; it builds a vanilla image of my code and stores it in GHCR (GitHub Container Registry).
Authentication : The right flow
Long story short, unless you know which flow you are trying to implement, you can spend a lot of time with minimal progress. So, choose the right flow. Henrik Westergaard Hansen helped me here: he listened to my use cases and said my flow should be the client credentials flow, as it is service-to-service communication. I cannot emphasize enough how important this is; the moment I understood it, completion took only hours.
This article is contributed. See the original author and article here.
Happy Friday, MTC! It’s 11/11 – time to make a wish – and let’s see what’s been going on in the community this week!
MTC Moments of the Week
Our MTC Member of the Week spotlight is on @Chandrasekhar_Arya for being a rockstar in the Azure forums, both in starting discussions and helping out other MTC’ers! We really appreciate your contributions to the community :)
We also had a brand-new episode of Tips and tricks featuring @Christiaan_Brinkhoff and one of our amazing Windows 365 MVPs, @Ola Ström. You can catch up on demand and hear about Ola’s experiences as well as register for the next event on the Windows in the Cloud page.
Every week, users come to the MTC seeking guidance or technical support for their Microsoft solutions, and we want to help highlight a few of these each week in the hopes of getting these questions answered by our amazing community!
And for today’s fun fact… did you know that Merriam-Webster has a Time Traveler page where you can look up what year a word first entered the dictionary? You can even see what words were “born” the same year as you – “meh”, “photoshop”, and “URL” are just a few of mine. So interesting!
I hope you all have a great weekend and a happy Veterans Day. Thank you for your service!
This article is contributed. See the original author and article here.
With Microsoft Dynamics 365 Customer Service 2022 release wave 2, we’ve supercharged the humble bookmark. Now you can save views as report bookmarks. Get back to your personalized, filtered reports faster than a speeding bullet, no cape needed.
Leap tall buildings in a single bound
You likely have at least one dashboard you visit regularly to monitor reports, charts, and other visual breakdowns of your Customer Service KPIs and insights. Chances are, you apply the same filters every time you visit.
Stop wasting all that effort. Adjust the report filters as you like, just once, and save the filtered view as a named bookmark.
The next time you want to check that same view, let your report bookmark do the heavy lifting. With a single bound (er, click), the dashboard opens just the way you want it to.
Manage your report bookmarks just as easily
After you’ve created some report bookmarks, you won’t need abilities far beyond those of mortal men to keep them up to date. Need to change a filter value or add a whole new filter? No problem. Adjust the report filters to your liking, then select Bookmarks > Update Bookmark. If you don’t want to keep the change, one click resets everything back to the way it was. It’s that easy. If you no longer need a bookmark, delete it.
It’s just as easy to switch between your saved views using the new Bookmarks panel. You can even set a report bookmark as your personal default view every time you visit.
Bookmarks are available in historical analytics reports and knowledge analytics reports.
We plan to add more features, like bookmark groups, the ability to create a slideshow out of your bookmarks, and more. Stay tuned for the next exciting chapter!
This article is contributed. See the original author and article here.
Issue
A backend compatibility issue was encountered recently when the creation of a non-clustered index on a partitioned table in a Hyperscale Azure SQL Database failed with error 666. The table in question had almost 3.5 billion records and already had a clustered index and 3 other non-clustered indexes. You may receive an error as shown below:
Error
In addition to the error above, here is the error text: "Msg 666, Level 16, State 2, Line 25. The maximum system-generated unique value for a duplicate group was exceeded for the index with partition ID. Dropping and re-creating the index may resolve this; otherwise, use another clustering key."
Workaround/Mitigation
Customers hitting this problem are often recommended to try running the index creation at compatibility level 160 (instead of the current compatibility level), because compatibility level 150 or below might use a spool that is directly associated with uniqueifier identifiers, which have a maximum of 2,147,483,648. If this limit is reached, the index creation fails with the error mentioned above. (Please note that compatibility level is just one of the factors that may govern the use of a spool.) Here is the difference in execution plans when we use compatibility level 160 versus compatibility level 150 (in the current case); notice the index spool (highlighted in blue).
For database tables with billions of rows, even using compatibility level 160 may not be sufficient: the index creation may no longer encounter error 666, but it can eventually time out if the create index transaction exceeds 1 TB of generated transaction log.
The workaround is to make the index creation online and resumable by specifying ONLINE = ON, RESUMABLE = ON. With this, the operation uses many smaller transactions, and it is possible to resume it from the failure point if it fails for any other reason. Using resumable operations is one of the best practices with large tables. It should also be noted that the database scoped configuration ELEVATE_ONLINE is set to OFF during the index creation process (the default value of ELEVATE_ONLINE is OFF).
In some cases, if the customer has concerns about changing the compatibility level of the database to 160 permanently, we can also recommend changing the compatibility level of the DB to 160 just before the index creation, triggering the CREATE INDEX statement, and then changing the compatibility level of the DB back to 150 (after verifying that the index creation has started successfully), along the lines of the sketch below.
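As a rough sketch of that sequence (the database, table, index, and column names below are placeholders, not from the original case):
-- Temporarily raise the compatibility level so the plan avoids the uniqueifier-bound spool.
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 160;

-- Create the index online and resumable so it runs in many small transactions
-- and can be resumed from the failure point if it is interrupted.
CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn
    ON dbo.MyTable (MyColumn)
    WITH (ONLINE = ON, RESUMABLE = ON);

-- After the index creation has started successfully (or completed), revert if desired.
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 150;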
Other issues related to a similar error
Please note that resuming a failed index creation is a manual operation. You can do that by re-executing the original CREATE INDEX command; it will pick up from the point where it failed. Note that by default, paused resumable operations time out after 24 hours. You can control that using the PAUSED_RESUMABLE_INDEX_ABORT_DURATION_MINUTES database-scoped configuration.
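For example (a sketch with placeholder names; the duration value is illustrative):
-- Extend how long a paused resumable operation is kept before it is aborted (default 1440 minutes).
ALTER DATABASE SCOPED CONFIGURATION
    SET PAUSED_RESUMABLE_INDEX_ABORT_DURATION_MINUTES = 4320;

-- Resume a paused resumable index operation explicitly
-- (re-running the original CREATE INDEX statement has the same effect).
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable RESUME;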
It is worth keeping in mind that for some big partitioned tables, the rate of progress of the index creation can be slow if the table has only a few populated partitions. In the test case seen above, the table had only 2 populated partitions, and the current plan was running with parallelism (DOP 8), allocating one thread per partition for a total of 8 (plus one coordinator). But there were only two partitions, and since one of them was smaller, it had already been processed. So effectively the operation was now running single-threaded, reading data from the single remaining partition. The index creation process is usually faster if the data is less skewed across partitions, in which case the process can be made even faster by adding MAXDOP = 16 to the CREATE INDEX statement.
Monitoring the index creation
It is always recommended to monitor such index creation processes periodically to ensure they are progressing well and are not being blocked by other processes. Here are some of the DMVs that can help monitor such an index creation process (a sample query sketch follows the list):
Check resource utilization in sys.dm_db_resource_stats a few minutes after starting to create the index. If anything (other than memory and log IO) is above 80%, you may want to increase the number of cores even further.
The progress of the index creation can be tracked via sys.index_resumable_operations.
More info on waits can be obtained by querying the DMV sys.dm_exec_session_wait_stats.
The DMV sys.dm_exec_requests indicates whether the CREATE INDEX statement is blocked.
If you want to check on wait types and blocking, the DMV sys.dm_os_waiting_tasks can be very helpful.
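As a minimal sketch (the column selections are illustrative), a periodic progress and blocking check could look like this:
-- Overall progress of the resumable index creation.
SELECT name, state_desc, percent_complete, start_time, last_pause_time
FROM sys.index_resumable_operations;

-- Whether the CREATE INDEX session is currently waiting or blocked.
SELECT session_id, status, command, wait_type, blocking_session_id
FROM sys.dm_exec_requests
WHERE command LIKE 'CREATE INDEX%';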
This article is contributed. See the original author and article here.
In today’s business environment, efficiency is paramount for seller productivity. Sales teams must achieve more with less. Sellers are looking for tools to reduce the time-stealing work that gets in the way of engaging with customers. They need to keep focused and move from one call to the next with ease. Now Microsoft Dynamics 365 Sales can help. We are delighted to announce the general availability of the embedded Teams phone dialer to support outbound and inbound calls. The new phone dialer even automates note capture, improving data quality and ensuring sellers don’t miss a follow-up action. Sellers can take this a step further with optional conversation intelligence to get AI-generated analytics, meeting summaries, and follow-up actions.
Seller productivity benefits from an embedded Teams dialer
Sellers build customer relationships by capturing every nugget of insight they can from a call. With the embedded Teams dialer for Dynamics 365, sellers can make phone calls using the dial pad in the side panel or by selecting a phone number anywhere in Dynamics 365.
Digital selling teams using the sales accelerator can view all their upcoming actions and suggestions. Now they can easily call prospects from the same screen. Results are automatically tracked and summarized in the timeline, reducing the need for manual data input after each call.
The embedded dialing feature uses your organization’s existing Teams telephony service, supporting a Teams Calling Plan, Direct Routing, or Operator Connect.
Get real-time assistance from conversation intelligence
With Dynamics 365 AI-powered conversation intelligence, sellers get real-time assistance during sales calls. They can focus on building relationships and forget about forgetting. Sellers and managers can view aggregate statistics across the team. Reports highlight customer trends, help them understand the competition, and provide insights to coach sellers on best practices.
Let’s look at the capabilities and options available to help sellers stay focused on their best next actions.
Connect with customers right in Dynamics 365
With the Microsoft Teams dialer for Dynamics 365, sellers are more focused and efficient. Calling a customer is simple. Sellers can use any phone number recorded in Dynamics 365 to place a call. A built-in search tool makes finding contacts easier. Call activity is automatically logged with all essential details, sparing them tedious manual entry after the call and immediately increasing seller productivity.
The embedded Teams dialer also supports incoming calls. When sellers receive a call, the dialer searches Dynamics 365 for a potential matching record. Sellers can quickly open the relevant record, review the information, and be ready to answer the call with maximum context. If the search returns multiple matches, sellers can review the options in the incoming call notification and select the right one. If there are no matches, sellers can manually associate the contact with a new record that’s created automatically.
On top of improving seller preparation before the call, we are also supporting sellers during their sales calls by including a built-in notepad in the embedded dialer. Sellers can take notes during their calls without having to navigate elsewhere. The notes are automatically saved to the phone call’s activity timeline.
Easily enable calls to boost your seller productivity
Setting up the dialer experience is easy. Settings control how you enable it, for what types of calls, where, and for which security roles. Configure what works best for your business needs.
For example, you can enable the dialer for outbound calls only or both inbound and outbound depending on your teams’ work habits. You can enable it for inbound calls from external numbers only to help your sellers focus on customer engagement when they are in the Dynamics 365 environment. By default, the dialer displays in the Sales Hub, our default sales experience optimized for sellers. However, the dialer also supports custom apps. You decide which security roles you’d like to enable the experience for, making sure access is available only to those who need it.
Supercharge seller productivity with conversation intelligence
Seller productivity is at the core of a successful sales operation. With AI-powered conversation intelligence, sellers can focus on their conversations with customers, not on taking notes. With call recording enabled, conversation intelligence acts as an assistant right in the dialer’s side panel.
Sellers can view a real-time call transcription with business-critical insights such as key questions asked, detected action items, intelligent notes, and a call summary. The call summary provides a jump start to quality follow-up notes in the moment rather than piled up at the end of a week.
Managers have the tools they need to spot trends and better understand their customers and any patterns that need addressing.
Post-call analysis: Just a few seconds after a call ends, managers can access a rich call summary. The summary includes sentiment analysis, automatic call segmentation, call playback, and a transcript, where they can leave messages for their team members. Conversation intelligence also automatically tags calls, so managers know on which calls they should focus.
Better understand customers with advanced insights and interaction styles: Managers get a wide perspective of customers’ needs and interests in real time. They can use aggregated data to analyze market trends, rising competitors, and overall sentiment, and can dive into the details where needed.
Control conversation intelligence usage to support your sales team
We recognize that not all sales calls need to be treated equally. We made sure you can precisely control the usage of conversation intelligence capabilities across different locations and specialties.
For example, you can enable the capabilities based on security roles so that only the right people have access. Ensure compliance with any internal, external, or government policies by controlling:
whether calls are recorded
the way calls are recorded (manually or automatically)
who is being recorded (only sellers, or both sellers and customers)
where the analyzed data is stored
the data retention policy
You also control which languages are available for analysis. Decide the number of conversation intelligence processing hours available through the dialer experience to keep track of usage, spend, and adoption.
Help your sellers take back those lost hours of manual actions and keep on top of their growing customer relationships!