We are pleased to announce the security review for Microsoft Edge, version 121!
We have reviewed the new settings in Microsoft Edge version 121 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 117 security baseline continues to be our recommended configuration, which can be downloaded from the Microsoft Security Compliance Toolkit.
Microsoft Edge version 121 introduced 11 new computer settings and 11 new user settings. We have included a spreadsheet listing the new settings in the release to make it easier for you to find them.
As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here.
In a rapidly changing business landscape, organizations face numerous challenges in meeting their customers’ expectations and staying relevant. With the COVID-19 pandemic driving a sudden shift to remote work and the introduction of new technologies, many struggled to keep up with the pace of change and financial pressures.
In this blog post, we’ll showcase some of our recent success stories with Microsoft Dynamics 365 Sales customers in the financial services and manufacturing industries. From improving client relationships to streamlining operations and reducing overhead costs, each of these organizations uses Dynamics 365 Sales to overcome unique challenges and achieve outstanding business outcomes. So, whether you’re in financial services or manufacturing, join us as we explore real-world examples of how Dynamics 365 Sales can help you succeed in today’s market.
Succeed with Dynamics 365 Sales
Streamline operations and achieve outstanding outcomes.
Revolutionizing the finance industry: How Dynamics 365 Sales is helping financial institutions build stronger client relationships
Customers and investors of banks and insurance companies expect a personalized experience that incorporates their unique needs. Long-term clients expect these institutions to know them, and proactively approach them with services that are relevant to them. However, large financial institutions tend to spread across the globe, and different divisions must offer different services and products based not only on local markets, but also on changing regulations.
To tackle these challenges, Investec, a global financial services company, uses conversation intelligence in Dynamics 365 Sales to transcribe sales calls accurately and analyze their content. This has helped the company build stronger client relationships, identify appropriate next steps, and ultimately save time and reduce overhead costs.
Franklin Templeton is one of the largest asset management companies in the world and prides itself on effective stewardship of its clients’ capital. After recent acquisitions, it aimed to restructure its many inherited customer relationship management (CRM) systems under one do-it-all sales platform to gather customer data efficiently. Through proof-of-concept trials, the Franklin Templeton technology team found Dynamics 365 Sales to be the best CRM solution for its pre-built integrations and user-friendly interfaces, improving its relationships with customers and streamlining its operations.
Empowering the manufacturing industry with Dynamics 365 Sales
In the manufacturing industry, companies are required to coordinate their work with multiple internal departments, partners, and customers. At the same time, buyers are looking for consistent experiences. Traditional dealer networks have been key in this industry, but now, end customers are looking for direct contact with the manufacturer. Let’s have a look at some of the successful Dynamics 365 Sales customers in this industry.
Lexmark, a global provider of printing and imaging technology, needed a sustainable path to digital transformation by overhauling its sales and reporting processes. Lack of integration between different platforms within the company and its complex product and service ecosystem made it difficult to build configurations using its old configure, price, quote (CPQ) system. Using Dynamics 365 and Experlogix CPQ, the company integrated its CRM and CPQ system, resulting in a 43% drop in quote revisions and significant reduction in time-to-quote.
Andreas Stihl AG & Co. KG, a German-based company, develops, manufactures, and distributes power tools for professional and private users in the forestry and agriculture, landscape maintenance, and construction sectors. STIHL’s customers want consistent experiences across all touchpoints—online, print, or on-site at the dealer. However, STIHL didn’t have a unified CRM system. To overcome these challenges, STIHL adopted a central solution that would bring transparency to its business processes, combining dealer and customer data. STIHL rolled out its OneCRM, basing it on Dynamics 365 Sales and Customer Service. This solution provides a 360-degree view of customers and specialist dealers. By implementing this solution, STIHL significantly sped up its customer support response, and improved transparency within and between its sales subsidiaries worldwide.
As a premier supplier of transportation solutions, Siemens Mobility has spent 160 years handling complex solutions that require working with many departments, customers, and partners. In spring of 2020, the company had an urgent need for a CRM solution that could keep pace with its highly collaborative selling process and intricate customer journey. In just five months, Siemens transitioned fully to the new CRM solution. Since then, Siemens Mobility has been using Dynamics 365 to personalize and streamline marketing communication and to accelerate its tender-based sales processes. Dynamics 365 is used all the way from lead acquisition to deal closure, including lead generation, lead qualification, and account and opportunity management. With all these processes in the same system, Siemens can easily follow process performance across all touchpoints and continue tuning its ways of working to keep equipping the world with seamless, sustainable, and reliable transport solutions.
Dynamics 365 provides visibility on all touchpoints within a sale or service at Siemens Mobility.
Looking ahead with Copilot in Dynamics 365 Sales
You’ve seen how Dynamics 365 Sales has helped five customers from the financial services and manufacturing industries achieve their sales goals. Each faced unique challenges, but they all shared a common vision: to deliver more value to their customers. They’re not done yet—some of them are already exploring Microsoft Copilot for Sales capabilities to gain further insights and guidance.
“At Investec, we are very excited to see how we can leverage Copilot and AI within the Microsoft stack to connect our internal teams and to enhance our understanding further of prospective and current clients to ensure we are providing a best-in-class experience.”
—Dan Speirits, CRM Product Manager at Investec
Join our customers on their continued journey, ensuring their success and their customers’ success with the use of Dynamics 365 Sales and Microsoft Copilot for Sales.
Over the past year, generative AI has seen tremendous growth in popularity and is increasingly being adopted by people and organizations. At Microsoft, we are deeply focused on minimizing the risks of harmful use of these technologies and are committed to making these tools even more reliable and safe.
Introduction
There are scenarios wherein customers want to monitor their transaction log space usage. Azure Monitor currently offers options to monitor Azure SQL Managed Instance metrics like CPU, RAM, and IOPS, but there is no built-in alert to monitor transaction log space usage.
This blog will guide you through setting up an Azure Runbook and scheduling the execution of DMVs to monitor transaction log space usage and take appropriate actions.
Overview
Microsoft Azure SQL Managed Instance enables a subset of dynamic management views (DMVs) to diagnose performance problems, which might be caused by blocked or long-running queries, resource bottlenecks, poor query plans, and so on.
Using DMVs, we can also track log growth: find the usage as a percentage, compare it to a threshold value, and create an alert.
In Azure SQL Managed Instance, querying a dynamic management view requires VIEW SERVER STATE permissions.
GRANT VIEW SERVER STATE TO database_user;
Monitor log space use by using sys.dm_db_log_space_usage. This DMV returns information about the amount of log space currently used and indicates when the transaction log needs truncation.
For information about the current log file size, its maximum size, and the auto grow option for the file, you can also use the size, max_size, and growth columns for that log file in sys.database_files.
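As an illustration, here is a minimal sketch of checking both views from PowerShell, assuming the SqlServer module is available; the server, database, and credential values below are placeholders:

# Minimal sketch; server, database, and credentials are placeholders
$query = @"
SELECT used_log_space_in_percent, total_log_size_in_bytes / 1048576.0 AS total_log_size_mb
FROM sys.dm_db_log_space_usage;
SELECT name, size / 128 AS size_mb,
       CASE max_size WHEN -1 THEN NULL ELSE max_size / 128 END AS max_size_mb,
       growth  -- in 8-KB pages, or a percentage if is_percent_growth = 1
FROM sys.database_files
WHERE type_desc = 'LOG';
"@
Invoke-SqlCmd -ServerInstance "tcp:<your-instance>.database.windows.net,3342" -Database "AdventureWorks2017" -Username "<user>" -Password "<password>" -Query $query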
Solution
The PowerShell script below can be used inside an Azure Runbook, and alerts can be created to notify users about the log space used so they can take the necessary actions.
# Ensures you do not inherit an AzContext in your runbook
Disable-AzContextAutosave -Scope Process
$Threshold = 70 # Change this to your desired threshold percentage
try
{
"Logging in to Azure..."
Connect-AzAccount -Identity
}
catch {
Write-Error -Message $_.Exception
throw $_.Exception
}
$ServerName = "tcp:xxx.xx.xxx.database.windows.net,3342"
$databaseName = "AdventureWorks2017"
$Cred = Get-AutomationPSCredential -Name "xxxx"
$Query="USE [AdventureWorks2017];"
$Query= $Query+ " "
$Query= $Query+ "SELECT ROUND(used_log_space_in_percent,0) as used_log_space_in_percent FROM sys.dm_db_log_space_usage;"
$Output = Invoke-SqlCmd -ServerInstance $ServerName -Database $databaseName -Username $Cred.UserName -Password $Cred.GetNetworkCredential().Password -Query $Query
$LogSpaceUsedPercentage = $Output.used_log_space_in_percent
if($LogSpaceUsedPercentage -ge $Threshold)
{
# Raise an alert
$alertMessage = "Log space usage on database $databaseName is above the threshold. Current usage: $LogSpaceUsedPercentage%."
Write-Output "Alert: $alertMessage"
# You can send an alert using Send-Alert cmdlet or any other desired method
# Send-Alert -Message $alertMessage -Severity "High" Via EMAIL - Can call logicApp to send email, run DBCC CMDs etc.
} else {
Write-Output "Log space usage is within acceptable limits."
}
There are different alert options you can use to send an alert in case log space exceeds its limit; for example, the runbook can call a Logic App to send an email, as sketched below.
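Here is a minimal sketch of posting the alert to an HTTP request-triggered Logic App (which in turn sends the email); the trigger URL is a placeholder you would copy from your own Logic App:

# Sketch: post the alert to an HTTP request-triggered Logic App that sends the email
# The trigger URL below is a placeholder; copy the real one from your Logic App designer
$logicAppUrl = "https://<region>.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?api-version=2016-10-01&sp=...&sig=..."
$payload = @{ subject = "Transaction log alert: $databaseName"; body = $alertMessage } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $logicAppUrl -Body $payload -ContentType "application/json"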
If you have feedback or suggestions for improving this data migration asset, please contact the Data SQL Ninja Engineering Team (datasqlninja@microsoft.com). Thanks for your support!
Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide
As Microsoft continues to invest in AI technologies across Dynamics 365 and Power Platform, many enterprise organizations are rapidly adopting Copilot in Microsoft apps such as Dynamics 365 Customer Service. Unlike solutions in other business areas, customer service solutions are particularly sensitive for a couple of reasons.
First, the customer service team acts as the organization’s frontline, dealing directly with customer inquiries and issues. Moreover, most interactions between support agents and customers occur in real-time, leaving zero tolerance for error. Any customer frustration can easily impact the customer satisfaction rate.
Additionally, introducing a new tool like Copilot in Customer Service to customer service agents must be well-tested and validated. In the era of AI and generative AI, organizations face the critical question of how to build their testing strategy for these innovative tools.
Copilot business value
Before delving into Copilot test cases, let’s quickly discuss the business value of Copilot in Customer Service. Copilot and AI features in the customer service world act as an agent assistant. Copilot helps agents with tasks such as retrieving information from the knowledge base, drafting emails, or providing quick summaries of customer conversations or cases with long threads, multiple notes, and emails.
Leveraging Copilot in Customer Service brings quick wins to the business. For instance, reduced handle times for customer requests allow agents to focus on core tasks. And since agents can provide more accurate and timely responses, organizations see improved customer satisfaction levels.
A closer look at each Copilot feature reveals the need for agent review before presenting any information to the customer. Take, for example, the case summary feature. A disclaimer indicates that this is an AI-generated summary, emphasizing the need to “Make sure it’s appropriate and accurate before using it.” This highlights the critical role of human oversight in ensuring the accuracy and appropriateness of AI-generated content. It reinforces the value of Copilot as a supportive tool rather than a replacement for human judgment and expertise.
Defining success metrics
Having covered the basics, it’s crucial to establish success metrics for implementing Copilot in Customer Service. Most enterprise customers follow a standard process for introducing new tools or features. While this approach is recommended and applicable to almost all new Dynamics 365 features, the success criteria for Copilot should address several specific factors, due to its unique functionalities and impact:
Time efficiency: Measure the amount of time Copilot saves agents in performing their tasks. This can be quantified by comparing the time taken to complete tasks with and without the assistance of Copilot.
Relevance and helpfulness of responses: Evaluating Copilot’s responses isn’t as straightforward as saying they’re right or wrong. Measure their effectiveness with a percentage that shows how relevant and helpful these responses really are. When it comes to measuring Copilot’s impact, we look at it like this:
Totally irrelevant: Assistance that does not address the agent’s inquiry at all, providing no useful information for handling customer queries.
Partially helpful: Responses that offer some relevant information but may not fully equip the agent to resolve the customer’s issue, possibly requiring further clarification or additional resources.
Mostly helpful: Assistance that is largely on point, providing substantial information and guidance towards resolving the inquiry, with minimal need for further action.
Completely helpful: Responses that fully equip the agent with the necessary information and resources to address and resolve the customer’s issue without any need for additional support or clarification.
Agent satisfaction and ease of use: Assess how user-friendly and intuitive Copilot is for customer service agents. Agent satisfaction with the tool can be a key indicator of its usability and effectiveness in a real-world setting.
Impact on customer satisfaction: Monitor changes in customer satisfaction metrics. You can do this through surveys or analyzing customer feedback. See if there is a noticeable improvement due to the implementation of Copilot.
Return on investment: Consider the overall costs versus the benefits of implementing Copilot. This evaluation is crucial, as it is important to test and evaluate any feature intended for user adoption. Remember, Copilot is not a new product but a feature within Dynamics 365 Customer Service. It incurs no extra cost for most customers.
Start your Copilot journey with confidence
The best way to test and measure Copilot’s success is through real scenarios, real agents, and real customers in a production environment. This is why we recommend starting quickly with a pilot or initial phase and gradually rolling out Copilot capabilities. You can closely monitor the results and feedback during the initial phase.
We use the name ‘Copilot’ and not ‘Autopilot’ for a good reason. Essentially, Copilot in Customer Service acts as an assistant to the agents. While it proves useful in some situations, there are instances when questions or requests become too complex, requiring human expertise. However, even in these scenarios, business operations continue seamlessly, thanks to the human agents.
In Customer Service, think of each Copilot feature as being in one of two categories: those that do not rely on the knowledge base and those that do.
The easiest way to begin is with the first category, which includes summarization features. This category has minimal risk and requires less change management effort. This article provides in-depth information on this.
Test and optimize Copilot
A pilot phase is vital for testing Copilot, where you will document the results and collect feedback from your agents. The best candidates for the pilot phase are the highly skilled agents. They have the expertise to deal with customer questions efficiently, allowing them to give thorough feedback without affecting the normal call center functions. Moreover, they help ensure the proper use of Copilot, avoiding any incorrect or unverified information being passed from Copilot to customers.
During the pilot phase, you need to keep track of your success metrics and aim for ongoing improvement. This mainly involves improving the knowledge base articles. Copilot in Customer Service is not a magic tool; its performance depends on the quality of the information it can access. Providing Copilot with clear and complete knowledge articles will help it to produce clear and correct results.
Microsoft is heavily investing in integrating AI capabilities into Dynamics 365. Organizations with live implementations of Dynamics 365 Customer Service should view this as an opportunity to enhance their customer service operations. While testing remains essential, they should not hesitate to deploy these native capabilities in production mode, especially since Copilot in Customer Service comes without any extra licensing costs.
Generative AI is evolving rapidly, and organizations that start to adopt and utilize it early will secure a competitive advantage in the future!
The pace of business operations continues to accelerate daily, presenting an ongoing challenge for employees who increasingly struggle to keep up. They often find themselves overwhelmed by the very tools designed to enhance their work, leading to frequent switching between various documents, apps, and websites as they hunt for data.
Harvard Business Review addresses this issue in its article How Much Time and Energy Do We Waste Toggling Between Applications?, where it describes the prevalent “swivel chair” approach to work, which has become the norm for most employees. This is primarily because many software applications weren’t originally designed to connect with each other. Consequently, employees often serve as the connective tissue bridging the gap between these disparate applications. They engage in manual processes of fetching, transforming, and submitting data from one system to another, constantly shifting between apps. This practice is both time-consuming and mentally draining.
The true cost of this constant app-switching becomes apparent when we consider that the average user toggles between different apps nearly 1,200 times each day, spending approximately four hours per week reorienting themselves after switching to a new application. Annually, this adds up to a staggering five working weeks, accounting for a significant 9% of their total work time.1
What’s the solution to this productivity-sapping dilemma? The answer lies in connecting business systems and productivity tools, providing employees with easy access to the information they need without switching between applications. Seamless sharing of data across tools and applications not only simplifies access for employees but also lays the foundation for AI and Microsoft Copilot to offer proactive insights and assistance within their everyday tools.
With Microsoft Dynamics 365 Business Central and Microsoft 365, businesses can establish a unified experience where data seamlessly connects with productivity apps including Excel, Outlook, and Microsoft Teams. This connectivity ensures that employees can access timely information, gain valuable insights, and collaborate directly within the tools they use daily—all without the need to switch between applications.
By harnessing connected solutions powered by real-time data, businesses can begin to unlock the full potential of AI-enabled productivity with Microsoft Copilot. With Copilot, businesses can automate tasks and guide users through assisted workflows—saving them time, improving collaboration, enhancing decision-making, and allowing employees to focus on what truly matters—driving business success.
More collaboration with Business Central and Teams
Modern workplaces are challenged by fragmented data and communication tools—employees often find themselves juggling various apps and struggling to disseminate timely information, making collaboration difficult. With Teams connected to Business Central, your organization can efficiently share and interact with real-time data, transforming Teams into a centralized hub for your daily operations.
With Business Central and Teams, employees can:
Make data accessible and collaborative. With Teams connected to Business Central, timely data can be shared in group chats or channels, transforming Teams into the hub for daily operations, uniting employees, processes, and the data they need to work together.
Take action from the app they prefer. From Business Central, quickly share data to jumpstart conversations in Teams. From Teams, stay in the flow of work by viewing and editing business data without having to switch apps.
Streamline collaboration across departments. Empower each department to self-serve by unlocking the data they need to work better together—even without a Business Central license. Get read-only access to Business Central data in Teams at no additional cost with your Microsoft 365 license.
More productivity with Business Central and Excel
Employees often find themselves working in specific applications that align with their roles and responsibilities. For finance and operations teams, Excel is a fundamental tool that plays a pivotal role in their daily tasks. Enabling these teams to maximize their productivity within their preferred application can lead to significant productivity gains.
With Business Central and Excel, employees can:
Simplify daily tasks. Export any Business Central data to an Excel worksheet to capture data snapshots or share for review. Save time by updating records in bulk in Excel and uploading the revised records to Business Central with just a few clicks.
Go from raw data to insights faster. Get timely operational insights from Business Central as Excel reports and adapt quickly by customizing report layouts as Excel worksheets. Easily analyze transactions and business data using pivot tables, charts, and calculations to get answers quickly.
Collaborate in the tools where teams work best. Streamline team-based activities like budgeting and planning with multi-user co-authoring functionality. Create, edit, and access Excel documents as a team, then publish the final outcomes back to Business Central.
More impact with Business Central and Outlook
At the core of most businesses lies the unwavering commitment to deliver exceptional products and services to its customers. To achieve this goal, businesses must foster strong and meaningful relationships with their clients, vendors, and stakeholders. With Business Central and Outlook working together, employees gain valuable business insights delivered directly to their inbox, so they can save time while staying focused on delivering extraordinary experiences.
With Business Central and Outlook, employees can:
Enhance customer experience directly from their inbox. Connect real-time data from Business Central to Outlook. Save time with visibility into customer and vendor information like sales, purchase details, and more without leaving their inbox.
Stay in the flow of work. Use templates to quickly send payment reminders, order confirmations, and other emails directly from Business Central connected to a shared mailbox.
Go from quote to cash without leaving Outlook. Set up customers or vendors, create quotes, submit invoices, and more from within Outlook so employees can focus on the task at hand.
Embrace the future of work with AI and Dynamics 365 Business Central and Microsoft 365
When Dynamics 365 Business Central and Microsoft 365 work together, small and medium-sized businesses can boost productivity and redefine how work gets done. With data delivered directly to familiar apps like Excel, Outlook, and Teams, employees get the information they need without switching between applications. Using next-generation AI with Microsoft Copilot, employees can further streamline routine tasks like drafting content, summarizing meetings, providing email follow up, and quickly finding answers to questions—all within the tools where they work best.
Ready to take your productivity to the next level with Copilot for Microsoft 365? FastTrack for Microsoft 365 is here to help you get started! FastTrack is a service designed to help organizations seamlessly deploy Microsoft 365 solutions to better allow users to work effectively and productively. FastTrack assistance is available for customer tenants with 150 or more licenses from one of the eligible plans from the following Microsoft product families: Microsoft 365, Office 365, Microsoft Viva, Enterprise Mobility & Security, and Windows 10/11. These plans can be for an individual product (like Exchange Online) or a suite of products (Office 365 E3).
FastTrack is a benefit that supports the readiness of customers to prepare for Copilot enablement. With FastTrack, you can confirm that you meet the minimum required prerequisites to enable Copilot across your users and find opportunities for optimizing the Copilot experience. This would include the deployment of Intune, Microsoft 365 Apps, Purview Information Protection, and Teams Meetings. As part of this process, FastTrack can also recommend best practices for driving healthy usage across your organization.
Tenant admins can leverage the FastTrack self-service deployment guide for Copilot for Microsoft 365 to start implementing the prerequisites, allowing your organization to transform collaboration and take advantage of AI to automate tasks such as writing, editing, and data visualization across Word, Excel, PowerPoint, Outlook, and Teams. Copilot also simplifies the creation of meeting summaries, making it easier to catch up and collaborate asynchronously. Our setup guide facilitates smooth integration, allowing your organization to automate work processes and enhance collaboration seamlessly.
Context
Azure OpenAI Service’s Provisioned Throughput Units (known as “PTUs”) have been all the rage over the past few months. Every enterprise customer has been wanting to get their hands on their own slice of Azure OpenAI Service. With PTUs, they can run their GenAI workloads in production at scale, with predictable latency and without having to worry about noisy neighbors. Customers of all sizes and from all verticals have been developing groundbreaking applications, usually starting with the Pay-as-you-go (PayGo) flavor of Azure OpenAI. When the time comes to deploy an enterprise-grade application to production, however, most rely on reserving capacity with PTUs. These are deployed within your own Azure Subscription and allow you to enjoy unencumbered access to the latest models from OpenAI, such as GPT-4 Turbo. Because PTUs are available 24/7 throughout the month, customers need to shift the paradigm from utilizing tokens to utilizing time when considering cost. With this shift often comes the challenge of knowing how to right-size their PTUs.
To aid in that exercise, Microsoft provides tools such as the PTU calculator within the AI Studio experience. These tools, however, make assumptions, such as PTUs being able to handle peak load. While this can be a valid approach in many cases, it’s only one way of thinking about choosing the right size for a deployment. Customers often need to consider more variables, including sophisticated architectures, to get the best return on their investment.
One pattern that we have seen emerge is the spillover, or bursting, pattern. With this pattern, you do not provision PTUs for peak traffic. Instead, you define the amount of PTU-serviced traffic that the business can agree upon, and you route the overflow to a PayGo deployment. For example, your business may decide that it’s acceptable to have 90% of the traffic serviced by the PTU deployment with a known latency and to have the 10% of overflow traffic serviced with unpredictable performance through a PayGo deployment. I’ll go into detail below on when to invoke this pattern more precisely, but if you are looking for a technical solution to implement it, you may check out this post: Enable GPT failover with Azure OpenAI and Azure API Management – Microsoft Community Hub. The twist is that depending on the profile of your application, this 10% of degraded performance can save you north of 50% in unused PTU cost.
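To make the pattern concrete, here is a minimal sketch in PowerShell, with placeholder resource and deployment names: it tries the PTU deployment first and, when Azure OpenAI answers with HTTP 429 (throttled), retries the same request against a PayGo deployment. A production implementation would typically live in a gateway such as Azure API Management, as described in the post linked above.

# Spillover sketch: resource name, deployment names, and API version are placeholders
$headers = @{ "api-key" = $env:AOAI_KEY }
$body = '{"messages":[{"role":"user","content":"Hello"}]}'
$base = "https://<your-resource>.openai.azure.com/openai/deployments"
try {
    $response = Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Body $body -Uri "$base/<ptu-deployment>/chat/completions?api-version=2024-02-01"
}
catch {
    if ($_.Exception.Response.StatusCode.value__ -eq 429) {
        # PTU deployment is saturated; spill over to the PayGo deployment
        $response = Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Body $body -Uri "$base/<paygo-deployment>/chat/completions?api-version=2024-02-01"
    }
    else { throw }
}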
If, as you’re reading this, you have found yourself in this predicament, you have come to the right place. In this blog post, we try to convey the message that PTUs done right are not necessarily expensive, by characterizing customers’ scenarios anecdotally. The three application scenarios we will review are known as: The Unicorn, The No-Brainer, and The Problem Child.
The Unicorn
We will go quickly over the Unicorn since nobody has ever seen it, and it might not even exist. But just in case: the Unicorn application sends and receives tokens at a perfectly steady rate, weekdays, weekends, daytime, nighttime. If you ever have one of those, PTUs make perfect sense; you get maximum value and leave no crumb on the table. And if your throughput is meaningful in addition to being constant, you will likely also save lots of money compared to a PayGo deployment, in addition to reaping the predictable and low latency that comes with PTUs.
The No-Brainer
Next up is our No-Brainer application. The No-Brainer application profile has mild peaks and valleys. The application sends a constant baseline of tokens to the model, but perhaps there are a couple of peak hours during the day where the application sends a little more. In this case, you sure could provision your PTU deployment to cover the valley traffic and send anything extra to a PayGo deployment. However, in the No-Brainer application, the distance between our peak and valley is minimal, and, in this case, the juice might not be worth the squeeze. Do we want to add complexity to our application? Do we want to invest the engineering time and effort to add routing logic? Do we want to introduce possibly degraded service to our application, and perhaps not even be able to provision a smaller increment of PTUs? Again, it all comes down to the distance between your peaks and valleys. If those are close, purchase enough PTUs to cover the peak. No brainer.
The Problem Child
The Problem Child is the application where the traffic is bursty in nature and the variance in throughput is high. Perhaps the end of the quarter is near, and the company is behind on revenue, so every seller is hitting their sales copilot hard for a couple of days in an attempt to bridge the gap to quota. How do we best cover the Problem Child with PTUs?
Option 1: Provision for peak
As we discussed above, our first inclination could be to provision for peak, and that is also what most calculators will assume you want to do, so that you can cover all demand conservatively. In this instance, you maximize user experience because 100% of your traffic is covered by your PTU deployment and there is no such thing as degraded service. Everyone gets the same latency for the same request every time. However, this is the costly way to manage this application. If you cannot use your PTU deployment outside peak time, you are leaving PTU value on the table. Some customers are lucky enough to have both real-time and batch use cases. In this case, the real-time use cases utilize the PTU deployment during business hours; during downtime, the customer is then free to utilize the deployment for the batch inferencing use cases and still reap the PTU value. Other customers operate across several time zones, and when one team goes offline for the day, another team 8 hours behind comes online, and the application maintains a steady stream of tokens to the PTU deployment. But for a lot of customers, there isn’t a way to use the PTU deployment outside of peak time, and provisioning for peak might not always be the soundest business decision. It depends on budgets, UX constraints, and, importantly, how narrow, tall, and frequent the peak is.
Option 2: Provision for baseline
In option 2, the business is amenable to a trade-off. With this trade-off, we bring our Azure OpenAI cost significantly down at the expense of “some” user experience. The hard part is to determine how much of the user experience we are willing to sacrifice and at what monetary gain. The idea here is to evaluate the application on a PayGo deployment and see how it performs. We can consider this to be our degraded user experience. If it so happens that our peaks are tall, narrow, and rare, and if we are willing to say that it’s acceptable for a small slice of our traffic to experience degraded performance during peak time, then it is highly conceivable that sacrificing 5% of your requests by sending them to a PayGo deployment could yield 30%, 40%, maybe 50% savings compared to option 1, provisioning for peak.
The recognized savings will be a function of the unused peak capacity you no longer provision and pay for, as illustrated below.
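As a purely illustrative back-of-the-envelope calculation (all numbers are hypothetical, not real prices or sizes):

# Hypothetical, relative numbers for illustration only
$ptuForPeak     = 300    # PTUs required to cover peak traffic (option 1)
$ptuForBaseline = 100    # PTUs required to cover baseline traffic (option 2)
$ptuUnitCost    = 1.0    # relative monthly cost per PTU
$overflowCost   = 30.0   # relative monthly PayGo cost of the ~5% overflow traffic

$costOption1 = $ptuForPeak * $ptuUnitCost                       # = 300
$costOption2 = $ptuForBaseline * $ptuUnitCost + $overflowCost   # = 130
"Savings: {0:P0}" -f (1 - $costOption2 / $costOption1)          # ~57% in this example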
Conclusion
PTUs can be perceived as expensive, but that perception usually assumes provisioning for peak. And provisioning for peak could indeed be the best way to go if your business is such that you want to always ensure minimum latency and the best possible user experience. However, if you are willing to combine PTUs with a little bit of PayGo (if your application profile lends itself to it), you could realize significant savings and reinvest the leftovers in your next GenAI project on Microsoft Azure… and also buy me a latte.
More Speaking in Ciphers and other Enigmatic Tongues with a focus on SCHANNEL hardening.
Hi! Jim Tierney here again to talk to you about Cryptographic Algorithms, SCHANNEL and other bits of crypto excitement. I have elucidated at length on this topic in this post which had been updated a few years back to the aptly titled, Speaking in Ciphers and other Enigmatic tongues…update!
I am creating this brand-new piece of content in this crypto space to further discuss different Microsoft supported methods that can be used to disable weak cipher suites and protocols.
The scenario we are addressing is that your company is doing a vulnerability and compliance assessment, and they just ran a scanning tool against all your Windows Servers. The software reports back that you have weak ciphers enabled, highlighted in RED and including a link to the following Microsoft documentation – KB245030 How to Restrict the Use of Certain Cryptographic Algorithms and Protocols in Schannel.dll: http://support.microsoft.com/kb/245030/en-us
You immediately open a case with Microsoft asking…. What can I do? What can I do?
There are two Microsoft-supported methods of configuring cipher suites: modifying the registry directly, or using Group Policy.
NOTE: We strongly suggest NOT modifying the registry location directly. Instead, we recommend leveraging the Group Policy setting below to manage the list of ciphers supported in the Operating System. If you modify the default registry location and the Microsoft development team later adds support for a new cipher, an update could end up putting back ciphers you had removed.
Configuring the Group Policy for Cipher suite ordering/content will overrule what is listed in this default location. Here is the location of the Cipher Suite ordering group policy:
Computer Configuration\Administrative Templates\Network\SSL Configuration Settings\SSL Cipher Suite Order
Remember, when configuring the Cipher Suite Order policy, if the 1023-character size is exceeded, cipher suites will be truncated, because the list cannot exceed the 1023-character limitation of the policy setting.
*In addition, Windows Server 2016 and newer do not require the _PXXX suffixes, so the list of cipher suites is a lot shorter. Please note that Windows 10/Server 2016 and above solve this problem in two ways:
Elliptical Curve (EC) suffixes (also known as the _P values) are no longer part of the cipher suite names, therefore there is no more Cartesian explosion of cipher suite names (e.g., TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384, …)
In Windows 10, curves are prioritized separately from cipher suites, which means the cipher suite list in the GP Editor is much shorter.
NOTE: These EC suffixes ARE required for Windows Server 2012 operating systems to limit the ciphers on the OS. However, Windows 10/2016 OS DOES NOT support these cipher names. So, if you still need to support Windows Server 2012 (you have my sympathy) then you will need to have a GPO for this OS specifically, and then we would also recommend that the GPO be configured with a WMI Filter for the OS version.
Specifically for Windows PowerShell, you may need to update PowerShell scripts or the related registry settings to ensure TLS 1.2 is used.
Restricting the supported TLS/SSL protocols that are used: if you have been using an old, moldy script to configure SCHANNEL content on your Windows servers, you must seriously consider updating or rethinking this method. Figure out the SCHANNEL protocols you want to disable on ALL these servers and configure ONLY WHAT YOU WANT DISABLED. TLS 1.2 is ENABLED by default in EVERY OS starting with WINDOWS 2012. YOU DO NOT NEED TO CREATE A REGISTRY SETTING FOR TLS 1.2.
Enforcing the use of TLS 1.2 will require DISABLING every other protocol (i.e., TLS 1.0 and 1.1). Disabling SCHANNEL protocols and cipher suites can affect interoperability, especially connectivity to applications, services, and servers that are not current versions of their product.
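For instance, here is a minimal sketch of disabling TLS 1.0 for the server role using the documented SCHANNEL Protocols registry key; adapt it for TLS 1.1 and for the Client subkey as needed, and test thoroughly before deploying broadly (a reboot is required for the change to take effect):

# Sketch: disable TLS 1.0 (server side) via the documented SCHANNEL Protocols key
$key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "Enabled" -Value 0 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $key -Name "DisabledByDefault" -Value 1 -PropertyType DWord -Force | Out-Null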
What Ciphers should I leave enabled?
My advice regarding ciphers is to stick with the default cipher suites for your Windows version. These ciphers are carefully chosen and prioritized to provide a balance of interoperability, performance, and security. If there are specific security requirements, then a change to the list of cipher suites and their priorities is needed. Some applications (third party or Microsoft) may still need lesser TLS versions, so testing any SCHANNEL registry modifications is necessary.
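On Windows 10/Server 2016 and newer, you can inspect the current cipher suite list, and remove a specific suite if a documented security requirement demands it, with the built-in TLS cmdlets (remember that the Group Policy discussed above will overrule local changes):

# Inspect the current cipher suite order
Get-TlsCipherSuite | Format-Table Name
# Remove a specific suite, e.g., 3DES, only if your security requirements demand it
Disable-TlsCipherSuite -Name "TLS_RSA_WITH_3DES_EDE_CBC_SHA"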
Applications that might need older protocol versions.
WinINET – Typically these are user-based applications, like any Office application, that run under the user account logged onto the system. They are going to be applications that run with an interactive desktop, such as Internet Explorer, or Edge running in IE (Internet Explorer) Mode. This does NOT include Edge/Chromium browsers, however.
If you are still with me and have been poking around in the registry (on a test computer), you may have noticed the following location and would like some information regarding it: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010003
The value content of this location only affects TLS 1.2
Operating systems prior to Windows 2008 SP2 standard do not support this value item.
The data in the Functions value refer to the signature/hash combinations that are supported on TLS 1.2 certificate chains (excluding the root) as well as the signature/hash combinations that can be used when signing TLS 1.2 messages such as the ServerKeyExchange message and the CertificateVerify message.
The value in the (Default) location, NCRYPT_SCHANNEL_SIGNATURE_INTERFACE tells the server which signatures it can use to sign the ServerKeyExchange message and which signatures are allowed when verifying the server certificate chain.
These settings have nothing to do with disabling weak protocols or ciphers and should not be modified. EVER!
I Just want the SCHANNEL Registry values to implement please.
If you are looking for just a quick list of SCHANNEL registry values to implement to help you pass a Security Scan/Audit here is an incredibly good list of values to implement to make sure the OS is not vulnerable to these older exploits.
You should use this as a guide for modifying the SCHANNEL protocols and the list of default ciphers, and for removing the weaker ones completely. It should be a favorite in your browser settings and currently be open in the tab right next to the one you are using to read this article.
Regarding 3DES:
Sweet32 is a cryptographic attack against short block size (64-bit block) ciphers.
Vulnerability scanners will trigger this if a 3DES cipher suite is present. In Windows server, 3DES cannot be used as the only cipher but it is acceptable as an optional cipher suite for backward compatibility.
This is the minimum cipher in the negotiation list, so it is used only as a last resort.
TLS_RSA_WITH_3DES_EDE_CBC_SHA must not be offered on its own as it is considered inferior to the other cipher suites but should be offered for FIPS (Federal Information Processing Standards) constrained clients that do not have AES-based cipher suites available.
Microsoft also mitigates usage of this cipher by removing 3DES from the available ciphers in the FalseStart list, which prevents a MiTM (machine-in-the-middle) attack from forcing an encryption downgrade.
Vulnerability scanners should not be simply searching for registry keys indicating something is disabled (3DES). They should be reporting on configured Cipher Suites if they include 3DES.
Lucky Thirteen vulnerability mitigation options include:
Disabling TLS 1.0 entirely.
The removal of all cipher block chaining (CBC) ciphers. EXAMPLE – TLS_RSA_WITH_AES_256_CBC_SHA256
There are a couple of CBC ciphers that are still supported in Windows 10.
I made all the changes to the SCHANNEL registry values, and even rebooted my server, but some endpoints are still showing as vulnerable when I run my security scanning software again. Why did this not fix all my problems?
Keep in mind that Microsoft’s is not the only TLS implementation on the scene. Java and OpenSSL are just a couple of the third-party SSL/TLS implementations that do not leverage the Microsoft SCHANNEL Security Support Provider Interface (SSPI) at all. If you have implemented the above registry values, rebooted the server, and the scanning tool is still showing a vulnerability, it is time to start thinking that this may be an application that is not using the Microsoft implementation of SSL/TLS. To investigate this:
The first thing to do is look at your scan report and determine what network port or ports the scanning tool is indicating are still vulnerable.
On the computer being reported as vulnerable, open an elevated command prompt and type: netstat -anob > %ComputerName%_Netstat.txt
Once it is done, open the text file created and search for the port determined in the first step.
It will give you the process name that is listening on that port. If it is Java.exe/Javaw.exe or OpenSSL.exe, then this is not something Microsoft support is going to be able to help with; we will redirect you to the vendor of your 3rd-party application.
If this is the case, you will need to contact those vendors to get those applications configured properly. Enabling verbose SCHANNEL logging may also help you determine which third-party SCHANNEL applications are installed on your servers. Verbose logging will show successful and failing connections, providing the protocol and ciphers being used in addition to the computer the connection is coming from.
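One way to do this is with the documented EventLogging value under the SCHANNEL key, where 7 enables logging of errors, warnings, and informational/success events to the System event log (set it back to the default of 1 when you are done):

# Sketch: enable verbose SCHANNEL event logging; a reboot may be required to fully apply
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" -Name "EventLogging" -Value 7 -Type DWord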
You should also be aware that Intune policy can be leveraged to manage cipher suites as well. These settings may interfere with your SCHANNEL policies and configurations.
I trust you have found this content both illuminating and enjoyable in your efforts to secure your SCHANNEL environment without sacrificing the necessary functionality. Should you encounter any hurdles along the way, please don’t hesitate to reach out to us for assistance. We’re here to support your continued success with Windows. Happy Hunting!
Jim “How I learned to stop worrying and ♥ Crypto” Tierney
Listen to your employees, monitor their engagement, and understand the pulse of your organization better than ever before by using Network Analytics in Viva Engage. Network Analytics provides an at-a-glance overview of your organization’s top engagement trends across the entire network. This includes employee sentiment, cross-community insights, and AI-powered conversation summarization to help you stay up to date with all the activity happening in your network. Network admins and those assigned a corporate communicator role will be able to access these advanced analytics. In order to access Network Analytics, users must have a Viva Suite or Employee Communications and Communities (C&C) license.
Gone are the days of manually searching for the most engaging conversations across your network or trying to tally up the most mentioned themes and hashtags. With Network Analytics, you can see detailed metrics that show you exactly where conversations are taking place, which themes employees are most passionate about, how effective announcements are, and which communities are most active.
Best Practices
Review top themes and top conversations – we’ve made triaging these conversations across your entire organization easier than ever. Now you can deep dive into the conversations that are occurring within your organization and quickly review themes related to the most critical commentary.
Network analytics helps you easily identify themes, trends, and engagement across the network.
Understand the effectiveness of broad communications within your organization by analyzing the announcements breakdown. You can also review which leaders and employees are most active on Engage by reviewing the Frequent Contributors panel. Acknowledge these employees directly from Network Analytics by praising their contributions to the organization.
Finally, if you’d like to review which Communities are implementing best practices, look no further than the popular communities table. Here you can sort communities by those with the most posts, or most active members. Understanding which community rituals are leading to high engagement can be a great way to pass along helpful tips to other Community admins.
Get started today!
To access Network Analytics, select the global analytics entry point (at the top of the web browser) and click on the “Network analytics” tab.
If you cannot see the tab, confirm that you have either the network admin or corporate communicator role assigned to your user profile on Viva Engage. If you need to be assigned as corporate communicator, contact your network admin to help you gain access to the role.
New! Employee retention analysis – we’ll help you understand how employees who use Engage are more likely to be retained at your organization. The Viva Engage employee retention metric in Network Analytics shows the difference in the 28-day employee retention rates of employees who do and don’t use Viva Engage. Learn more about our retention analysis here: Viva Engage Employee Retention – Microsoft Support.
Resources
Watch the recording of the Deep Dive Webinar! Demos and lots of Q&A were shared during the webinar as well!
How is sentiment analysis determined? Sentiment analysis is a Viva Engage premium feature that aggregates data across Viva Engage conversations to surface trends. To understand more, see Sentiment and theme analysis in Viva Engage – Microsoft Support
Who has access to view and manage network analytics? Access to the data in this dashboard is restricted to include only network admins and corporate communicators. These users can change settings via the Engage admin center.
What admin controls are available? Can analytics features be turned off? Yes, we provide the network admin and corporate communicator roles the ability to adjust which analytics features are enabled within the admin center.
What licensing requirements need to be met? Network analytics is only available to Viva Suite or Employee Communications and Communities licensed users.
How often is data refreshed? Analytics are refreshed daily. If you don’t see changes reflected immediately, check analytics the next day.