Throttling and Blocking Email from Persistently Vulnerable Exchange Servers to Exchange Online

As we continue to enhance the security of our cloud, we are going to address the problem of email sent to Exchange Online from unsupported and unpatched Exchange servers. There are many risks associated with running unsupported or unpatched software, but by far the biggest risk is security. Once a version of Exchange Server is no longer supported, it no longer receives security updates; thus, any vulnerabilities discovered after support has ended don’t get fixed. There are similar risks associated with running software that is not patched for known vulnerabilities. Once a security update is released, malicious actors will reverse-engineer the update to get a better understanding of how to exploit the vulnerability on unpatched servers.


Microsoft uses the Zero Trust security model for its cloud services, which requires connecting devices and servers to be provably healthy and managed. Servers that are unsupported or remain unpatched are persistently vulnerable and cannot be trusted, and therefore email messages sent from them cannot be trusted. Persistently vulnerable servers significantly increase the risk of security breaches, malware, hacking, data exfiltration, and other attacks.


We’ve said many times that it is critical for customers to protect their Exchange servers by staying current with updates and by taking other actions to further strengthen the security of their environment. Many customers have taken action to protect their environment, but there are still many Exchange servers that are out of support or significantly behind on updates.


Transport-based Enforcement System


To address this problem, we are enabling a transport-based enforcement system in Exchange Online that has three primary functions: reporting, throttling, and blocking. The system is designed to alert an admin about unsupported or unpatched Exchange servers in their on-premises environment that need remediation (upgrading or patching). The system also has throttling and blocking capabilities, so if a server is not remediated, mail flow from that server will be throttled (delayed) and eventually blocked.


We don’t want to delay or block legitimate email, but we do want to reduce the risk of malicious email entering Exchange Online by putting in place safeguards and standards for email entering our cloud service. We also want to get the attention of customers who have unsupported or unpatched Exchange servers and encourage them to secure their on-premises environments.


Reporting


For years, Exchange Server admins have had the Exchange Server Health Checker, which detects common configuration and performance issues, and collects useful information, including which servers are unsupported or unpatched. Health Checker can even create color-coded HTML reports to help you prioritize server remediation.


We are adding a new mail flow report to the Exchange admin center (EAC) in Exchange Online that is separate from and complementary to Health Checker. It provides details to a tenant admin about any unsupported or out-of-date Exchange servers in their environment that connect to Exchange Online to send email.


Figure 1 below shows a mockup of what the new report may look like when released:


[Figure 1: Mockup of the new mail flow report for out-of-date Exchange servers]


The new mail flow report provides details on any throttling or blocking of messages, along with information about what happens next if no action is taken to remediate the server. Admins can use this report to prioritize updates (for servers that can be updated) and upgrades or migrations (for servers that can’t be updated).


Throttling


If a server is not remediated after a period of time (see below), Exchange Online will begin to throttle messages from it. Exchange Online will issue a retriable SMTP 450 error to the sending server, causing it to queue and automatically retry the message later, which delays delivery. No action is required for the message to be re-sent. An example of the SMTP 450 error is below:


450 4.7.230 Connecting Exchange server version is out-of-date; connection to Exchange Online throttled for 5 mins/hr. For more information see https://aka.ms/BlockUnsafeExchange.


The throttling duration will increase progressively over time. Progressive throttling over multiple days is designed to drive admin awareness and give them time to remediate the server. However, if the admin does not remediate the server within 30 days after throttling begins, enforcement will progress to the point where email will be blocked.


Blocking


If throttling does not cause an admin to remediate the server, then after a period of time (see below), email from that server will be blocked. Exchange Online will issue a permanent SMTP 550 error to the sending server, which generates a non-delivery report (NDR) to the sender. In this case, the sender will need to re-send the message. An example of the SMTP 550 error is below:


550 5.7.230 Connecting Exchange server version is out-of-date; connection to Exchange Online blocked for 10 mins/hr. For more information see https://aka.ms/BlockUnsafeExchange.


Enforcement Stages


We’re intentionally taking a progressive enforcement approach that increases throttling gradually over time and then introduces blocking in increasing stages, culminating in blocking 100% of all non-compliant traffic.


Enforcement actions will escalate over time (e.g., increase throttling, add blocking, increase blocking, full blocking) until the server is remediated: either removed from service (for versions beyond end of life), or updated (for supported versions with available updates).


Table 1 below details the stages of progressive enforcement over time:


[Table 1: Stages of progressive enforcement over time, summarized from the text below]

  Stage 1 (days 0–30):     Report-only; server appears in the out-of-date mail flow report
  Stages 2–4 (days 31–60): Throttling, increasing every 10 days
  Stages 5–7 (days 61–90): Throttling and blocking, with blocking increasing every 10 days
  Stage 8 (day 90+):       Full blocking; no messages accepted


Stage 1 is report-only mode, and it begins when a non-compliant server is first detected. Once detected, the server will appear in the out-of-date report mentioned earlier and an admin will have 30 days to remediate the server.


If the server is not remediated within 30 days, throttling will begin, and will increase every 10 days over the next 30 days in Stages 2-4.


If the server is not remediated within 60 days from detection, then throttling and blocking will begin, and blocking will increase every 10 days over the next 30 days in Stages 5-7.


If, after 90 days from detection, the server has not been remediated, it reaches Stage 8, and Exchange Online will no longer accept any messages from the server. If the server is patched after it is permanently blocked, then Exchange Online will again accept messages from the server, as long as the server remains in compliance. If a server cannot be patched, it must be permanently removed from service.


Enforcement Pause


Each tenant can pause throttling and blocking for up to 90 days per year. The new mail flow report in the EAC allows an admin to request a temporary enforcement pause. This pauses all throttling and blocking and puts the server in report-only mode for the duration specified by the admin (up to 90 days per year).


Pausing enforcement works like a pre-paid debit card, where you can use up to 90 days per year when and how you want. Maybe you need 5 days in Q1 to remediate a server, or maybe you need 15 days.  And then maybe another 15 days in Q2, and so forth, up to 90 days per calendar year.


Initial Scope


The enforcement system will eventually apply to all versions of Exchange Server and all email coming into Exchange Online, but we are starting with a very small subset of outdated servers: Exchange 2007 servers that connect to Exchange Online over an inbound connector type of OnPremises.


We have specifically chosen to start with Exchange 2007 because it is the oldest version of Exchange from which you can migrate in a hybrid configuration to Exchange Online, and because these servers are managed by customers we can identify and with whom we have an existing relationship.


Following this initial deployment, we will incrementally bring other Exchange Server versions into the scope of the enforcement system. Eventually, we will expand our scope to include all versions of Exchange Server, regardless of how they send mail to Exchange Online.


We will also send Message Center posts to notify customers. Today, we are sending a Message Center post to all Exchange Server customers directing them to this blog post. We will also send targeted Message Center posts to customers 30 days before their version of Exchange Server is included in the enforcement system. In addition, 30 days before we expand beyond mail coming in over OnPremises connectors, we’ll notify customers via the Message Center.


Feedback and Upcoming AMA


As always, we want and welcome your feedback. Leave a comment on this post if you have any questions or feedback you’d like to share.


On May 10, 2023 at 9am PST, we are hosting an “Ask Microsoft Anything” (AMA) about these changes on the Microsoft Tech Community. This AMA will be a live, text-based online event with no audio or video, and it gives you the opportunity to connect with us, ask questions, and share feedback. You can register for this AMA here.


FAQs


Which cloud instances of Exchange Online have the transport-based enforcement system?
All cloud instances, including our WW deployment, our government clouds (e.g., GCC, GCCH, and DoD), and all sovereign clouds.


Which versions of Exchange Server are affected by the enforcement system?
Initially, only servers running Exchange Server 2007 that send mail to Exchange Online over an inbound connector type of OnPremises will be affected. Eventually, all versions of Exchange Server will be affected by the enforcement system, regardless of how they connect to Exchange Online.


How can I tell if my organization uses an inbound connector type of OnPremises?
You can use Get-InboundConnector to determine the type of inbound connector in use. For example, Get-InboundConnector | ft Name,ConnectorType will display the type of inbound connector(s) in use.


What is a persistently vulnerable Exchange server?
Any Exchange server that has reached end of life (e.g., Exchange 2007, Exchange 2010, and soon, Exchange 2013), or remains unpatched for known vulnerabilities. For example, Exchange 2016 and Exchange 2019 servers that are significantly behind on security updates are considered persistently vulnerable.


Is Microsoft blocking email from on-premises Exchange servers to get customers to move to the cloud?
No. Our goal is to help customers secure their environment, wherever they choose to run Exchange. The enforcement system is designed to alert admins about security risks in their environment, and to protect Exchange Online recipients from potentially malicious messages sent from persistently vulnerable Exchange servers.


Why is Microsoft only taking this action against its own customers; customers who have paid for Exchange Server and Windows Server licenses?
We are always looking for ways to improve the security of our cloud and to help our on-premises customers stay protected. This effort helps protect our on-premises customers by alerting them to potentially significant security risks in their environment. We are initially focusing on email servers we can readily identify as being persistently vulnerable, but we will block all potentially malicious mail flow that we can.


Will Microsoft enable the transport-based enforcement system for other servers and applications that send email to Exchange Online?
We are always looking for ways to improve the security of our cloud and to help our on-premises customers stay protected. We are initially focusing on email servers we can readily identify as being persistently vulnerable, but we will block all potentially malicious mail flow that we can.


If my Exchange Server build is current, but the underlying Windows operating system is out of date, will my server be affected by the enforcement system?
No. The enforcement system looks only at Exchange Server version information.  But it is just as important to keep Windows and all other applications up-to-date, and we recommend customers do that.


Delaying and possibly blocking emails sent to Exchange Online seems harsh and could negatively affect my business. Can’t Microsoft take a different approach to this?
Microsoft is taking this action because of the urgent and increasing security risks to customers that choose to run unsupported or unpatched software. Over the last few years, we have seen a significant increase in the frequency of attacks against Exchange servers. We have done (and will continue to do) everything we can to protect Exchange servers but unfortunately, there are a significant number of organizations that don’t install updates or are far behind on updates, and are therefore putting themselves, their data, as well as the organizations that receive email from them, at risk. We can’t reach out directly to admins that run vulnerable Exchange servers, so we are using activity from their servers to try to get their attention. Our goal is to raise the security profile of the Exchange ecosystem.


Why are you starting only with Exchange 2007 servers, when Exchange 2010 is also beyond end of life and Exchange 2013 will be beyond end of life when the enforcement system is enabled?
Starting with this narrow scope of Exchange servers lets us safely exercise, test, and tune the enforcement system before we expand its use to a broader set of servers. Additionally, as Exchange 2007 is the most out-of-date hybrid version, it doesn’t include many of the core security features and enhancements in later versions. Restricting the most potentially vulnerable and unsafe server version first makes sense.


Does this mean that my Exchange Online organization might not receive email sent by a 3rd party company that runs an old or unpatched version of Exchange Server?
Possibly. The transport-based enforcement system initially applies only to email sent from Exchange 2007 servers to Exchange Online over an inbound connector type of OnPremises. The system does not yet apply to email sent to your organization by companies that do not use an OnPremises type of connector. Our goals are to reduce the risk of malicious email entering Exchange Online by putting in place safeguards and standards for email entering the service and to notify on-premises admins that the Exchange server their organization uses needs remediating.


How does Microsoft know what version of Exchange I am running?  Does Microsoft have access to my servers?
No, Microsoft does not have any access to your on-premises servers. The enforcement system is based on email activity (e.g., when the on-premises Exchange Server connects to Exchange Online to deliver email).


The Exchange Team

How to perform Error Analysis on a model with the Responsible AI dashboard (Part 4)

Traditional performance metrics for machine learning models focus on calculations based on correct vs. incorrect predictions. Aggregated accuracy scores or average error loss show how good the model is overall, but they do not reveal the conditions causing model errors. While overall performance metrics such as classification accuracy, precision, recall, or MAE scores are good proxies to help you build trust with your model, they are insufficient for locating where in the data the model is inaccurate. Often, model errors are not distributed uniformly across your underlying dataset. For instance, if your model is 89% accurate, does that mean it is 89% fair as well?


 


Model fairness and model accuracy are not the same thing, and both must be considered. Unless you take a deep dive into the model's error distribution, it is challenging to discover the regions of your data where the model fails 42% of the time (see the red region in the diagram below). Errors concentrated in certain data groups can lead to fairness or reliability issues. To illustrate, a data group with a high number of errors may contain sensitive features such as age, gender, disabilities, or ethnicity, and further analysis could reveal that the model has a higher error rate for individuals with disabilities than for those without. So, it is essential to understand where the model is performing well or poorly, because a data region with a high number of inaccuracies may turn out to be an important data demographic you cannot afford to ignore.


 


[Figure: Model error distribution across the dataset, with a red region where the model fails 42% of the time]


 


This is where the error analysis component of the Azure Machine Learning Responsible AI (RAI) dashboard helps: it identifies a model's error distribution across its test dataset. In the last tutorial, we created an RAI dashboard with the diabetes hospital readmission classification model we trained. In this tutorial, we are going to explore how data scientists and AI developers can use Error Analysis to identify the error distribution in the test records and discover where the model has a high error rate. In addition, we'll learn how to create cohorts of data to investigate why a model is performing poorly in some cohorts and not others. Lastly, we will utilize the two methods the component provides for error identification: the Tree map and the Heat map.
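If you prefer to script the dashboard with the open-source SDK rather than build it in Azure ML, a minimal local sketch looks roughly like this. The dataset, model, and column name below are stand-ins, not the tutorial's artifacts:

    from sklearn.datasets import load_breast_cancer  # stand-in dataset
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from responsibleai import RAIInsights
    from raiwidgets import ResponsibleAIDashboard

    # Load a toy classification dataset (stand-in for the diabetes data).
    df = load_breast_cancer(as_frame=True).frame
    train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)

    # Train any classifier that exposes predict/predict_proba.
    model = RandomForestClassifier(random_state=0).fit(
        train_df.drop(columns="target"), train_df["target"]
    )

    # Build the insights object and enable the error analysis component.
    rai = RAIInsights(model=model, train=train_df, test=test_df,
                      target_column="target", task_type="classification")
    rai.error_analysis.add()
    rai.compute()

    # Launch the dashboard, which includes the Tree map and Heat map views.
    ResponsibleAIDashboard(rai)

This is a sketch under those assumptions; the dashboard you open this way exposes the same Tree map and Heat map interactions walked through below.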


 


Prerequisites


 


This is Part 4 of a tutorial series. You’ll need to complete the prior tutorial(s) below:


 




 


How to interpret Error Analysis insights


 


Before we start our analysis, let’s first understand how to interpret the data provided by the Tree map. The RAI dashboard illustrates how model failure is distributed across various cohorts with a tree visualization. The root node displays the total number of incorrect predictions from a model and the total test dataset size. The nodes are groupings of data (aka cohorts) that are formed by splits from feature conditions (e.g., “Time_In_Hospital < 5” vs “Time_In_Hospital ≥ 5”). Hovering the mouse over each node on the tree reveals the following information for the selected feature condition:


 


[Figure: Error tree nodes annotated with incorrect vs correct predictions, error rate, and error coverage]


 


 



  • Incorrect vs Correct predictions: The number of incorrect vs correct predictions for the datapoints that fall in the node.

  • Error Rate: represents the frequency of errors in the node. The shade of red shows what percentage of the node's datapoints received erroneous predictions; the darker the red, the higher the error rate.

  • Error Coverage: represents how many of your model's overall errors occur in the given node. The fullness of the node shows its error coverage; the fuller the node, the higher its error coverage. (A small worked example follows this list.)
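To make the distinction between the two metrics concrete, here is a small sketch with hypothetical node counts (only the 168 total errors comes from this model's report; the node itself is made up):

    # Hypothetical tree-map node: 100 datapoints, 40 of them misclassified.
    node_total = 100
    node_incorrect = 40

    # Total errors the model makes across the entire test set.
    total_errors = 168

    error_rate = node_incorrect / node_total        # 0.40 -> 40% of this node is wrong
    error_coverage = node_incorrect / total_errors  # ~0.24 -> ~24% of all errors land here

    print(f"Error rate: {error_rate:.1%}, error coverage: {error_coverage:.1%}")

A node can therefore have a modest error rate but a large error coverage (or vice versa), which is why the dashboard reports both.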


 


Identifying model errors from a Tree Map


 


Now let’s start our analysis. The tree map displays how model failure is distributed across various data cohorts. For our diabetes hospital readmission model, one of the first things we observe from the root node is that out of the 994 total test records, the error analysis component found 168 errors while evaluating the model. 


 


[Figure: Tree map of the diabetes hospital readmission model, showing 168 errors across 994 test records]


 


The tree map provides visual indicators to make locating the nodes or tree path with the highest error rate quicker. In the diagram above, the tree path with the darkest red color ends in a leaf node on the bottom right-hand side of the tree. To select the path leading up to that node, double-click on the leaf node. This highlights the path and displays the feature condition for each node in it. Since this tree path contains the nodes with the highest error rate, it is a good candidate for creating a cohort from the data represented in the path, so we can later diagnose the root cause behind the errors.


 


[Figure: Selected tree path with the highest error rate]


 


According to the tree path with the highest error rate, diabetes patients who have prior hospitalizations and take between 11 and 22 medications are the cohort of patients where the model makes the highest number of incorrect predictions. To investigate what's causing the high error rate for this group of patients, we will create a cohort for them.


 


Cohort # 1: Patients with number of Prior_Inpatient > 0 days and number of medications between 11 and 22


 


To save the selected path for further investigation, use the following steps:


 



  • Click on the "Save as a new cohort" button on the upper right-hand side of the error analysis component. Note: the dashboard displays the "Filters" with the feature conditions in the path selection: num_medications > 11.50 and <= 21.50, prior_inpatient > 0.00.

  • We'll name the cohort: "Err: Prior_Inpatient >0; Num_meds >11.50 & <= 21.50".


 


[Figure: Saving the selected tree path as a new cohort]
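As an aside, if you launch the dashboard from the SDK, cohorts like this one can also be pre-built in code. A sketch, assuming the raiwidgets cohort helpers and the feature names shown in the filters above:

    from raiwidgets import ResponsibleAIDashboard
    from raiwidgets.cohort import Cohort, CohortFilter, CohortFilterMethods

    # Mirror the saved tree path: prior_inpatient > 0 and
    # num_medications in (11.50, 21.50].
    err_cohort = Cohort(name="Err: Prior_Inpatient >0; Num_meds >11.50 & <= 21.50")
    err_cohort.add_cohort_filter(CohortFilter(
        method=CohortFilterMethods.METHOD_GREATER, arg=[0], column="prior_inpatient"))
    err_cohort.add_cohort_filter(CohortFilter(
        method=CohortFilterMethods.METHOD_RANGE, arg=[11.50, 21.50], column="num_medications"))

    # `rai` is the RAIInsights object from the earlier sketch; pre-built
    # cohorts are passed in when the dashboard is launched.
    ResponsibleAIDashboard(rai, cohort_list=[err_cohort])

Saving the cohort through the UI, as this tutorial does, produces the same result.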


 


As advantageous as it is to find out why the model is performing poorly, it is equally important to figure out what's causing the model to perform well. So, we'll need to find the tree path with the fewest errors to gain insights into why the model performs better in that cohort than in others. The leaf node on the far left-hand side of the tree is the path of the tree with the fewest errors.


 


[Figure: Selected tree path with the lowest error rate]


 


The tree reveals that diabetic patients with no prior hospitalization, 7 or fewer other health conditions, and 57 or fewer lab procedures are the cohort with the lowest model errors. To analyze the factors contributing to this cohort performing better than others, we'll create a cohort for this group of patients.


 


Cohort # 2: Patients with number of Prior_Inpatient = 0 days and number of diagnoses ≤ 7 and number of lab procedures ≤ 57


 


For comparison, we will create a cohort for the feature conditions with the lowest error rate. To achieve this, complete the following steps:


  • Double-click on the leaf node to select the rest of the nodes in the tree path.

  • Click on the "Save as a new cohort" button to save the selected path in a cohort. Note: the dashboard displays the "Filters" with the feature conditions in the path selection: num_lab_procedures <= 56.50, number_diagnoses <= 6.50, prior_inpatient <= 0.00.

  • We'll name the cohort: "Prior_Inpatient = 0; num_diagnoses <= 6.50; lab_procedures <= 56.50".


When we start investigating model inaccuracies, comparing the different features between the top- and bottom-performing cohorts will be useful for improving our overall model quality (we'll see this in the next tutorial, Part 5. Stay tuned).


 


Discovering model errors from the Feature list


 


One of the advantages of using the RAI dashboard to debug a model is the "Feature List" pane: a list of the feature names in the test dataset that contribute to errors (i.e., that are included in the creation of your error tree map). The list is sorted by each feature's contribution to the errors: the higher a feature is on the list, the higher its contribution to your model errors. Note: this is not to be confused with the "Feature Importance" section described later in tutorial Part 7, which explains which features contributed most to the model's predictions. The sorted list is vital for identifying the problematic features that are hurting the model's performance. It is also an opportunity to check whether sensitive features such as age, race, gender, political view, or religion are among the top error contributors, which is an indicator that your model may have potential fairness issues.


 


[Figure: The Feature List pane, sorted by error contribution]


 


 


In our Diabetes Hospital Readmission model, the “Feature List” indicates the following features to be among the top contributors of the model’s errors:


 



  • Age

  • num_medications

  • medicare

  • time_in_hospital

  • num_procedures

  • insulin

  • discharge_destination


 


Because "Age" is a sensitive feature, we must check whether there is potential age bias behind the model's high inaccuracy on this feature. In addition, you may have noticed that not all the features on this list appear on the Tree map nodes. You can control how granular or high-level the tree map's display of error contributors is from the "Feature List" pane:


 



  • Maximum depth: controls how tall the error tree should be, meaning the maximum number of nodes that can be displayed from the root node to a leaf node (for any branch).

  • Number of leaves: controls the total number of leaf nodes in the error tree (e.g., 21 is the number of features highlighted on the bar to show the level of error contribution from the list).

  • Minimum number of samples in one leaf: controls the threshold for the minimum number of data samples required to create one leaf.


 


Try adjusting the control levels for the minimum number of samples in one leaf field to different values between 1 and 100 to see how the tree expands or shrinks. If you want to see a more granular breakdown of the errors in your dataset, you should reduce the level for the minimum number of samples in one leaf field.
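These three controls also map to parameters on the error analysis component when the dashboard is built through the SDK. A sketch, assuming the open-source responsibleai package and the `rai` object from the earlier sketch; the values here are hypothetical:

    # Tune these the same way you would the UI controls above.
    rai.error_analysis.add(
        max_depth=4,           # maximum depth of the error tree
        num_leaves=21,         # total number of leaf nodes in the tree
        min_child_samples=20,  # minimum number of samples required per leaf
    )
    rai.compute()

Smaller values of min_child_samples yield a more granular tree, mirroring the slider behavior described above.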


 


Investigating Model Errors using the Heat map


 


The Heat map is another visualization functionality that enables users to investigate the error rate through filtering by one or two features to see where most of the errors are concentrated. This helps you determine which areas to drill down further so you can start forming hypotheses of where the errors are originating.


 


From the Feature List, we saw that "Age" was the top contributor to the model's inaccuracies. So, we're going to use the Heat map to see which cohorts within the "Age" feature are driving the high model errors.


 


[Figure: Heat map of model errors by Age]


 


Under the Heat map tab, we'll select "Age" in the "Rows: Feature 1" drop-down menu to see its influence on the model's errors. The dashboard has built-in intelligence to divide the feature into cells covering the possible data cohorts for the Age feature (e.g., "Over 60 years", "30–60 years", and "30 years or younger"). By hovering over each cell, we can see the number of correct vs incorrect predictions, the error coverage, and the error rate for the data group represented in the cell. Here we see:


 



  • The cell with "Over 60 years" has 536 correct and 126 incorrect model predictions. The error coverage is 73.81% and the error rate is 18.79%. This means that of the 168 total incorrect predictions the model made on the test data, 126 came from patients with "Age == Over 60 years".

  • Even though the error rate of 18.79% is low, an error coverage of 73.81% is very high. That means the majority of the model's inaccuracies come from data where patients are older than 60 years, which is problematic.


 


[Figure: Heat map metrics for the "Over 60 years" cell]


 



  • The cell with "30–60 years" has 273 correct and 25 incorrect model predictions. The error coverage is 25.60% and the error rate is 13.61%. Even though patients with "Age == 30–60 years" have a very low error rate, the error coverage of 25.60% means a quarter of all the model's errors fall here, which is an issue.

  • The cell with "30 years or younger" has 17 correct and 1 incorrect model predictions. The error coverage is 0.60% and the error rate is 5.56%. Having 1 incorrect prediction is insignificant, and both the error coverage and error rate are low. It's safe to say the model performs very well in this cohort; however, we must also consider that its total of 18 datapoints is a very small sample size.


Since our observation shows that Age plays a significant role in the model’s erroneous predictions, we are going to create cohorts for each age group for further analysis in the next tutorial.


 


Cohort #3: Patients with “Age == Over 60 years”


 


Similar to the Tree map, create a cohort from the Heat map by taking the following steps:


 



  1. Click on the “Over 60 years” cell. You’ll see a blue border around the square cell.

  2. Next, click on the "Save as a new cohort" button at the top right-hand corner of the Error Analysis section. A new pane pops up with a summary of the new cohort, including error coverage, error rate, correct/incorrect predictions, total data size, and the data feature filters.

  3. In the “Cohort name” box, enter “Age==Over 60 years”.

  4. Then click on the “Save” button to create the cohort.

  5. To deselect the cell, click on it again or click on the “Clear all” button.


 


[Figure: Saving a heat map cell as a new cohort]


 


Repeat the steps to create a cohort for each of the other two Age cells: 



  • Cohort #4: Patients with "Age == 30–60 years"

  • Cohort #5: Patients with “Age <= 30 years”


If Age plays a major role in why the model is performing poorly, we can conduct further analysis to better understand this cohort and evaluate whether it has an impact on patients returning to the hospital within 30 days or not.


 


Managing cohorts


 


To view or manage all the cohorts you've created, click on the "Cohort settings" gear icon in the upper right-hand corner of the Error Analysis section. In addition, the RAI dashboard creates a cohort called "All data" by default, which contains the entire test dataset used to evaluate the model.


 


[Figure: The cohort list in Cohort settings]


 


Conclusion


 


As we have seen from using the Tree map, Feature List, and Heat map, the RAI dashboard provides multiple avenues for identifying the features causing a model's errors. However, simply knowing which features are causing errors is not enough. When debugging a model, it is helpful for data scientists and AI developers to understand the number and magnitude of errors associated with a feature. The dashboard supports this process of elimination by pinpointing the error regions: it provides the feature conditions to focus on, along with the number of correct/incorrect predictions, error coverage, and error rates. This helps in measuring the influence that errors under a feature condition have on the overall model errors.


 


Discovering the correlations and dependencies between features helps in creating cohorts of data to investigate. Along with exploring the cohorts with the most errors, using the "Feature List" in conjunction with our investigations helps in understanding exactly which features are problematic. Since we found "Age" to be a top contributor on the "Feature List", and the Heat map also shows a high error coverage for diabetic patients over 60 years of age, we can start forming a hypothesis that there may be an age bias in the model. We also have to consider that, given our use case, age plays a role in diabetic cases. Next, the Tree map enabled us to create data cohorts where the model has high vs low inaccuracies. We found that prior hospitalization was one of the feature conditions in the cohort with the highest error rate, while no prior hospitalization was one of the feature conditions in the cohort with the lowest error rate.


 


As a guide, use error analysis when you need to:


 



  • Gain a deep understanding of how model failures are distributed across a dataset and across several input and feature dimensions.

  • Break down the aggregate performance metrics to automatically discover erroneous cohorts in order to inform your targeted mitigation steps.


Awesome! Now…we’ll move on to the “Model Overview” section of the dashboard to start analyzing our cohorts and diagnosing issues with our model.


 


Stay tuned for Part 5 of the next tutorial…


 

Windows monthly updates explained



Windows updates keep you protected and productive in different ways, and we continue to optimize the update experience. Whether you’re an IT administrator or a general user, Windows monthly updates provide you with the security fixes to help keep your devices protected—as well as enhancements based on your feedback. Monthly updates are cumulative and include all previously released fixes to guard against fragmentation of the operating system (OS). This contributes to the reliability and quality of the Windows platform.


This post summarizes the different types of monthly updates and shares insights on how we’ve optimized our approach to Windows servicing and delivery.



Monthly security update release


For many of you, Update Tuesday (also referred to as “Patch Tuesday”) is a regular part of Windows servicing. Published on the second Tuesday of each month, our security update releases are cumulative. That is, they include both new and previously released security fixes along with non-security content introduced in the prior month’s optional non-security preview release (see below). These updates help keep Windows devices secure and compliant by deploying stability fixes and addressing security vulnerabilities.


 









Note: People tend to use “B release,” quality update, security update, and LCU interchangeably.



Monthly security updates are mandatory and are available through our standard channels, which include Windows Update, Windows Update for Business, Microsoft Intune, Microsoft Configuration Manager, Windows Server Update Services (WSUS), and the Microsoft Update Catalog.


Optional non-security preview release


You’ve got options with optional non-security preview releases. Available the fourth week of the month, these production-quality updates are released ahead of the planned security update release for the following month. In addition, new features, like Search highlights, may initially be deployed in the prior month’s optional non-security preview release, then ship broadly in the following month’s security release.


 









Note: The term “optional non-security preview release” now replaces what we used to call either a “C” or “D” release to align with the current process.



Optional non-security preview releases are also cumulative and are only offered for the most recent supported versions of Windows.


Starting in April 2023, we now target optional non-security preview releases for the fourth week of the month. We have found this to be the optimal time for us to publish and for you to consume these updates. That’s two weeks after your latest monthly security update and about two weeks before you’ll see these features become part of the next mandatory cumulative update. We’re excited for this improvement as it is meant to optimize the validation of payloads, improve consistency, and enhance the predictability of your testing, update, and upgrade experience.


To access optional non-security preview releases, navigate to Settings > Windows Update > Advanced options > Optional updates, select from the available updates, and click Download and install.


Out-of-band releases


Out-of-band (OOB) releases may be provided to fix a recently identified issue or vulnerability. They are used in atypical cases, such as security vulnerabilities or a quality issue, when devices should be updated immediately instead of waiting for the next monthly quality update release. Out-of-band releases are cumulative, meaning that they include the updates from the previous security and/or non-security release, as well as the additional fix.


Continuous innovation in Windows 11


Beginning with Windows 11, version 22H2, new features and enhancements are delivered to the most recently released in-market version of Windows 11 more frequently using servicing technology. As with all updates, we utilize a phased and measured approach in rolling out continuous innovation to the Windows 11 ecosystem.


Experiences may be introduced in an optional non-security preview release prior to being made available broadly via a monthly security update or via Controlled Feature Rollout (CFR) technology. For more information on how to control when select features introduced via servicing are released to the devices you manage, see Commercial control for continuous innovation.


Recommendations


As a general practice, we recommend that you update your devices as soon as possible, whether you’re a general user or an IT professional. For IT admins, we also recommend taking advantage of the optional non-security preview releases to internally validate releases ahead of the following month’s security update release.


To help manage updates across your organization, bookmark these resources:




These pages are available in multiple languages and refer to each release by a unique KB number.


IT admins may validate fixes and features in a preview release by leveraging the Windows Insider Program for Business or via the Microsoft Update Catalog.


If you are a Microsoft Partner or registered commercial customer, you can also take advantage of the Security Update Validation Program (SUVP). It’s a quality assurance testing program designed for the monthly security update release. As a SUVP partner, you can start testing these security updates three weeks prior to Update Tuesday and provide us with feedback regarding usability, bug reports, test reports, etc.


For additional tips, read Ensuring a successful Windows quality update experience.




Continue the conversation. Find best practices. Bookmark the Windows Tech Community and follow us @MSWindowsITPro on Twitter. Looking for support? Visit Windows on Microsoft Q&A.

“Topics is Engaged” – The Intrazone podcast

Viva Topics in Viva Engage doubles the impact and accessibility of the knowledge at your fingertips – and of how you pay it forward to others.


 


On today's episode, we hear from Raj Jain (Principal Product Manager on the Viva Engage and Answers team at Microsoft) about all things Viva Topics – specifically, the role of Viva Topics within your Viva Engage community discussions, questions, and announcement posts. The real value is a built-in knowledge management system that balances and refines the length of your internal communications without sacrificing the depth you pass along to each person who reads them.


 


Two Viva apps, one great outcome. 


 


The Intrazone, episode 94:


https://html5-player.libsyn.com/embed/episode/id/26274723/height/50/theme/standard/thumbnail/no/direction/backward/menu/no/


Subscribe to The Intrazone podcast + show links and more below.


 


The Intrazone guest: Raj Jain – Principal Product Manager, Viva Engage and Answers team.


Links to important on-demand recordings and articles mentioned in this episode:  


 



 


Subscribe today!


Thanks for listening! If you like what you hear, we’d love for you to Subscribe, Rate and Review on iTunes or wherever you get your podcasts.


 


Be sure to visit our show page to hear all episodes, access the show notes, and get bonus content. And stay connected to the SharePoint community blog where we’ll share more information per episode, guest insights, and take any questions or suggestions from our listeners and SharePoint users (TheIntrazone@microsoft.com).



Intrazone Links



+ Listen to other Microsoft podcasts at aka.ms/microsoft/podcasts.


 


The Intrazone, a show about the Microsoft 365 intelligent intranet (aka.ms/TheIntrazone).


 

Transform your business with automated insights & optimized workflows using Azure OpenAI GPT-3

OpenAI's GPT-3 AI applications have become a buzzword in the industry. If you're looking to boost your business operations and maximize productivity but are encountering technical barriers and resource limitations, the app highlighted in this article may be your answer.
It smoothly integrates Azure OpenAI into various business workflows, showcasing a spectrum of AI-powered demos. These demos highlight the technology's capabilities and can help streamline your business operations while optimizing productivity.


 




Let’s build your business process with OpenAI


Resources 




  • Azure OpenAI – Summarization & gain insights



  • Power Apps – To build front end

  • Power Automate – To build the process


Getting Started


1. Azure OpenAI – Summarization & gain insights


2. Create 2 SharePoint lists



  • Prompts – for your custom questions (change the questions & prompt types per your business requirements)

  • Conversation insights – to save the OpenAI-generated insights from the email text (change the column names per your business requirements)




 


Scenario 1: Let's start with conversation insights. Assume you have an email-enabled conversation and you want to get some insights from the email text.


Power Automate is used to generate the insights, and the results are then saved in SharePoint.

3. Create a Power Automate flow




  • Here are the steps to create the flow:


    1. Go to https://make.powerautomate.com/ (or flow.microsoft.com, which redirects there) and click on "My flows".

    2. Click "New flow" and name it "OpenAI-Insights".

    3. The flow uses several variables so that it can run multiple prompts at once and save the insights from one conversation in the SharePoint list.

    4. Build the entire flow according to your requirements.

    5. When a new email arrives – choose the email trigger if an email conversation is the source you want to gain insights from. Otherwise, use whichever trigger matches where your conversations are stored, e.g., Teams chat, SharePoint, blob storage, or any other storage solution.

    6. HtmlToText – convert the email body to plain text using the Html to text connector.

    7. Get items – use the SharePoint connector to get the prompts list to run against your email text for additional insights.

    8. Initialize variables – one variable is initialized per prompt.

    9. Initialize a variable called Summary to store the output.

    10. Apply to each – OpenAI step: loop through each prompt question against the original conversation text to get the insight, and save the result via a Switch statement in the corresponding initialized variable. (A Python equivalent of this OpenAI call is sketched just after this list.)
      HTTP connector

      Method: POST
      URI: https://resourcename.openai.azure.com/openai/deployments/davinci003/completions?api-version=2022-12-01
      Headers: content-type: application/json
      api-key: [Azure Portal -> OpenAI resource -> Keys & Endpoints]
      Body:
      {
        "prompt": @{variables('promptPhrase')},
        "max_tokens": 1000,
        "temperature": 1
      }

      ParseOpenAIOutput – parse the response from the OpenAI HTTP output. Click "Generate from sample" in the Parse JSON step and paste the sample JSON below to parse the response from the HTTP output:


      {
        "body": {
          "id": "cmpl-xxxxxxx",
          "object": "text_completion",
          "created": 1678909613,
          "model": "text-davinci-003",
          "choices": [
            {
              "text": "\nThe main reason of the conversation is to give credit to the travel company for their gracious refund of the cost of the no-show.",
              "index": 0,
              "finish_reason": "stop",
              "logprobs": null
            }
          ],
          "usage": {
            "completion_tokens": 27,
            "prompt_tokens": 91,
            "total_tokens": 118
          }
        }
      }


      After parsing, loop through the choices array and assign the text to the variable:
      Apply to each action – select "choices" from the Parse JSON step output as the array property, then set the output variable "Summary".

      Switch: it has 6 case actions (one per prompt) to set each variable based on each HTTP POST call.


      (End of the Apply to each – OpenAI step #10: loop through each prompt question, call the OpenAI endpoint to get insights, parse the response, and save it in the variable for that prompt.)


    11. Create item – once all the variables are set, create an entry in the SharePoint Conversation insights list with the original text and the additional insights.




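For reference, here is a Python sketch of the same completions call the HTTP action above makes. The resource name, deployment name, API key, and prompt are placeholders you would replace; the endpoint shape and body mirror the flow step:

    import requests

    RESOURCE = "resourcename"   # your Azure OpenAI resource name (placeholder)
    DEPLOYMENT = "davinci003"   # your model deployment name (placeholder)
    API_KEY = "<Azure Portal -> OpenAI resource -> Keys & Endpoints>"

    url = (f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
           f"{DEPLOYMENT}/completions?api-version=2022-12-01")
    headers = {"Content-Type": "application/json", "api-key": API_KEY}
    body = {
        "prompt": "What is the main reason of this conversation?\n\n<email text here>",
        "max_tokens": 1000,
        "temperature": 1,
    }

    response = requests.post(url, headers=headers, json=body, timeout=60)
    response.raise_for_status()

    # The completion text lives in choices[0].text, as in the sample JSON above.
    print(response.json()["choices"][0]["text"].strip())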


How to utilize the insights in a process
A Power Apps app can be created over the SharePoint list to build a business process around the insights OpenAI generates for each customer conversation:
Dashboard -> Details screen -> Process each conversation with insights
https://learn.microsoft.com/en-us/power-apps/maker/canvas-apps/app-from-sharepoint



Dashboard – a vertical gallery in Power Apps with the SharePoint list 'Conversation Insights' as its data source.


Item details page with OpenAI insights to accelerate customer service – a display form in Power Apps.


 




 


Stay tuned for more exciting blog content as we explore various potential scenarios, such as effortlessly extracting text from documents, audio, and video files to generate valuable insights.