Automatic extension upgrade now provides high availability to Arc-enabled servers during upgrades

This article is contributed. See the original author and article here.

The Azure Arc team is excited to announce the general availability of automatic VM extension upgrades for Azure Arc-enabled servers. VM extensions let customers easily add capabilities to their Azure Arc-enabled servers, ranging from collecting log data with Azure Monitor, to extending your security posture with Azure Defender, to deploying a hybrid runbook worker for Azure Automation. Over time, these VM extensions are updated with security enhancements and new functionality. Maintaining high availability of these services during upgrades can be a challenging, manual task, and the complexity only grows as the scale of your service increases. 


 


With automatic VM extension upgrades, extensions are automatically upgraded by Azure Arc whenever a new version of an extension is published. Automatic extension upgrade is designed to minimize service disruption to workloads during upgrades, even at high scale, and to automatically protect customers against zero-day and critical vulnerabilities.  


 


How does this work?


Gone are the days of manually checking for and scheduling updates to the VM Extensions used by your Azure Arc-enabled servers. When a new version of an extension is published, Azure will automatically check to see if the extension is installed on any of your Azure Arc-enabled servers. If the extension is installed, and you’ve opted into automatic upgrades, your extension will be queued for an upgrade.


The upgrades across all eligible servers are rolled out in multiple iterations, where each iteration contains a subset of servers (about 20% of all eligible servers). Each iteration has a randomly selected set of servers and can contain servers from one or more Azure regions. During the upgrade, the latest version of the extension is downloaded to each server, the current version is removed, and the latest version is installed. Once all the extensions in the current phase are upgraded, the next phase begins. If an upgrade fails on any server, a rollback to the previous stable extension version is triggered immediately: the failed extension is removed and the last stable version is reinstalled. The rolled-back server is then included in the next phase to retry the upgrade. You’ll see an event in the Azure Activity Log when an extension upgrade is initiated.
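The phased rollout described above can be sketched as a small simulation. This is purely illustrative: the real Azure Arc batching is more sophisticated (for example, batches are region-aware), and the function names and retry limit here are our own assumptions, not the service's implementation.

```python
import math
import random

def plan_rollout(servers, batch_fraction=0.2):
    """Partition eligible servers into rollout iterations of ~20% each.
    Illustrative only: Azure's real batching also mixes regions."""
    servers = servers[:]
    random.shuffle(servers)  # each iteration is a randomly selected subset
    batch_size = max(1, math.ceil(len(servers) * batch_fraction))
    return [servers[i:i + batch_size] for i in range(0, len(servers), batch_size)]

def run_rollout(servers, upgrade, max_phases=20):
    """Upgrade one batch per phase; a server whose upgrade fails is rolled
    back and retried in the next phase, mirroring the behavior described above."""
    pending = plan_rollout(servers)
    retry, upgraded, phase = [], [], 0
    while (pending or retry) and phase < max_phases:
        batch = retry + (pending.pop(0) if pending else [])
        retry = []
        for server in batch:
            if upgrade(server):
                upgraded.append(server)
            else:
                retry.append(server)  # rolled back; retried next phase
        phase += 1
    return upgraded, retry
```

With ten servers and a 20% batch fraction, this produces five phases, plus an extra phase for any rolled-back servers.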


 


How do I get started?


No user action is required to enable automatic extension upgrade. When you deploy an extension to your server, automatic extension upgrades are enabled by default. All your existing ARM templates, Azure Policies, and deployment scripts will honor this default. You can, however, opt out during extension installation or at any time afterward. 
 


After an extension is installed, you can verify whether it is enabled for automatic upgrade by checking the “Automatic upgrade status” column in the Azure portal. You can also use the Azure portal to opt in or out of automatic upgrades by selecting the extensions with the checkboxes and then clicking the “Enable Automatic Upgrade” or “Disable Automatic Upgrade” button. 


 




 


You can also use the Azure CLI and Azure PowerShell to view the automatic extension upgrade status and to opt in or out. You can learn more in our Azure documentation.


 


What extensions & regions are supported?


A limited set of extensions is currently supported for automatic extension upgrade. Extensions not yet supported will show “Not supported” under the “Automatic upgrade status” column. Refer to the Azure documentation for the complete list of supported extensions.


All public Azure regions are currently supported: Arc-enabled servers connected to any public Azure region are eligible for automatic upgrades. 


 


Upcoming enhancements


We will gradually add support for many more of the extensions available on Arc-enabled servers. 

Making Search Better Within Microsoft


The Challenge


Five years ago, employee satisfaction with finding information within the company was very low. It was the lowest-rated IT service among all those we surveyed. Related surveys by other teams supported this; for instance, our software engineers rated “finding information” as one of the most wasteful, frustrating activities in their jobs, costing the company thousands of person-years of productivity.


 


A project team was formed to improve this. In the years since we have pursued:



  • Improving search result relevance

  • Improving search content completeness

  • Addressing content quality issues


 


The Microsoft Search Environment


Microsoft has more than 300,000 employees working around the globe, and collectively, our employees use or access many petabytes of content as they move through their workday. Within our employee base, there are many different personas with widely varying search interests, using hundreds of content sources. Those content sources can be file shares, Microsoft SharePoint sites, documents and other files, and internal websites. Our employees also frequently access external websites, such as HR partners’ websites.


 




 


 


We began with a user-satisfaction survey net score of 87 (on a scale of 1-200, with 200 being perfect). We have since reached a score of 117; our goal is 130+.


 


What We’ve Done


Core to our progress has been:



  1. Understanding the needs of the different personas around the company. At Microsoft, personas are commonly clustered based on three factors: their organization within the company, their profession, and their geographic location. For example, a Microsoft seller working in Latin America has different search interests than an engineer working in China.

    1. This has resulted in targeting bookmarks to certain security groups.

    2. This has led to outreach to certain groups and connecting with efforts they had underway to build a custom search portal or improve content discoverability.




 



  2. Understanding typical search behavior. For instance, the diagram below shows that a relatively small number of search terms accounts for a large portion of the search activity.


[Figure: search term frequency distribution]




    1. We ensure bookmarks exist for most of the high frequency searches.

    2. We look for commonalities in low frequency searches for potential content to connect in.
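The head-heavy distribution described above lends itself to a simple frequency analysis. The sketch below is an illustration, not our production tooling; the function name and coverage threshold are assumptions. It finds the smallest set of top search terms covering a given share of all searches, which are natural bookmark candidates.

```python
from collections import Counter

def bookmark_candidates(search_log, coverage=0.5):
    """Return the smallest set of most frequent search terms that together
    account for at least `coverage` of all searches. Because the term
    distribution is head-heavy, this set is usually very small."""
    counts = Counter(search_log)
    total = sum(counts.values())
    candidates, covered = [], 0
    for term, n in counts.most_common():
        candidates.append(term)
        covered += n
        if covered / total >= coverage:
            break
    return candidates
```

For example, if half of all searches are for a single term, that one term alone is returned as the candidate set at 50% coverage.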



 



  3. Improving content quality. This has ranged from deleting old content to educating content owners on the most effective ways to add metadata to their content so it ranks better in search results. As part of our partnership with this community, we provide reporting on measurable aspects of content quality. We are in the early stages of pursuing quality improvement, with much to do in building a community across the company, measuring, and enabling metadata.




 




    1. For those site owners actively using this reporting, we have seen a decrease of up to 70% in content with no recent updates.




  4. Utilizing improvements delivered in the product, from improved relevance ranking to UX options like custom filters.

    1. We have seen steady improvement in result ranking.

    2. We also take advantage of custom filters and custom result KQL.

    3. We use Viva Topics. Topics now receive the most clicks after Bookmarks.





  5. Making our search coverage more complete. Whether it’s via bookmarks or connectors, there are many ways of making the search experience feel like it covers the entire company.

    1. We currently have 7 connections, one of which is custom built and brings in 10 different content sources. This content is clicked on in 5% of searches on our corporate SharePoint portal.

    2. About half of our bookmarks (~600) point to URLs outside of the corporate boundary, such as third-party hosted services.





  6. Analytics. Using SharePoint extensions, we capture all search terms and click actions on our corporate portal’s search page. We’ve used these extensively in deciding what actions to take. The sample below is a report on bookmarks and their usage. This chart alone enabled us to remove 30% of our bookmarks due to lack of use.


[Figure: bookmark usage report]
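A pruning pass like the one that removed 30% of our bookmarks can be sketched as follows. The click threshold and data shape are illustrative assumptions, not the actual report logic:

```python
def prune_bookmarks(bookmark_clicks, min_clicks=5):
    """Split bookmarks into keep/remove sets based on click counts
    captured from the search page. The threshold is illustrative;
    in practice it would be tuned against the usage report."""
    keep = {b: c for b, c in bookmark_clicks.items() if c >= min_clicks}
    remove = sorted(b for b, c in bookmark_clicks.items() if c < min_clicks)
    return keep, remove
```

Running this periodically against captured click data keeps the bookmark set focused on what employees actually use.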


 


 


In analyzing the click activity on our corporate portal, the most impactful elements are:


  • Bookmarks: clicked on in 45% of all searches, and they significantly shorten the duration of a search session. We currently have ~1200 bookmarks, making for quick discovery of the most commonly searched-for content and tools around the company.

  • Topics: clicked on in 5-7% of all searches.

  • Connectors: clicked on in 4-5% of all searches.

  • Metadata: good metadata typically moves an item from the bottom of the first page to the top half, and from page 2 or later onto the bottom of page 1.



 


Additional details will be published in later blog posts. If you’re interested, the regular administrative activities of the Microsoft Search admin team are described here.


 


Business Impact of Search


 


As shown above, roughly half of all enterprise-level searches benefit from one of the search admin capabilities. Employees who receive such benefits complete their searches a minute faster, on average, than those whose searches don’t use those capabilities. Across 1.2 million monthly enterprise-level searches at Microsoft, that time savings amounts to more than 8,000 hours a month of direct employee-productivity benefit.
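The arithmetic behind this estimate is easy to check. Taking the figures above at face value (1.2 million monthly searches, roughly half benefiting, about one minute saved per benefiting search):

```python
def monthly_hours_saved(searches_per_month, benefit_rate, minutes_saved_per_search):
    """Estimate monthly productivity savings from search improvements."""
    benefited_searches = searches_per_month * benefit_rate
    return benefited_searches * minutes_saved_per_search / 60  # minutes -> hours

# 1.2M monthly searches, ~half benefiting, ~1 minute saved each
hours = monthly_hours_saved(1_200_000, 0.5, 1)
```

At exactly half, this works out to 10,000 hours per month, comfortably consistent with the more-than-8,000-hours figure above (which allows for a somewhat lower effective benefit rate).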


 


We achieve these results with an admin team of part-time individuals, investing a total of <300 hours per month doing direct search administration, responding to user requests to help find individual items, and maintaining a self-help site which advises employees on where and how to search best. We also have a larger improvement program striving to improve information discoverability across the company.


 


So, five years into our improvement efforts, we have significantly improved user satisfaction, can now measure the productivity impact search is having, and have built numerous partnerships across the company that we expect to continue yielding improvements in the years to come.


 


The lesson from this work is that actively improving search has a significant payback. The first step is to actively administer search, doing whatever helps the most popular searches deliver the right results.

Why Microsoft 365 is teaming up with OREO THINS to give you a break


Breaks during the workday are essential to our well-being. That’s why Microsoft 365 and OREO THINS are teaming up to create the THINVITE, a 15-minute snack break delivered straight to your calendar.

The post Why Microsoft 365 is teaming up with OREO THINS to give you a break appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Azure PostgreSQL Flexible Server has three exciting new backup and restore enhancements.


 

Overview

 

Backup and restore are key pillars of the business continuity and disaster recovery offerings for Azure Database for PostgreSQL Flexible Server. We’re excited to announce new features, including Fast Restore, Geo Restore, and Custom Restore Points, that give you more fine-grained control over your DR plan to achieve your RPO^ and RTO^^ objectives. In this post we’ll share an overview of each of these new features. Let’s get started.

 

1. Fast Restore

 

Point-in-time restore (PITR) is critical for disaster recovery, allowing recovery from accidental database deletion and data corruption. Today, PostgreSQL Flexible Server performs automatic snapshot backups and allows restoring to the latest point or a custom restore point. The estimated time to recover depends heavily on the size of the transaction logs (WAL) that need to be replayed at the time of recovery. Without visibility into the time of the last full backup, it was never easy to predict how long a restore would take.


 

 

Many enterprises have use cases, like testing, development, and data verification, where they don’t always require the latest data but need the ability to spin up a server quickly to ensure business continuity. We are glad to announce that Azure Database for PostgreSQL – Flexible Server now supports the Fast Restore feature to address these use cases. Fast Restore lists all the available backups you can choose to restore from. A restore provisions a new server and restores the backup from the snapshot; because no transaction log recovery is involved, restores are fast and predictable. 


For more details about Fast Restore, refer to the how-to guide.

 

2. Geo Backups and Restore

 

Organizations around the world, such as government agencies, financial institutions, and healthcare providers, are looking for ways to protect their valuable data from regional failures, including natural disasters. Azure Database for PostgreSQL Flexible Server already provides high availability (HA) using same-zone and cross-zone redundancy options. However, HA cannot protect against all possible data loss scenarios, such as a malicious actor or logical corruption of a database.


 

 

For added disaster recovery capability, Flexible Server now offers Geo Backups and Restore. This feature allows you to configure your Azure PostgreSQL database to replicate snapshots and transaction logs asynchronously to a paired region through storage replication. Geo-redundant backups can then be restored to a server in the paired region.


For more information about performing a geo-restore, refer to the how-to guide.

3. Backups and Restore blade

 

We have heard your feedback asking for better visibility into backups and have added a dedicated Backup and Restore blade in the Azure portal. This blade lists the backups available within the server’s retention period, giving customers a single-pane view for managing a server’s backups and subsequent restores.


 

 

Customers can use this for the following:

  1. View the completion timestamps for all available full backups within the server’s retention period.
  2. Perform restore operations using these full backups.

The list of available backups includes all full automated backups within the retention period, a timestamp showing the successful completion, a timestamp indicating how long a backup will be retained, and a restore action.

 

Conclusion

 

In this post, we shared some key backup and restore enhancements that provide disaster recovery for Azure Database for PostgreSQL Flexible Server. Geo-backups are ideal if you need a cost-effective cross-region DR capability that helps save on compute costs. Fast Restore gives you a more predictable restore time. And the Backup and Restore blade exposes the history of full backups.

If you have any feedback for us or questions, drop us an email @AskAzureDBforPostgreSQL.

With these improvements, we continue to innovate the service offering and backup and restore capabilities of Azure Database for PostgreSQL Flexible Server.

^ RTO, or Recovery Time Objective, is a measure of how quickly after an outage an application must be available again. 
^^ RPO, or Recovery Point Objective, refers to how much data loss an application can tolerate.

Acknowledgements

Special thanks to Kanchan Bharati for co-authoring this post.

Lesson Learned #241: Lessons Learned from the Azure Database Support Blog Series with Anna Hoffman


We had the great honor of talking on Data Exposed about the lessons learned from the Azure Database Support Blog series and the work done by the entire Azure SQL Database Support Team. 


 


A huge thanks to Anna Hoffman and all her team for giving us this opportunity. 


 



In this episode of Data Exposed, Anna Hoffman and I present the Azure Database Support blog series, where you can find the most common issues faced by Azure SQL customers and how Microsoft support engineers fixed them.


 


 


 


Enjoy!