Making Search Better Within Microsoft


This article is contributed. See the original author and article here.

The Challenge


Five years ago, employee satisfaction with finding information within the company was very low; it was the lowest-rated IT service among all those we surveyed. Related surveys by other teams supported this, for instance finding that our software engineers ranked “finding information” as one of the most wasteful and frustrating activities in their jobs, costing the company thousands of person-years of productivity.


 


A project team was formed to improve this. In the years since, we have pursued:



  • Improving search result relevance

  • Improving search content completeness

  • Addressing content quality issues


 


The Microsoft Search Environment


Microsoft has more than 300,000 employees working around the globe, and collectively our employees use or access many petabytes of content as they move through their workday. Within our employee base there are many different personas who have widely varying search interests and use hundreds of content sources. Those content sources can be file shares, Microsoft SharePoint sites, documents and other files, and internal websites. Our employees also frequently access external websites, such as HR partners’ websites.


 




We began with a user satisfaction survey net score of 87 (on a scale of 1-200, with 200 being perfect). We have since reached 117. Our goal is 130+.


 


What We’ve Done


Core to our progress has been:



  1. Understanding the needs of the different personas around the company. At Microsoft, personas are commonly clustered based on three factors: their organization within the company, their profession, and their geographic location. For example, a Microsoft seller working in Latin America has different search interests than an engineer working in China.

    1. This has resulted in targeting bookmarks to certain security groups.

    2. This has led to outreach to certain groups and to connecting with efforts they had underway to build a custom search portal or improve content discoverability.




 



  2. Understanding typical search behavior. For instance, the diagram below shows that a relatively small number of search terms account for a large portion of the search activity.


[Chart: a small number of search terms account for a large share of all search activity]




    1. We ensure bookmarks exist for most of the high-frequency searches.

    2. We look for commonalities in low-frequency searches to identify potential content to connect in.



 



  3. Improving content quality. This has ranged from deleting old content to educating content owners on the most effective ways to add metadata to their content so it ranks better in search results. As part of our partnership with this community, we provide reporting on measurable aspects of content quality. We are in the early stages of pursuing quality improvement, with much still to do in building a community across the company, measuring, and enabling metadata.


[Screenshots: content quality reporting provided to site owners]




    1. For those site owners actively using this reporting, we have seen a decrease of up to 70% in content with no recent updates.




  4. Utilizing improvements delivered in the product, from improved relevance ranking to UX options like custom filters.

    1. We have seen steady improvement in result ranking.

    2. We also take advantage of custom filters and custom result KQL.

    3. We use Viva Topics. Topics now receive the most clicks after Bookmarks.





  5. Making our search coverage more complete. Whether it’s via bookmarks or connectors, there are many ways of making the search experience feel like it covers the entire company.

    1. We currently have 7 connections, one of which is custom built and brings in 10 different content sources. This content is clicked on in 5% of searches on our corporate SharePoint portal.

    2. About half of our bookmarks (~600) point to URLs outside of the corporate boundary, such as third-party hosted services.





  6. Analytics. Using SharePoint extensions, we capture all search terms and click actions on our corporate portal’s search page. We’ve used these extensively in deciding what actions to take. The sample below is a report on bookmarks and their usage. This chart alone enabled us to remove 30% of our bookmarks due to lack of use.


[Chart: report on bookmarks and their usage]
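To illustrate the kind of analysis this data enables, here is a minimal sketch in Python. The log layout and column names are hypothetical placeholders; our actual reporting is built on SharePoint extensions and internal tooling, not this script.

    # Minimal sketch of the search-log analysis described above.
    # The CSV layout and column names are hypothetical placeholders.
    import csv
    from collections import Counter

    term_counts = Counter()
    bookmark_clicks = Counter()

    with open("search_log.csv", newline="", encoding="utf-8") as f:
        # assumed columns: term, clicked_result_type, clicked_result
        for row in csv.DictReader(f):
            term = row["term"].strip().lower()
            term_counts[term] += 1
            if row["clicked_result_type"] == "bookmark":
                bookmark_clicks[row["clicked_result"]] += 1

    # Head of the distribution: high-frequency terms that should have bookmarks.
    total = sum(term_counts.values())
    for term, count in term_counts.most_common(20):
        print(f"{term:30s} {count:6d}  ({count / total:.1%} of searches)")

    # Bookmarks that are rarely clicked are candidates for removal.
    rarely_used = [b for b, clicks in bookmark_clicks.items() if clicks < 5]
    print(f"{len(rarely_used)} bookmarks with fewer than 5 clicks this period")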


 


 


In analyzing the click activity on our corporate portal, the most impactful elements are:

  • Bookmarks: clicked on in 45% of all searches, and they significantly shorten the duration of a search session. We currently have ~1,200 bookmarks, making for quick discovery of the most commonly searched-for content and tools around the company.

  • Topics: clicked on in 5-7% of all searches.

  • Connectors: clicked on in 4-5% of all searches.

  • Metadata: good metadata typically moves an item from the bottom of the first page to the top half, and from page 2 or later onto the bottom of page 1.



 


Additional details will be published in later blog posts. If you are interested, the regular administrative activities our Microsoft Search admin team performs are described here.


 


Business Impact of Search


 


As the preceding breakdown shows, roughly half of all enterprise-level searches benefit from one of the search admin capabilities. Employees who receive such benefits complete their searches, on average, one minute faster than those whose searches don’t use those capabilities. Across 1.2 million monthly enterprise-level searches at Microsoft, that time savings amounts to more than 8,000 hours a month of direct employee-productivity benefit.
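As a rough back-of-the-envelope check of that figure, using only the approximate numbers quoted in this post (not actual telemetry):

    # Back-of-the-envelope check using the figures quoted in this post.
    monthly_searches = 1_200_000   # enterprise-level searches per month
    share_benefiting = 0.45        # roughly half of searches use an admin capability
    minutes_saved = 1              # average minutes saved per benefiting search

    hours_saved = monthly_searches * share_benefiting * minutes_saved / 60
    print(f"{hours_saved:,.0f} hours saved per month")  # ~9,000, consistent with "more than 8,000"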


 


We achieve these results with an admin team of part-time individuals investing a total of fewer than 300 hours per month on direct search administration, responding to user requests for help finding individual items, and maintaining a self-help site that advises employees on where and how to search best. We also have a larger improvement program striving to improve information discoverability across the company.


 


So, five years into our improvement efforts, we have significantly improved user satisfaction, can now measure the productivity impact search is having, and have built numerous partnerships across the company that are expected to continue yielding improvements in the years to come.


 


The lesson from this work is that actively improving search has a significant payback. The first step is to actively administer search, doing whatever helps the most popular searches deliver the right results.

Duplicate lead detection increases sellers’ productivity


This article is contributed. See the original author and article here.

To be effective, your sales team has to trust that the leads they’re getting are of good quality and that someone else isn’t working on them. If your sellers are calling leads that are assigned to another salesperson, or that aren’t real, they’re wasting time. Better data hygiene is the answer, of course. But who has time to manually weed out duplicate leads? Certainly not your sales team. Luckily, they don’t have to. Duplicate lead detection in Microsoft Dynamics 365 Sales automatically identifies potential duplicates and makes merging or deleting them as easy as clicking a button.

Duplicate lead detection is available for all Dynamics 365 Sales Enterprise and Sales Premium customers. To get started, a sales admin must enable duplicate detection in the Sales Hub app settings.

AI-based duplicate lead detection improves data hygiene and sales productivity

Dynamics 365 Sales uses AI and fuzzy matching algorithms to detect duplicate leads. By “fuzzy,” we mean that records with approximately matching field values, not just exactly matching values, are identified as possible duplicates.

For example, Philip, Phillip, Phil, and Filip are all variations of the same name. Searching for an exact match would miss the indication that they’re the same lead with misspelled names. But with fuzzy logic, the similarity in names flags the records as possible duplicates.

Duplicate records are identified in real time, based on the following criteria:

  • Same email address
  • Same business phone number
  • Similar name and company name
  • Similar name and same email domain

The first two conditions look for an exact match. The last two conditions use fuzzy matching algorithms.
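To make the fuzzy part concrete, here is a minimal sketch using only Python’s standard library. This is not how Dynamics 365 implements its matching; it simply illustrates why near-identical names can be flagged where an exact comparison would miss them. The sample leads and the similarity threshold are hypothetical.

    # Illustration only: fuzzy string matching with Python's standard library.
    # This is NOT the Dynamics 365 matching engine; it just shows the concept.
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Return a 0-1 similarity ratio between two strings."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Hypothetical lead names.
    leads = ["Philip Stone", "Phillip Stone", "Filip Stone", "Maria Gomez"]
    threshold = 0.8  # hypothetical cutoff for "possible duplicate"

    for i in range(len(leads)):
        for j in range(i + 1, len(leads)):
            score = similarity(leads[i], leads[j])
            if score >= threshold:
                print(f"Possible duplicates: {leads[i]} ~ {leads[j]} (score {score:.2f})")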

Empower sellers to resolve duplicate leads with just a click

An intuitive UI makes it easy for sellers to review potential duplicates and decide what to do with them.

  • For easy discoverability, a notification banner appears in both the main lead form and the leads grid view.
  • The field values that triggered the identification are highly visible.
  • To simplify the view, sellers can hide the fields that contain similar values.
  • With one click, sellers can easily fill empty fields in the primary record with data from the duplicate, or even change which version of the record is the primary.

[Screenshot: duplicate lead detection in the lead form]

Sellers can delete duplicate records, mark them as not duplicate by “detaching” them from the primary record, or merge them. Sellers can merge up to four records into one primary record at the same time.
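As an illustration of what filling empty fields in the primary record means, here is a simplified sketch. It is not the Dynamics 365 merge implementation (which also handles related records and permissions for you), and the field names are hypothetical.

    # Simplified sketch of a merge that keeps the primary record and fills its
    # empty fields from a duplicate. Field names are hypothetical.
    def merge_leads(primary: dict, duplicate: dict) -> dict:
        merged = dict(primary)
        for field, value in duplicate.items():
            if not merged.get(field):   # only fill fields the primary left empty
                merged[field] = value
        return merged

    primary = {"name": "Philip Stone", "email": "philip@contoso.com", "phone": ""}
    duplicate = {"name": "Phillip Stone", "email": "", "phone": "+1 555 0100"}
    print(merge_leads(primary, duplicate))
    # {'name': 'Philip Stone', 'email': 'philip@contoso.com', 'phone': '+1 555 0100'}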


As with all data in Dynamics 365, user permissions apply to duplicate lead detection. Sellers can see only the records their account permissions allow them to see.

Even more duplicate detection is on the way

We’re not done with duplicate detection yet. Here’s what you can expect in coming release waves:

  • Proactive detection of inauthentic email addresses
  • Detection and management of duplicate contacts, similar to duplicate lead detection
  • Detection of duplicate leads based on fields you select

Next steps

Increasing your sales team’s productivity could be as simple as eliminating duplicates from your lead database, and Dynamics 365 Sales makes it easy.

To start taking advantage of duplicate lead detection, read the documentation and watch a brief video overview:

Not a Dynamics 365 Sales customer yet? Take a guided tour and sign up for a free trial at Dynamics 365 Sales overview.

The post Duplicate lead detection increases sellers’ productivity appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

CISA Releases Two Industrial Control Systems Advisories

This article is contributed. See the original author and article here.

CISA released two Industrial Control Systems (ICS) advisories on October 18, 2022. These advisories provide timely information about current security issues, vulnerabilities, and exploits surrounding ICS.

CISA encourages users and administrators to review the newly released ICS advisories for technical details and mitigations:

Why Microsoft 365 is teaming up with OREO THINS to give you a break


This article is contributed. See the original author and article here.

Breaks during the workday are essential to our well-being. That’s why Microsoft 365 and OREO THINS are teaming up to create the THINVITE, a 15-minute snack break delivered straight to your calendar.

The post Why Microsoft 365 is teaming up with OREO THINS to give you a break appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Azure PostgreSQL Flexible Server has three exciting new backup and restore enhancements.


This article is contributed. See the original author and article here.


 

Overview

 

Backup and restore are key pillars of the business continuity and disaster recovery offerings for Azure Database for PostgreSQL Flexible Server. We’re excited to announce new features, including Fast Restore, Geo Restore, and Custom Restore Points, that give you more fine-grained control over your DR plan and help you achieve your RTO^ and RPO^^ objectives. In this post we’ll share an overview of each of these new features. Let’s get started.

 

1. Fast Restore

 

Point-in-time restore (PITR) is critical for disaster recovery, allowing recovery from accidental database deletion and data corruption scenarios. Today, PostgreSQL Flexible Server performs automatic snapshot backups and allows restoring to the latest point or to a custom restore point. The estimated time to recover depends heavily on the size of the transaction logs (WAL) that need to be replayed at the time of recovery. Without much visibility into the last full backup time, it was never easy to predict how long a restore would take.


 

 

Many enterprises have use cases, such as testing, development, and data verification, where they don’t always require the latest data but do need the ability to spin up a server quickly to ensure business continuity. We are glad to announce that Azure Database for PostgreSQL – Flexible Server now supports the Fast Restore feature to address these use cases. Fast Restore lists all the available backups that you can choose to restore. The restore provisions a new server and restores the backup from the snapshot; because no transaction log recovery is involved, restores are fast and predictable.


For more details about Fast Restore, refer to the how-to guide.
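If you prefer to script restores rather than use the portal, a restore can also be started from the Azure CLI. Below is a minimal sketch driven from Python; the resource group, server names, and timestamp are placeholders, and you should confirm the exact parameter names against the current CLI help (az postgres flexible-server restore --help), as they can change between versions.

    # Minimal sketch: trigger a restore of an Azure Database for PostgreSQL
    # Flexible Server from a script. Names and the timestamp are placeholders;
    # verify parameter names with `az postgres flexible-server restore --help`.
    import subprocess

    cmd = [
        "az", "postgres", "flexible-server", "restore",
        "--resource-group", "my-resource-group",    # placeholder resource group
        "--name", "my-restored-server",             # new server to create
        "--source-server", "my-production-server",  # server being restored
        "--restore-time", "2022-10-18T02:30:00Z",   # point/backup to restore to
    ]
    subprocess.run(cmd, check=True)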

 

2. Geo Backups and Restore

 

Organizations around the world, such as government agencies, financial institutions, and healthcare providers, are looking for ways to protect their valuable data from regional failures, including natural disasters. Azure Database for PostgreSQL Flexible Server already provides high availability (HA) using same-zone and cross-zone redundancy options. However, HA cannot protect against all possible data loss scenarios, such as a malicious actor or logical corruption of a database.


 

 

For added disaster recovery capability, Flexible Server now offers Geo Backups and Restore. This feature allows you to configure your Azure PostgreSQL database to replicate snapshots and transaction logs asynchronously to a paired region through storage replication. Geo-redundant backups can then be restored to a server in the paired region.


For more information about performing a geo-restore, refer to the how-to guide.

3. Backups and Restore blade

 

We have heard your feedback asking for better visibility into backups and have added a dedicated Backup and Restore blade in the Azure portal. This blade lists the backups available within the server’s retention period, effectively providing customers with a single-pane view for managing a server’s backups and subsequent restores.


 

 

Customers can use this for the following:

  1. View the completion timestamps for all available full backups within the server’s retention period.
  2. Perform restore operations using these full backups.

The list of available backups includes all full automated backups within the retention period, a timestamp showing the successful completion, a timestamp indicating how long a backup will be retained, and a restore action.

 

Conclusion

 

In this post, we shared some key backup and restore enhancements that provide disaster recovery within Azure Database for PostgreSQL Flexible Server. Geo backups are ideal if you need a cost-effective cross-region DR capability that helps save on compute costs. Fast Restore gives you a more predictable restore time. And the Backup and Restore blade exposes the history of full backups.

If you have any feedback for us or questions, drop us an email @AskAzureDBforPostgreSQL.

With these improvements, we continue to innovate the service offering and backup and restore capabilities of Azure Database for PostgreSQL Flexible Server.

^ RTO is Recovery Time Objective and is a measure of how quickly after an outage an application must be available again. 
^^ RPO is Recovery Point Objective and refers to how much data loss an application can tolerate.

Acknowledgements

Special thanks to Kanchan Bharati for co-authoring this post.