Microsoft 365 & SharePoint Community (PnP) – August 2020 update


 

The Microsoft 365 & SharePoint Ecosystem (PnP) August 2020 update is out, with a summary of the latest guidance, samples, and solutions from Microsoft and from the community, for the community. This article summarizes the different areas and topics of the community work we do around the Microsoft 365 and SharePoint ecosystem during the past month. Thank you for being part of this success. Sharing is caring!

 

 

Got feedback, suggestions, or ideas? Don’t hesitate to contact us.

Enterprise-Scale for Azure landing zones


With this article I would like to start a series on a new approach to building Azure landing zones, called Enterprise-Scale. This first article serves as an introduction to the topic.

 

What is an Azure landing zone?

An Azure landing zone is an Azure subscription that accounts for scale, security, governance, networking, and identity. An Azure landing zone enables application migration and cloud-native application development by considering all the platform resources that are required, but it does not differentiate between IaaS- and PaaS-based applications.

Or in simple words: the purpose of an Azure landing zone is to ensure the required “plumbing” is already in place, providing greater agility and compliance with security and governance requirements when applications and workloads are deployed on Azure.

 

What is Enterprise-Scale?

Enterprise-Scale is part of the Cloud Adoption Framework (CAF), or more specifically the Ready phase of CAF. The Enterprise-Scale architecture provides prescriptive architecture guidance coupled with Azure best practices, and it follows design principles across the critical design areas for an organization’s Azure environment and landing zones. It is based on the following five design principles:

  • Subscription democratization
  • Policy-driven governance
  • Single control and management plane
  • Application-centric and archetype neutral
  • Align Azure-native design and roadmap

Furthermore, Enterprise-Scale within CAF lists many design guidelines, considerations, and recommendations. These eight design areas can help you address the mismatch between an on-premises data center and cloud-design infrastructure. You are not required to implement all of the design recommendations, as long as the chosen cloud-design infrastructure is aligned with the five design principles.

The eight design areas are as follows:

  • Enterprise Agreement (EA) enrollment and Azure Active Directory tenants
  • Identity and access management
  • Management group and subscription organization
  • Network topology and connectivity
  • Management and monitoring
  • Business continuity and disaster recovery
  • Security, governance, and compliance
  • Platform automation and DevOps

 

Topics covered in those eight design areas include, for example, using Azure Active Directory Privileged Identity Management (PIM) for just-in-time access, Azure Virtual WAN for the global network, and Azure Application Gateway with Web Application Firewall (WAF) to protect web applications.

A high-level design of Enterprise-Scale is shown in the figure below:

High-level architecture.

 

Sources

Improving the update discoverability experience


Based on your feedback, it is now easier for you to discover available Windows 10 feature updates, monthly non-security quality updates, and driver updates.

Beginning with the August 2020 security update for Windows 10, when optional updates are detected by your device, they will be displayed on a new page under Settings > Update & Security > Windows Update > View optional updates. That means you no longer need to utilize Device Manager to search for updated drivers for specific devices.

How to view optional updates in Windows 10

How optional driver updates will appear in Windows 10, beginning with the August 2020 update

Windows Update will, of course, continue to automatically keep your drivers updated, but installing optional drivers may help if you are experiencing an issue.

We look forward to your feedback on this enhancement to the update experience, and to bringing you continued improvements to your overall experience with Windows 10.

 

How to measure a ranking system: Three setups of ground truth data labeling

Imagine that we have some non-trivial subsystem – e.g. product retrieval for a user query – and need to know how good it is, and to decide whether it is the bottleneck for the overall system. Or we have 2 competing solutions and need to choose the better one.

It is generally accepted that to make such decisions, we need to measure the subsystem; to do the measurement, in turn, we need ground truth data – in most cases, labeled by humans or crowdsourced labelers. Depending on our main goal and task specifics, one labeling setup may be better than another.

This post discusses ground truth labeling setups for ranking and helps you choose the most appropriate one for your use case. Here, by ranking we mean the task of establishing an order on multiple items. It is common to many applications, such as search engines (for products/images/webpages) and recommendation systems, where ranking is done by relevance to a user query or a user profile. Ranking can also be based on universal properties of the items to be ranked, e.g. ranking images by attractiveness.

 

Setups of ground truth data labeling

One can distinguish three main setups for ground truth data labeling:

  1. Absolute Gain Estimation – each item is labeled independently of the others on an established absolute scale (usually from 0 to 1); note that in the case of relevance-based ranking, an item includes both constituents, e.g. a pair of a user query and an image to be ranked.
  2. Relative ordering – the labeler sees all items and directly introduces an order on them.
  3. Side-by-side – the labeler compares just 2 items at once and provides a label reflecting the result of the comparison; the label may be binary or multivalued.

For example, if all we need is to compare 2 subsystems, and it is relatively easy to understand which of 2 items is better, then Side-by-side is the way to go. In another case, when we are trying to answer how far the current ranking subsystem is from the ideal one, Absolute Gain Estimation may be a better choice.

See a detailed comparison of these setups below:

Absolute Gain Estimation (AGE)

  • Description: having just one item (per query), return a number reflecting the quality of this item.
  • Tasks: estimate how far we are from the ideal; estimate the priority of the task; estimate ROI; estimate not only the quality of the ranking subsystem but also the quality of the items to be ranked (say, if all items score around 0.1, then even perfect ranking wouldn’t improve overall quality, and better items need to be added to the system); training data collection; compare any number of systems.
  • Implementation (see details in the section below): predefined levels (Excellent/Good/Fair/Bad), or slider-like.
  • Possible metrics: (n)DCG, MAP, MRR (binary).
  • Main pros: universal; easy to combine with other measurements.
  • Main cons: hard to define and judge – one needs to describe/imagine the Ideal and Worst items for each query – and, as a result, worse sensitivity for training and comparison.

Relative ordering (RO)

  • Description: introduce an order on a collection of items (per query).
  • Tasks: training data collection; compare any number of systems.
  • Implementation: best-worst scaling, or direct swapping of items until the needed order is achieved.
  • Possible metrics: rank correlation coefficients (e.g. Kendall-tau).
  • Main pros: most similar to actual ranking, thus best for the corresponding tasks (low overhead, high speed, etc.).
  • Main cons: can’t differentiate two-item scenarios such as 1 vs 0.1 from 0.6 vs 0.5; can hardly support labeling of a big number of items (say, greater than 10).

Side-by-Side (SBS)

  • Description: compare 2 items (per query) and decide which item is better (maybe with a scale – how much better).
  • Tasks: compare Prod vs New (to decide whether or not to ship the new model); compare the system with a competitor; train gains for levels (by maximum likelihood estimation).
  • Implementation: show two items and choose one of the predefined levels (e.g. on a Likert scale or on a 1-100 slider).
  • Possible metrics: win/loss ratios (possibly weighted): (wins - losses)/(wins + losses + ties); McNemar’s test.
  • Main pros: most similar to actual comparison, thus best for the corresponding tasks (high sensitivity, low overhead, high speed, etc.); can include a scale (e.g. Likert).
  • Main cons: by design, requires 2 items to compare.
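To make the metric options concrete, here is a minimal Python sketch of one representative metric per setup – nDCG for AGE labels, Kendall-tau for RO, and the win/loss ratio for SBS. The function names and toy data are my own illustration, not from the original post:

```python
import math
from itertools import combinations

def ndcg(gains, k=None):
    """Normalized DCG of a ranked list of absolute gains (AGE labels)."""
    k = len(gains) if k is None else k
    def dcg(gs):
        return sum(g / math.log2(i + 2) for i, g in enumerate(gs[:k]))
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

def kendall_tau(order_a, order_b):
    """Kendall rank correlation between two orderings (RO labels) of the same items."""
    pos_a = {item: i for i, item in enumerate(order_a)}
    pos_b = {item: i for i, item in enumerate(order_b)}
    concordant = discordant = 0
    for x, y in combinations(order_a, 2):
        sign = (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    n_pairs = len(order_a) * (len(order_a) - 1) // 2
    return (concordant - discordant) / n_pairs

def win_loss_ratio(sbs_labels):
    """(wins - losses) / (wins + losses + ties) over SBS labels."""
    wins = sbs_labels.count("win")
    losses = sbs_labels.count("loss")
    return (wins - losses) / len(sbs_labels)
```

For example, `ndcg([3, 2, 1])` is 1.0, since that list is already ideally ordered, while any other ordering of the same gains scores lower.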

There may also be hybrids of these setups, e.g.

  1. RO + AGE: first do Relative ordering, then assign a scale to all items by doing costly AGE only for the best and the worst items of the query.
  2. SBS + AGE: do Side-by-side against items with known absolute gains (e.g. 0.9, 0.5, 0.1) from another query (harder to compare, especially the medium cases).

Note that we can use different setups for different tasks, such as AGE for measurements but RO for gathering training data. A simple way to do this is to assign uniform gains after ordering (1 for the top item, 0 for the last, etc.) and then train the ranker on them. This is not ideal but can be better when the data has too few levels or when AGE labeling is too costly.
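A tiny sketch of that uniform-gain assignment (my own illustration, assuming the ordering lists the best item first):

```python
def uniform_gains(ranked_items):
    """Map an RO ordering (best item first) to evenly spaced gains in [0, 1]."""
    n = len(ranked_items)
    if n == 1:
        return {ranked_items[0]: 1.0}
    # The top item gets gain 1.0, the last gets 0.0, the rest are spaced evenly.
    return {item: 1 - i / (n - 1) for i, item in enumerate(ranked_items)}
```

`uniform_gains(["best", "mid", "worst"])` yields gains 1.0, 0.5, and 0.0, which can then be used as training targets for the ranker.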

 

Implementations of Absolute Gain Estimation

There are 2 general methods to obtain non-binary ranking-like labels:

Predefined levels

  • UX implementation: radio buttons.
  • Number of distinct levels: 2-5.
  • Levels naming: named, e.g. Perfect/Good/Fair/Bad.
  • Guidelines: clear, distinctive.
  • Judges’ requirements: high.
  • Judgements per item: 2-3.
  • Agreement measure: Krippendorff’s alpha.
  • Speed of judgments: low.
  • Interpretability: medium.
  • Flexibility: medium.
  • Stability (anti-variance): medium.
  • Best suited for tasks that are: complex, formalizable, well-defined, homogenous.
  • Examples: TREC collections; most scientific papers in information retrieval (in most cases just binary: relevant/irrelevant).

Slider-like

  • UX implementation: slider.
  • Number of distinct levels: 10-100.
  • Levels naming: not named.
  • Guidelines: short, general, unbiased.
  • Judges’ requirements: medium.
  • Judgements per item: 5-10.
  • Agreement measure: correlation coefficient.
  • Speed of judgments: medium.
  • Interpretability: low.
  • Flexibility: high.
  • Stability (anti-variance): low.
  • Best suited for tasks that are: subjective, heterogenous.
  • Examples: Machine Translation in Bing: “we had people just score the translation on a scale, like, just a slider, really, and tell us how good the translation was. So, it was a very simple set of instructions that the evaluators got. And the reason we do that is so that we can get very consistent results and people can understand the instructions.”
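For slider labels, agreement is measured with a correlation coefficient. As a hedged illustration (pure Python, names my own), agreement between two judges can be estimated with the Pearson correlation of their slider scores over the same items:

```python
import math

def pearson(scores_a, scores_b):
    """Pearson correlation between two judges' slider scores for the same items."""
    n = len(scores_a)
    mean_a, mean_b = sum(scores_a) / n, sum(scores_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(scores_a, scores_b))
    sd_a = math.sqrt(sum((a - mean_a) ** 2 for a in scores_a))
    sd_b = math.sqrt(sum((b - mean_b) ** 2 for b in scores_b))
    return cov / (sd_a * sd_b)
```

Scores near 1 indicate judges who order and space items similarly; values near 0 suggest the guidelines or the task are too ambiguous.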

Again, the methods can be combined, e.g. first solve the binary problem (Bad vs. Not bad) with the Predefined levels method, then solve the quality/attractiveness problem, which is usually more subjective, with the Slider-like method.

Also, the Slider-like method can be used as a preliminary step to construct a Predefined levels method:

  1. Ask multiple judges to rate items and then to explain their choices.
  2. Infer levels and their definitions from these explanations (e.g. cluster and analyze them).

To sum up, it may be beneficial to first outline the most important questions to be answered by the measurement, then collect all the task specifics – what the type of items is, how hard it is to define an ideal item, etc. – and finally design the most suitable measurement strategy.

Migrate your AWS VMs to Azure with Azure Migrate

Migrating your on-premises server estate to Azure is something you’ll often hear me talk about at events, or on this blog. And when I talk about datacentre migrations, you’ll often hear me mention how Azure Migrate can help you on that journey with your assessment and migration needs. Recently, when I presented my “Start your data centre Migration Journey with Azure Migrate” session at a user group, I mentioned that Azure Migrate can help you migrate from other locations such as AWS or GCP, which prompted some discussion at the end of my session. I wanted to share more about that. 😀

 

I’ve dealt with some companies that have already moved some resources into the cloud, but not into Azure, and the company’s strategy or direction has since changed so that they are looking to consolidate everything into Azure. So their migration path and tooling has to support not only their on-premises resources, but also the migration from the other cloud provider.

 

Azure Migrate introduced the functionality to assess physical servers in December 2019. This served an ask from customers, but it also allowed those who didn’t have access to the hypervisor level of their virtual machines to make use of Azure Migrate. Ultimately, this is the functionality that allows you to use Azure Migrate to assess and migrate your virtual machines from AWS, GCP, or another cloud provider into Azure.

 

In my previous blog post and video, I talked about assessing your AWS environment with Azure Migrate.  

 

The video below shows you the process of using Azure Migrate to migrate your AWS virtual machines into Azure.  You can watch the full video here or on Microsoft Channel 9.

 

 

You can find more information here: 

 

 

I hope you enjoyed the video. If you have any questions, feel free to leave a comment.