From new apps in Microsoft Teams meetings to Endpoint DLP—here’s what’s new to Microsoft 365 in November

This article is contributed. See the original author and article here.


As we enter the holiday season, it’s been inspiring to see organizations condense years of transformation into just a few short months. Many of you are adopting new digital technologies, transforming your business processes, and fundamentally rethinking how work will get done going forward. Throughout all of it, we’ve been focused on studying how work is changing and listening to you, our customers, so we can prioritize features and capabilities that will help you adapt to these changes and improve your workflows.

This month, we’re thrilled to announce that many of your top requested features—like background noise suppression in meetings—are coming to Teams and other Microsoft 365 services. Read on for details on those, plus the highly anticipated Microsoft Endpoint Data Loss Prevention and other new features across Microsoft 365 to help make it easier to pick up where you left off, keep track of your notes and tasks more easily, sketch out Amazon Web Services (AWS) application architectures in Visio for the web, and more.

New apps, noise suppression, and more for Teams

This month, we’re announcing the general availability of Teams apps for meetings, expanded Forms integration, and new capabilities that make it easier to build apps and bots right within Teams.

Enrich your meeting experience with Teams apps—Until now, collaborating with an app during a meeting typically required someone to share their screen while they updated tasks, set reminders, managed requests, and more. This month, we’re excited to announce that you can bring the capabilities of many of these apps directly into the meeting experience for everyone to interact with, making the time your team spends together more effective and collaborative before, during, and after your meetings. Check out the new Teams apps for meetings now available in the Teams app store, including Asana, HireVue, Monday.com, Slido, and Teamflect (with more on the way), as well as familiar apps built by Microsoft such as Forms. If you’re a partner or developer, learn more about creating Teams apps for meetings in our documentation.

List of new Teams apps for meetings available in November.

Quickly gather feedback in Teams meetings with polls—Polls are a great way to turn passive listeners into active participants. Forms’ integration with Microsoft Teams now brings the power of polls to meetings, helping you conduct more engaging, informative, and productive meetings. Meeting presenters can prepare polls in advance and launch them during meetings for attendees to easily view and answer. The new Forms polls for meetings are currently rolling out and will reach you soon. You can get started by updating your Teams app and adding the Forms app to your meeting tabs.


Microsoft Forms integration with Microsoft Teams now brings the power of polls to meetings; here is a sample poll screen.

Remove unwelcome background noise in Teams meetings—Earlier this year, we released the ability to minimize distracting background noise in videos on Microsoft Stream. We are excited to share that we’re bringing this technology to Teams meetings. This real-time noise suppression helps remove unwelcome background noise during your meetings, making it easier to hear speakers in loud and distracting environments. Noise suppression is rolling out to all users now.

Automate routine tasks without leaving Teams—Also new this month is the Power Automate app for Teams. The app provides a lightweight designer experience and a number of templates to help you quickly get started building workflows right within Teams. This new app makes it even quicker and easier to automate routine tasks within Teams. To get started, make sure you’re running the latest version of Teams.

Select from Top Picks within Power Automate to quickly automate routine tasks in Teams like creating a flow.

Easily build and deploy apps and intelligent chat bots in Teams—In September, we announced new Power Apps and Power Virtual Agents apps for Teams, running on a built-in low-code data platform now called Microsoft Dataverse for Teams, which makes it easy for creators to quickly build and deploy apps and bots for Teams. Those apps are now generally available. Teams users can build applications and bots directly within Teams without needing to deal with connecting to storage, managing integrations, or switching applications. Dataverse for Teams makes the back-end tech logistics of creating and deploying business process solutions in Teams easier than ever. Built-in security and governance features provide seamless control of access to apps, bots, and flows, as well as their underlying data.

Here is an example of how to quickly build an HR bot within Microsoft Dataverse.

Find and share information more quickly

New capabilities help make it easier to find notes while working across Microsoft 365, capture and share content, and pick up where you left off.

Easily reference your notes when working on a OneNote page—Last month, we announced the OneNote feed for Outlook on the web, which conveniently combines your notes across Sticky Notes, recent OneNote pages, and even Samsung Notes so you can easily reference them while composing your mail. Now the feed is also available in OneNote, making it easy to reference your notes while working on a OneNote page and to capture new thoughts by creating a Sticky Note in your feed without leaving OneNote. The OneNote feed is available in OneNote, OneNote on the web, OneNote for Windows 10, Outlook on the web, and Outlook.com. To get started in OneNote, click the Open feed icon in the top right corner of the OneNote app window to display the feed pane.

This image shows how you can reference your notes in a OneNote page and capture any new thoughts by creating a Sticky Note in your feed without leaving OneNote.

Capture and share web content and more in Microsoft Edge—This month, we released two new features to help you be more productive and find information more easily. First, Web capture in Microsoft Edge lets you easily capture and mark up web content, and then save or share it—simply drag a box to select what you want to capture, even if you need to scroll. Second, we’ve added news and information from your favorite content providers in a new “My Feed” section within the enterprise new tab page. This customizable feed sits alongside your Office 365 content and is designed to keep you connected to the information most relevant to your industry or your company.


To capture content on the web, click the pull-down menu in the right gutter and select Web capture, select your content, highlight it with the Draw function found at the top of your capture, then click the Share button in the upper right corner.

Quickly pick up where you left off on recent files and more in OneDrive for iOS—A new home experience on the iOS mobile app for OneDrive will help you quickly pick up where you left off on recent files and easily re-discover memories from the past. Plus, you can now add a OneDrive widget to your iPhone home screen that displays your photo memories on this day across previous years. See your recent files and On This Day photos as soon as you open the OneDrive app. To get started, update your OneDrive iOS app.

This screenshot shows how you can easily access information on recent files or family memories once you add the OneDrive widget to your iPhone.


OneDrive family and group sharing now available—OneDrive has always made it easy to share docs, photos, videos, albums, and folders. But until now, sharing with a group of people meant typing in the names of everyone you wanted to share with. Last month, we simplified the process with one-click sharing to family and groups. Once you predefine your family or friend group, you’ll be able to share a photo, album, or important document with your group in one click. Family and group sharing are available in OneDrive for the web and included in all free and paid OneDrive consumer plans, as well as in Microsoft 365 Personal and Family plans.


This is an example of how quickly you can share with friends or family once you have set up your predefined group.

Hear Office documents read aloud to you on Android phones—As you move through your busy day, sometimes it’s easier to hear your document read aloud to you. Now you can use Read Aloud to do exactly that in Word for Android phone and in Word on the Office app for Android. New voices offer a more natural and pleasant listening experience. You can easily pause and resume Read Aloud as well as adjust the voice speed. To get started, open the Word or Office app on your Android phone, go to the Review tab, then tap Read Aloud.

Get Yammer notifications in your Teams activity feed and more—This month, we’re announcing several new and upcoming features in Yammer. Coming soon, notifications from your Yammer announcements and mentions will show up in your Microsoft Teams activity feed if you have the Communities app installed. We’re also now rolling out an updated experience in the Yammer tab for Teams that brings modern capabilities like pinning, cover photos, live events, and more. Finally, also released this month are support for Yammer Q&A insights in Microsoft Productivity Score, an interactive guide to getting the most out of Yammer, and an updated experience for topics and hashtags in Yammer.

This GIF shows Yammer notifications inside the Microsoft Teams activity feed on a mobile device.

Prevent data loss and diagram your IT solutions

New capabilities make it easier to prevent data loss on endpoints and sketch out IT solutions built on AWS.

Protect sensitive information on your endpoints—In July, we announced the public preview of Microsoft Endpoint Data Loss Prevention (DLP), which extends our data protection capabilities to devices. Now generally available, Endpoint DLP enables organizations to enforce policies that identify and prevent risky or inappropriate sharing, transfer, or use of sensitive information consistently across cloud, on-premises, and endpoints. Based on customer feedback, we’re also adding new capabilities in public preview, such as additional enforcement actions and location-based sensitivity, a new dashboard in the Microsoft 365 compliance center to manage DLP alerts, and more.

Watch this Microsoft Mechanics video or visit our documentation to learn more.

This image shows an example of how to create a rule with Endpoint DLP.

Sketch your Amazon Web Services (AWS) architecture in Visio for the web—Diagrams are a great way for cloud architects to visualize the design, deployment, and topology of IT solutions built on AWS. We’re excited to announce that we’ve added support to build AWS diagrams for various topologies and service interactions in Visio for the web. More than 400 shapes are available to help architects redesign existing infrastructure diagrams, conceptualize application architecture, or visualize the current state of your cloud environment and plan for the future. To help you get started easily, we’ve provided a few starter diagrams using various AWS services. Go to the Visio web app homepage and select your preferred diagram template to quickly start visualizing your AWS infrastructure.

This image shows a screenshot of a diagram of SAP using SIOS.

As you continue navigating to a more sustainable, hybrid workplace, we are committed to developing technologies that help your people and teams thrive. From building low-code apps right within Teams to new AI capabilities that help you manage tasks more easily, all of these updates are aimed at enabling your people to collaborate productively, securely, and safely from anywhere.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

6 Ways to be Insight-ful and Support Student Engagement

This article is contributed. See the original author and article here.

One of the top questions we hear from educators is “How do I know if my students are staying engaged?” Now, with new updates to Insights in Microsoft Teams for Education, there’s an easy way to see student behavior and take action to support their learning, whether they’re at home, in the classroom or lecture hall, or going back and forth between the two.



Here are six ways you can use Insights to help your students achieve their learning goals:
1. See engagement across multiple classes


How this helps: When you’re teaching multiple classes, it can be helpful to see how classes compare to one another and where your attention may be needed most. Now with Insights you can see a quick view of inactive students, active students per day, meeting absences, and missed assignments across all your classes.


 


How to do this: If you already have Microsoft Teams for Education, click here to add the app to Teams. Or, while in Teams, click More options (…), search for Insights, and add it. Right-click on the Insights app in the left app bar and select Pin for easy access. When you click on the Insights app on the left, your classes will automatically populate to show engagement across your different classes.




 


2. Drill down to specific activity within a class
How this helps: If you’re only teaching one class, or you need to see how a specific class or student is doing, you can drill down and see engagement for a specific class to know how your students are doing in their learning. This includes overall student activity, assignments, grades, meetings, communication, and more.


 


How to do this: If you’re in the Insights app (image above), click on a class name section to see that classroom’s data. Or, if you’d just like Insights within a particular class, click on Teams, choose your class, select the + sign in the General tab to add a tab, then search for Insights and add it.




 


3. Get spotlights of student behavior and individual habits


How this helps: Spotlight cards cut through the data to automatically show you trends, habits, and behaviors of students in your class. This helps to show new views of classroom activity that may not have been visible before. Whether there’s a student who turns in assignments early, students who work late in the evening, or students who show up to class late, these behaviors are highlighted to keep you informed and ready to take action.


 


How to do this: After adding the Insights tab to your class or clicking on a class name section in the left-hand rail Insights app, check out the spotlight cards on Activity, Habits, Meetings, and more. Click on bolded text showing a number of students to see individual names. Give a thumbs up to keep receiving similar information (or a thumbs down to stop receiving that kind of information). Click on the compass to see the filtered digital activity report for that time frame and behavior.




 


4. See overall student activity (or inactivity) on Teams


How this helps: When you’re teaching a remote or hybrid class, it can be more difficult to connect with students and understand who is engaging in class and with class materials. By seeing whether students are on Teams and using it for their classes and courses, you can make informed decisions about sending a message to check in with the class or with specific students.


 


How to do this: If you’re in the left-hand rail Insights app, click on a class’s Inactive students. Or, if you’re in the Insights tab within a team, click on “Track student activity” from the main dashboard to get to the Digital Activity view.




 


5. Drill down to see synchronous class behavior (aka Teams meeting behavior)


How this helps: If you’re not in person with your whole class, it’s helpful to gauge which students are attending class online. By seeing synchronous class or course activity, you’ll be able to better tell which students are showing up and which students may need reminders or extra help.


 


How to do this: From the Insights tab in a class, go to the Digital Activity report. Click on “All Activities” and select “Meetings”. (You can also select specific students and different time frames.) From this view, hover over the different bars to see specific student behavior. A red bar means a missed meeting; a red dot by the student’s name means they were not active during the selected timeframe.




 


6. Get quick access to class grades and grade distributions
How this helps: Tracking a student’s grades over time can be a useful way to check in and see if they need help. By checking the distribution of grades – either for a specific assignment or overall for the class – you can help plan future assignments and assessments. It can also be helpful to determine if different groups of students within similar grade ranges need different help or different kinds of assignments.


 


How to do this: Within the Insights tab in a class, click on “View grade trends & distributions”. From there you can filter for different views, students, timeframes and assignments.




 



Looking for resources and training to get started?



 


FAQs
Can I use Insights in Teams on my desktop app, web or mobile?


Insights is available for educators in the Teams desktop app and on the web. It will be available soon on Teams mobile as well. Students’ engagement data is collected from every device they use, including mobile. (Except for data on channel visits, which is only collected from desktop devices.)


 


How does student data get used in Insights?


Insights ensures the security and protection of students’ sensitive information. Classroom data is only available to class team owners or approved staff members given permissions by the IT admin, and the information collected and shown meets more than 90 regulatory and industry standards, including GDPR and the Family Educational Rights and Privacy Act (FERPA). Students do not have access to the Insights app, classroom data, or other students’ data. You can find more information on the Insights IT support page.


 


I use a Learning Management System (LMS) and I can’t see my LMS in Insights, why?


Insights currently shows data from the native Teams for Education experience, including classes, Assignments, and other Teams activity. If you use Teams along with your LMS, you’ll be able to see the Teams-related engagement data in Insights.


 


We’re always looking for ways to make Insights better. Have questions, comments, or ideas? Let me know! Add your idea here, share your comment below, and find me on Twitter (@grelad).


 


Elad Graiver
Senior Program Manager, Education Insights


 

New Software Assurance Benefits for SQL Server on Azure Virtual Machines | Data Exposed

This article is contributed. See the original author and article here.

In this episode with Amit Banerjee, you will review how new Software Assurance core benefits for high availability and disaster recovery can make running SQL Server on an Azure virtual machine cheaper than before. You will also see how all SQL Server releases can benefit from free passive replica cores to help bring down the cost of running SQL Server on an Azure virtual machine.

Neural Text-to-Speech previews five new languages with innovative models in the low-resource setting

This article is contributed. See the original author and article here.

This post is co-authored with Xianghao Tang, Lihui Wang, Jun-Wei Gan, Gang Wang, Garfield He, Xu Tan, and Sheng Zhao


 


Neural Text-to-Speech (Neural TTS), part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural user interactions. Neural TTS has powered a wide range of scenarios, from audio content creation to natural-sounding voice assistants, for customers all over the world. For example, the BBC, Progressive, and Motorola Solutions are using Azure Neural TTS to develop conversational interfaces for their voice assistants in English-speaking locales. Swisscom and Poste Italiane are adopting neural voices in French, German, and Italian to interact with their customers in the European market. Hongdandan, a non-profit organization, is using neural voices in Chinese to make their online books audible for blind people in China.


 


By September 2020, we extended Neural TTS to support 49 languages/locales with 68 voices. At the same time, we continue to receive customer requests for more voice choices and more language support globally.


 


Today, we are excited to announce that Azure Neural TTS has extended its global support to five new languages in public preview: Maltese, Lithuanian, Estonian, Irish, and Latvian. At the same time, the Neural TTS Container is generally available for customers who want to deploy neural voice models on-prem for specific security requirements.


 


Neural TTS previews 5 new languages


 


Five new voices and languages are introduced to the Neural TTS portfolio: Grace in Maltese (Malta), Ona in Lithuanian (Lithuania), Anu in Estonian (Estonia), Orla in Irish (Ireland), and Everita in Latvian (Latvia). These voices are available in public preview in three Azure regions: EastUS, SouthEastAsia, and WestEurope.


 


Hear samples of these voices, or try them with your own text in our demo.


 


Locale | Language | Voice name | Audio sample
mt-MT | Maltese (Malta) | “mt-MT-GraceNeural” | Fid-diskors tiegħu, is-Segretarju Parlamentari fakkar li dan il-Gvern daħħal numru ta’ liġijiet u inizjattivi li jħarsu lill-annimali.
lt-LT | Lithuanian (Lithuania) | “lt-LT-OnaNeural” | Derinti motinystę ir kūrybą išmokau jau po pirmojo vaiko gimimo.
et-EE | Estonian (Estonia) | “et-EE-AnuNeural” | Pese voodipesu kord nädalas või vähemalt kord kahe nädala järel ning ära unusta pesta ka kardinaid.
ga-IE | Irish (Ireland) | “ga-IE-OrlaNeural” | Tá an scoil sa mbaile ar oscailt arís inniu.
lv-LV | Latvian (Latvia) | “lv-LV-EveritaNeural” | Daži tumšās šokolādes gabaliņi dienā ir gandrīz būtiska uztura sastāvdaļa.
 


With these updates, the Azure TTS service now supports 54 languages/locales, with 78 neural voices and 77 standard voices available.
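If you’d like to try one of the new preview voices programmatically, here is a minimal sketch using the Azure Speech SDK for Python. The key value is a placeholder to replace with your own Speech resource key, and the preview voices are only hosted in the three regions listed above.

    # Minimal sketch: synthesize speech with one of the new preview voices
    # using the Azure Speech SDK for Python
    # (pip install azure-cognitiveservices-speech).
    # "YOUR_SPEECH_KEY" is a placeholder for your own Speech resource key.
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(
        subscription="YOUR_SPEECH_KEY",
        region="westeurope",  # preview regions: EastUS, SouthEastAsia, WestEurope
    )
    speech_config.speech_synthesis_voice_name = "lt-LT-OnaNeural"

    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
    result = synthesizer.speak_text_async(
        "Derinti motinystę ir kūrybą išmokau jau po pirmojo vaiko gimimo."
    ).get()
    if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
        print("Synthesized", len(result.audio_data), "bytes of audio")

By default the SDK also plays the synthesized audio through the default speaker; see the Speech SDK documentation for writing to a file or stream instead.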


 


Behind the scenes: 10x faster voice building in the low-resource setting


 


The creation of a TTS voice model normally requires a large volume of training data, especially when extending to a new language, where sophisticated language-specific engineering is required. In this section, we introduce LR-UNI-TTS, a new Neural TTS production pipeline for creating TTS in languages where training data is limited, i.e., ‘low-resourced’. With this innovation, we were able to speed up Neural TTS locale development by 10x and support the five new languages quickly.


 


High resource vs. low resource


 


Traditionally, it can easily take more than 10 months to extend TTS service to support a new language due to the extensive language-specific engineering required. This includes collecting tens of hours of language-specific training data and creating hand-crafted components like text analysis. In many cases, a major challenge in supporting a new language is that such a large volume of data is unavailable or hard to find, leaving the language ‘low-resourced’ for TTS model building. To handle this challenge, Microsoft researchers proposed an innovative approach, called LRSpeech, for extremely low-resourced TTS development. LRSpeech has been shown to build good-quality TTS in the low-resource setting, using multilingual pre-training, knowledge distillation, and, importantly, the dual transformation between text-to-speech (TTS) and speech recognition (SR).


 


How LR-UNI-TTS works


 


Built on top of LRSpeech and the multi-lingual, multi-speaker transformer TTS model (called UNI-TTS), we have designed an offline model training pipeline and an online inference pipeline for low-resource TTS. Three key innovations contribute to the significant agility gains of this approach.


 


First, by leveraging the parallel speech data (paired speech audio and transcripts) collected during speech recognition development, the LR-UNI-TTS training pipeline greatly reduces the data requirements for refining the base model in a new language. Previously, high-quality multi-speaker parallel data was critical to extending TTS to a new language. TTS speech data is more difficult to collect because it requires the data to be clean, the speaker to be carefully selected, and the recording process to be well controlled to ensure high audio quality.


 


Second, by applying cross-lingual speaker transfer technology with the UNI-TTS pipeline, we are able to leverage existing high-quality data in a different language to produce a new voice in the target language. This saves the effort of finding a new professional speaker for each new language. Traditionally, high-quality parallel speech data in the target language is required, which easily takes months of voice design, voice talent selection, and recording.


 


Lastly, the LR-UNI-TTS approach uses characters instead of phonemes as the input feature to the models, while the high-resource TTS pipeline is usually composed of a multi-step text analysis module that turns text into phonemes and takes a long time to build.


 


The figure below describes the offline training pipeline for the low-resource TTS voice model.


 

Figure 1. The offline training pipeline for the low-resource TTS voice model.


 


Specifically, at the offline training stage, we leveraged a few hundred hours of speech recognition data to further refine the UNI-TTS model. This helps the base model learn more prosody and pronunciation patterns for the new locales. Speech recognition data is usually collected in everyday environments using PCs or mobile devices, unlike TTS data, which is normally collected in professional recording studios. Although the SR data can be of much lower quality than TTS data, we have found that LR-UNI-TTS benefits from such data effectively.


 


With this approach, the high-quality parallel data in the new language that is usually required for TTS voice training becomes optional. If such high-quality parallel data is available, it can be used as the target voice in the new language. If not, we can choose a suitable speaker from an existing, different language and transfer that voice into the new language through the cross-lingual speaker transfer-learning capability of UNI-TTS.
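To make the offline choice just described concrete, here is a minimal illustrative sketch of the decision flow. Every callable here (the fine-tuning step and the speaker-transfer step) is a hypothetical stand-in supplied by the caller, not an actual Microsoft API.

    # Illustrative sketch of the offline training choice described above.
    # "finetune" and "speaker_transfer" are hypothetical stand-ins supplied
    # by the caller; this is not the production LR-UNI-TTS implementation.
    def build_low_resource_voice(base_model, finetune, speaker_transfer,
                                 sr_data, tts_data=None, donor_voice=None):
        # Refine the multilingual UNI-TTS base model with a few hundred hours
        # of speech recognition data (paired audio and transcripts).
        model = finetune(base_model, sr_data)
        if tts_data is not None:
            # High-quality parallel TTS data exists: use it as the target voice.
            return finetune(model, tts_data)
        # Otherwise, transfer a suitable speaker recorded in a different
        # language into the new one (cross-lingual speaker transfer).
        return speaker_transfer(model, donor_voice)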


 


The chart below describes the runtime flow.


Figure 2: The online inference pipeline for the low-resource TTS voice model.


 


 

At runtime, a lightweight text analysis component preprocesses the text input with sentence separation and text normalization. Compared to the text analysis component of the high-resource language pipelines, this module is greatly simplified. For instance, it does not include the pronunciation lexicon or letter-to-sound rules used in high-resource languages. The lightweight text analysis component generates the normalized text characters. During this process, we also leverage the text normalization rules from speech recognition development, which significantly reduces the overall cost.


 


The other components are similar to the high-resource language pipelines. For example, the neural acoustic model uses the FastSpeech model to convert the character input into a mel-spectrogram.


 


Finally, the neural vocoder HiFiNet is used to convert the mel-spectrogram into audio output.
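Putting the runtime pieces together, the following is a minimal sketch of the three-stage inference flow described above: lightweight text analysis, a FastSpeech-style acoustic model, and a HiFiNet-style vocoder. The regex-based normalization and the two model callables are illustrative assumptions, not the production components.

    # Minimal sketch of the online inference flow described above.
    import re

    def lightweight_text_analysis(text):
        # Sentence separation plus simple normalization; note there is no
        # pronunciation lexicon and no letter-to-sound rules.
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        return [s for s in sentences if s]

    def synthesize(text, acoustic_model, vocoder):
        # "acoustic_model" and "vocoder" are caller-supplied stand-ins for
        # the FastSpeech acoustic model and the HiFiNet vocoder.
        audio = []
        for sentence in lightweight_text_analysis(text):
            chars = list(sentence)           # character input, not phonemes
            mel = acoustic_model(chars)      # chars -> mel-spectrogram
            audio.append(vocoder(mel))       # mel-spectrogram -> waveform
        return audio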


Overall, using LR-UNI-TTS, a TTS model in a new language can be built in about one month, which is 10x faster than traditional approaches.


 


In the next section, we share the quality measurement results for the voices built with LR-UNI-TTS.


 


Quality assessments


 


Like other TTS voices, the quality of the low-resource voices created in the new languages is measured using Mean Opinion Score (MOS) tests and intelligibility tests. MOS is a widely recognized scoring method for evaluating speech naturalness. In MOS studies, participants rate speech characteristics such as sound quality, pronunciation, speaking rate, and articulation on a 5-point scale, and an average score is calculated for the report. An intelligibility test measures how intelligible a TTS voice is: judges listen to a set of TTS samples and mark the words that are unintelligible to them. The intelligibility rate is the percentage of correctly intelligible words among the total number of words tested (i.e., the number of intelligible words / the total number of words tested * 100%). Normally a usable TTS engine needs to reach an intelligibility score above 98%.
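As a concrete illustration of the intelligibility calculation (the word counts below are made up for the example, not results from these tests):

    # Intelligibility rate as defined above:
    # intelligible words / total words tested * 100%.
    def intelligibility_rate(intelligible_words, total_words):
        return intelligible_words / total_words * 100.0

    # Made-up counts: 1973 of 2000 tested words judged intelligible.
    rate = intelligibility_rate(1973, 2000)
    print("Intelligibility: %.2f%%" % rate)  # 98.65%
    assert rate > 98.0  # the usability bar mentioned above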


 


The table below summarizes the MOS score and the intelligibility score for the five new languages created using LR-UNI-TTS.


 


Locale | Language (Region) | Average MOS | Intelligibility
mt-MT | Maltese (Malta) | 3.59* | 98.40%
lt-LT | Lithuanian (Lithuania) | 4.35 | 99.25%
et-EE | Estonian (Estonia) | 4.52 | 98.73%
ga-IE | Irish (Ireland) | 4.62 | 99.43%
lv-LV | Latvian (Latvia) | 4.51 | 99.13%
* Note: MOS scores are subjective and not directly comparable across languages. The MOS of the mt-MT voice is relatively lower but reasonable in this case, considering that the human recordings used as training data for this voice also got a lower MOS.


 


As shown in the table, the voices created with the low resources available are highly intelligible and have achieved high or reasonable MOS scores among native speakers.


 


It’s worth pointing out that, due to the nature of the lightweight runtime text analysis module, the phoneme-based SSML tuning capabilities, such as the ‘phoneme’ and ‘lexicon’ elements, are not supported for the low-resource voice models.


 


Coming next: extending Neural TTS to even more locales


 


LR-UNI-TTS has paved the way for us to extend Neural TTS to more languages more quickly for global users. Most excitingly, LR-UNI-TTS can potentially be applied to preserve languages that are disappearing in the world today, as pointed out in the guiding principles of XYZ-code.


 


With the five new languages released in public preview, we welcome user feedback as we continue to improve the voice quality. We are also interested in partnering with passionate people and organizations to create TTS for more languages. Contact us (mstts[at]microsoft.com) for more details.


 


What’s more: Neural TTS Container GA


 


Together with the preview of these five new languages, we are happy to share that the Neural TTS Container is now generally available. With the Neural TTS Container, developers can run speech synthesis with the most natural digital voices in their own environment for specific security and data governance requirements. Learn more about how to install the Neural TTS Container and visit the Frequently Asked Questions on Azure Cognitive Services Containers.
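As a sketch of what on-prem usage could look like, the Speech SDK can be pointed at a self-hosted container instead of the cloud endpoint. The localhost URL and the voice name below are assumptions that depend on how you deploy the container and which voice model it includes.

    # Minimal sketch: synthesize against a self-hosted Neural TTS container
    # rather than the cloud endpoint. The host URL and voice name are
    # assumptions; adjust them to your own container deployment.
    import azure.cognitiveservices.speech as speechsdk

    # Containers are addressed by host instead of subscription key and region.
    speech_config = speechsdk.SpeechConfig(host="http://localhost:5000")
    speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"

    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
    result = synthesizer.speak_text_async("Hello from an on-premises container.").get()
    print(result.reason)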


 


Get started 


 


With these updates, we’re excited to be powering natural and intuitive voice experiences for more customers while supporting more flexible deployment. The Azure Text-to-Speech service provides more than 150 voices in over 50 languages for developers all over the world.


 


For more information:


Recording – How MCS Uses M365 Insights to Promote Well-being, Resilience and Transformation

This article is contributed. See the original author and article here.



 


Do you want to promote well-being, create resilience, and accelerate transformation? Microsoft Consulting Services (MCS) uses Microsoft 365 insights to drive actions across your organization to make that happen! On Wednesday, November 18th at 12 noon Eastern Time, we brought in two of our colleagues from MCS to talk through their approach: John Allen, National Solutions Sales Director for Health and Life Sciences, and Jesse Howard, Director for Cloud Productivity.


 


Agenda:



  • Disruption, transformation and your people

  • Insights to Actions: a virtuous cycle

  • Modern Work insights: WPA, Graph, and beyond

  • Building an engine for change


 


Recording:



 


Resources:



 


Presenters:




Jesse Howard, Microsoft Consulting Services Director for Cloud Productivity


 




John Allen, National Solutions Sales Director for Health and Life Sciences


 


Producers:




Sam Brown, Microsoft Teams Technical Specialist


 




Pete Anello, Senior Microsoft Teams Technical Specialist


 


Thanks for joining and let us know how else we can help!