This article is contributed. See the original author and article here.
Our training and certification portfolio continues to evolve, and we invite you to discover the power of Microsoft technology to open new career possibilities. Here are the new learning paths and modules that we released last month on Microsoft Learn. Look for ways to build and deepen your skills, and then validate them by earning a Microsoft Certification. This month, we have a new learning path (with 12 modules) for Microsoft Power Platform solution architects. Check out our other new Microsoft Power Platform and Power Automate modules, plus a new Industry Solutions module. In addition, we’ve got new Dynamics 365 Fraud Protection, Project Operations, and Human Resources modules. Work through these and other modules at your own pace. Use free, online training on Microsoft Learn to explore new skills to use on the job or to take your career in a new direction. If you need help figuring out which training to take, check out the Dynamics 365 learning paths page and the Microsoft Power Platform learning paths page, where you’ll find useful collections, learning paths to get you started, and popular modules. We’ve also added product-specific landing pages, listed at the end of this post.
We’re removing older, retired courses from the Dynamics Learning Portal on October 15, 2021, because downloads of these e-learning courses have declined significantly. If you want to keep any of these courses for your own use, be sure to download them before that date.
The following learning paths and modules were released in April 2021.
This article is contributed. See the original author and article here.
It’s time to turn content into knowledge with the help of AI. Let the service reason over your data while you focus on curating a unique employee experience that meets your users where they are already working.
In this episode, Chris and I talk with CJ Tan (principal PM manager | Microsoft) about her role at Microsoft on the Project Cortex team. We dig into knowledge roles, deployment practices, common scenarios, and what’s top of mind for ‘what’s next.’ We don’t think AI alone is a substitute for IA; people, metadata, and AI work better together. CJ walks us through how it all works, from pilot to broad-scale use and adoption.
Intrazone guest: CJ Tan (principal PM manager | Microsoft)
BONUS | New episode of Microsoft Mechanics – part 1 of 5 on Viva, “Introduction to Microsoft Viva, an Employee Experience Platform” with Jeremy Chapman:
And hey, we have a new logo for the show – you’ll now see it in all the podcast feeds. Our intent was to emphasize the inclusivity of Microsoft 365, promoting connectedness between people, content, and apps. Note the teal through-lines. In addition, we addressed feedback to make the logo more accessible across platforms. Let us know what you think in the comments below:
The Intrazone introduces a new logo, showing how it appears in a square format (left) and a rectangle format (right).
Links to important on-demand recordings and articles mentioned in this episode:
Be sure to visit our show page to hear all the episodes, access the show notes, and get bonus content. And stay connected to the SharePoint community blog where we’ll share more information per episode, guest insights, and take any questions from our listeners and SharePoint users (TheIntrazone@microsoft.com). We, too, welcome your ideas for future episode topics and segments. Keep the discussion going in the comments below; we’re here to listen and grow.
The SharePoint teams want you to unleash your magic, creativity, and productivity – and be compliant about it all. And we will do this, together, one compliance score point at a time.
Left to right [The Intrazone co-hosts]: Chris McNulty, director (SharePoint/Viva – Microsoft) and Mark Kashman, senior product manager (SharePoint – Microsoft).
The Intrazone, a show about the Microsoft 365 intelligent intranet (aka.ms/TheIntrazone)
This article is contributed. See the original author and article here.
Google has released Chrome version 90.0.4430.212 for Windows, Mac, and Linux. This version addresses vulnerabilities that an attacker could exploit to take control of an affected system.
CISA encourages users and administrators to review the Chrome Release Note and apply the necessary updates.
This article was originally posted by the FTC. See the original article here.
It’s never too late to find love, and lots of dating sites and apps are there to help. But scammers are out to steal your heart, too…and then steal your money. This Older Americans Month, let’s talk about romance scams. These can happen when someone makes a fake profile on dating sites, apps and social media. They then message you to get a relationship going, build your trust, and connect.
Then, they hit you up for money. “Baby, I want to come see you but I’m short on funds. Can you send me $500 for a ticket?” Or, “I love you, honey. But we may not be able to talk anymore because my phone is about to get cut off. I need $300 to pay the bill…” Get the idea?
In the name of love, you send money. They come back with other lies to get still more money. Then the messages stop. You can’t reach them. They’ve taken off with a piece of your heart and a big chunk of your wallet.
People reported $304 million in losses to romance scams in 2020. Here’s how you can avoid these heartless imposters:
If someone you’ve never met in person asks you for money, that’s a scam. No matter the story. Never send money or gifts to anyone you haven’t met in person — even if they send you money first.
Do a reverse image search of the person’s profile picture. See if it’s associated with another name or with details that don’t match up. Those are signs of a scam.
Talk to someone you trust about your new love interest, and pay attention if they’re concerned. Learn more by watching this video and at ftc.gov/romancescams. And if a scammer tries to charm you out of your funds, report it to the FTC.
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
This article is contributed. See the original author and article here.
Introduction
Application management is the core function of maintaining existing application portfolios. While traditional approaches to application management can constrain enterprises and hamper modernization and digital transformation initiatives, the latest monitoring and automatic alerting capabilities can help increase speed and agility.
With the advent of cloud platforms, most organizations now have an application footprint that resides in the cloud. This trend is increasing over time and brings about a paradigm shift in how cloud-hosted applications must be monitored. Most cloud platforms, including Microsoft Azure, provide their own set of native tools that enable application monitoring. This article covers the relationship between application management and monitoring tools, recommendations on how to choose the right monitoring tool, and DXC Technology’s automation solution for monitoring applications in Microsoft Azure.
Application management services and monitoring tools: How they play together
Application management services address ongoing support for applications, typically involving defect repair and issues that arise in a supported application. Application issues are often reported by users or customers before the support team even realizes an issue exists, leading to poor customer satisfaction. More often than not, a portion of these issues is repetitive, which means the same resolution procedure has to be applied manually over and over again. These manual and repetitive tasks often account for up to 40 percent of the application support team’s workload.
This is where monitoring tools can play a major role. By using techniques such as real-user monitoring and synthetic monitoring, it is possible to proactively identify issues with the application. Real-user monitoring detects issues that occur in the application while it is in use. Synthetic monitoring checks whether the application is available and can simulate a user walking through specific scenarios to verify that the application works as expected.
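As a rough illustration of the synthetic approach (the URL, timeout, and success check below are placeholders and not part of DXC’s solution), a scheduled probe can be as simple as a script that periodically requests a known health endpoint and records whether the expected response came back:

#!/bin/bash
# Minimal synthetic probe (illustrative only): request a health endpoint and log the outcome.
URL="https://myapp.example.com/health"
STATUS=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 "$URL")
if [ "$STATUS" -eq 200 ]; then
  echo "$(date -u) OK: ${URL} returned ${STATUS}"
else
  echo "$(date -u) ALERT: ${URL} returned ${STATUS}"   # hand off to the alerting mechanism here
fi

In practice, the Azure Application Insights availability tests discussed later in this article provide this capability as a managed feature, so a hand-rolled probe like this is rarely needed.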
Troubleshooting an issue when a monitoring tool is in place means that the support team uses real data points captured by the tool, which enables it to identify the root cause of the issue — thus eliminating the need for guesswork. By adding hooks to these monitoring mechanisms, it is possible to automatically detect when an issue or incident occurs in a specific application and also to alert relevant stakeholders when the issue occurs.
These detection and reporting capabilities mean the application support team can be made aware of the issue immediately and doesn’t have to wait for users or customers to report it. Automatic detection and alerting enables the team to respond to the incident faster, reducing the mean time to restore (MTTR). Depending on the scenario, real-time notification means the application support team may be able to either fix the issue before a user finds it or at least warn users upfront that a specific feature is unavailable and undergoing maintenance.
Similarly, for a subset of the repetitive issues, it may be possible to automate the sequence of manual steps and arrive at an automatic resolution. Implementing the appropriate monitoring solution, coupled with automation capabilities, makes it possible to lower costs by up to 30 percent.
Application monitoring strategies
As more enterprises adopt cloud platforms for hosting applications, it is important to have a strategy to monitor the applications. This enables the application management team to respond quickly to issues that arise.
Cloud platform vendors such as Microsoft provide cloud-native monitoring tools as part of the platform. In addition, vendors that traditionally provided tools for monitoring applications hosted on-premises have jumped onto the cloud bandwagon and now provide tools for monitoring in the cloud. As the focus of this article is on cloud-native monitoring, only the key advantages of the cloud-native monitoring tools are covered here:
Key advantages of cloud-native monitoring tools
No installation requirements – one can provision and configure the tools, and monitoring of the application starts immediately.
No additional licensing requirements – the cost of the monitoring tool is charged like any other Azure resource as part of the monthly cloud spend.
The DXC approach: Application Service Automation and AMS
DXC Technology’s approach to monitoring and automation, known as Application Service Automation (ASA), includes a modular framework that covers the entire closed-loop automation cycle from automatic detection to correction.
This framework is both platform- and technology-agnostic and can be adapted to various tool stacks based on customer needs. The solution described next adheres to DXC’s underlying ASA framework but is based fully on cloud-native tools provided by Microsoft Azure. The next section presents how Azure-native tools can be leveraged to provide an Azure-based variant of DXC’s ASA framework.
This solution focuses on providing application monitoring and adds some custom-built automation features. It provides a high-level guideline on how the cloud-native monitoring tools provided by the Azure platform can be used in combination to provide a successful monitoring solution. Here is an introduction to the tools that would be used as part of the solution.
Azure cloud-native monitoring tools
The Azure platform brings with it monitoring and orchestration tools that have been incorporated into this solution. Depicted below is a high-level reference view of the Azure-native tools used.
Figure 1. Microsoft Azure cloud-native monitoring tools
The in-scope applications shown in the diagram represent the portfolio of applications for which cloud-native monitoring using Azure-native tools needs to be enabled. The IT Service Management (ITSM) tool shown on the right-hand side represents the customer’s ticketing tool where the incident details are captured.
Azure Monitor is a comprehensive monitoring service native to the Azure cloud platform that can collect and analyze telemetry data from applications. It comprises several tools, each providing various types of support from an application-monitoring perspective. The main ones are:
Azure Monitor for VMs and Azure Monitor for Containers provide monitoring of the infrastructure components.
Azure Application Insights enables tracing of user requests as they travel through an application and can collect and capture telemetry emitted by applications. Azure Application Insights also provides features such as multistep web tests and URL ping tests, which enable synthetic monitoring capabilities.
Azure Monitor Alerts provide a mechanism to trigger an action based on the evaluation of a metric captured in Azure Application Insights and Azure Monitor.
Azure Logic Apps are a serverless compute capability available on the Azure platform and are used as part of the solution’s automated resolution strategy. DXC’s solution approach for monitoring applications based on Azure Application Insights is described below.
The in-scope application is first enabled to emit telemetry by adding the specific Application Insights software development kit (SDK) — either the Java SDK or the .NET SDK — to the application. Azure Application Insights has SDKs for Java, .NET, ASP.NET Core, Node.js, Python and JavaScript at the time this article is written.
The application is recompiled and deployed onto the Azure virtual machine (VM). Only the addition of the SDK is needed, and no further intrusive code changes are necessary. Once this is done, the application starts to emit telemetry, which is captured in Azure Application Insights.
Application Insights also supports monitoring based on “codeless attach” or “auto-instrumentation,” where applications can be monitored without the need to add the SDK. This approach is still evolving, and not all scenarios are supported yet. For the latest information on this approach, refer to the Microsoft documentation on the topic.
Figure 2 – Azure Application Insights dashboard
As shown in Figure 2, if the application emits exceptions, they are captured as well under the “Failed Requests” section. The logical next step is to propagate this error to the ITSM tool and ensure that a ticket is raised. This can be done either by
using an Azure Alert and then attaching an Action Group that calls a Logic App, or
directly by using a Logic App that polls the Application Insights logs for exceptions and then raises a ticket in the ITSM tool via the DXC CASA module (CASA is DXC’s custom-built module that helps integrate the different toolsets and is short for “Controller for ASA”). Here we take the latter approach since it provides additional flexibility.
Figure 3 shows the Azure Logic App querying Application Insights by using the built-in connector for Azure Application Insights. Once the exception details are retrieved from the query, the Logic App creates a ticket in the ITSM tool using connectors. As an example, a ServiceNow connector is shown here.
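The same exception data can also be pulled ad hoc from the command line, which is a handy way to validate the query a Logic App will run. The sketch below assumes the Azure CLI application-insights extension is installed and uses placeholder resource names; the Kusto query is illustrative rather than DXC’s exact query:

# One-time setup (assumption: the extension is not yet installed)
az extension add --name application-insights

# Pull the last hour of exceptions from the Application Insights resource
az monitor app-insights query \
  --app <application-insights-resource-name> \
  --resource-group <resource-group> \
  --analytics-query "exceptions | where timestamp > ago(1h) | project timestamp, type, outerMessage | take 20"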
Here, serverless computing elements such as Azure Logic Apps are used to integrate and orchestrate error reporting to the ITSM tool and also to notify relevant stakeholders. The use of serverless components such as Logic Apps for orchestration of error detection and resolution brings several advantages. Logic Apps tend to be low-maintenance since Azure handles patches and updates — unlike a virtual machine (VM), which would itself add management overhead costs. Logic Apps are based on the concept of “low-code, no-code” development and support rapid implementation of the required logic with low effort. Logic Apps also store the run history for each instance that has run and provide a historical visualization of each run, including the runtime values present at the time of the run.
Figure 4 – Azure Logic Apps – run history
If an error occurs in the application, it is captured in Azure Application Insights. By means of the orchestration built with the help of Azure Logic Apps and DXC CASA, this error is now propagated to the ITSM layer.
If we assume that this issue is a frequent and repetitive one, an evaluation is done to determine whether it has the potential to be automated. If the automation aspect is found feasible, that approach is implemented — ensuring that the issue can also be automatically fixed.
Figure 5 – Automated Resolution Using Logic Apps
The Logic Apps layer is used both as the resolution orchestrator and to implement the actual resolution flow. The resolution orchestrator receives the open tickets from the ITSM tool via the DXC CASA module and then delegates the resolution to the specific resolve flow. Azure Logic Apps are leveraged as the mechanism for implementing the specific automated resolution (the resolve flow) as well. The automation responsible for fixing the issue is both scenario-specific and application-specific, and therefore details about specific issues are not covered as part of this article. The DXC CASA module ensures that the ticket open and closure data is propagated to the data lake that powers the dashboard. This dashboard layer provides insights into the workings of the entire closed-loop automation via various graphical charts.
Once the application- and scenario-specific automation runs, the issue is fixed automatically. The automation mechanism also closes the previously created ticket once the fix has been applied, thus providing end-to-end automation of the issue.
Key benefits
Improving resiliency by ensuring a higher uptime for the application while lowering the MTTR
Improving customer experience by detecting and reacting to issues before the customer is affected
Leveraging automation to accelerate processes, reducing redundant manual efforts
Reducing cost, as the overall cost of a solution built using the described approach will be much lower than solutions based on any of the leading third-party tools.
Conclusion
This article covers application-monitoring strategies, cloud-native and third-party monitoring tools, and the relationship between application management services and monitoring tools. Its recommendation with regard to monitoring tools is to find the tool that best fits the business criticality and type of application. Microsoft Azure provides all the relevant building blocks required to weave together an end-to-end solution that includes application monitoring, automated ticket creation, and automated resolution. For more information, visit: DXC Application Service Automation.
Vikram Srivatsa is a senior architect and part of the Worldwide Applications Service Line at DXC Technology, based in Bengaluru, India. He has vast experience in architecting enterprise applications, and his current area of focus includes creation of cloud-native solutions for the enterprise.
Ashish Thakur is a product engineer for Application Services at DXC Technology, based in Noida, India. He is knowledgeable in cloud and service delivery automation using Azure-native tools and in architecting solutions and building proofs of concept.
This article is contributed. See the original author and article here.
Tour Microsoft Viva, the new employee experience platform that connects learning, insights, resources, and communication. Viva is a unique set of curated and AI-enriched experiences built on top of and integrated with the foundational services of Microsoft 365. Join Jeremy Chapman as he shares Viva’s capabilities, the underlying tech, and your core options for enabling and configuring Microsoft Viva as a team leader or admin.
Microsoft Viva’s 4 core modules:
Viva Topics— builds a knowledge system for your organization
Viva Connections— boosts employee engagement
Viva Learning— creates a central hub to discover learning content and build new skills
Viva Insights— recommends actions to help improve productivity and wellbeing
As an employee:
Get more time to focus and recharge — no matter where you’re working from.
Connect with others, stay informed and engage
Accelerate learning new skills, and balance your time at work.
At an organizational level:
Boost morale and retention and the overall success of your organization.
Foster a new culture of support for employees, so even when not physically together with colleagues, they feel connected to collective goals.
Employees can easily leverage the knowledge and connections of their work community to get things done and feel invested in their career growth.
We are Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at #Microsoft.
-Welcome to Microsoft Mechanics and our Essentials Series on the new employee experience platform, Microsoft Viva. In the next few minutes, I’ll introduce you to Viva’s capabilities, explain the underlying tech and your core options for enabling and configuring Microsoft Viva as a team leader or admin.
-You can think of Viva as a unique set of curated and AI-enriched experiences built on top of and integrated with the foundational services of Microsoft 365, which extends your existing investments. As an employee, Viva experiences are crafted to help you more easily connect with others and the information around you, stay informed and engaged, accelerate your learning of new skills and balance your time at work, giving you more time to focus and recharge no matter where you’re working from. At an organizational level, Viva can help foster a new culture of support for employees, so that even when they are not physically together with colleagues they always feel a strong connection to collective goals. They can more easily leverage the knowledge and connections of their work community to get things done and feel invested in their career growth and wellbeing, all of which can boost morale and retention and the overall success of your organization.
-Microsoft Viva experiences today are delivered across four core modules with more on the way. First, Viva Topics builds a knowledge system for your organization. With Viva Topics, you can discover knowledge connected to specific topics quickly in the context of your work. This helps you to easily connect the dots between people and information and take action. Now the underlying AI provides a useful baseline around topics and subject matter experts by organizing information into accessible knowledge within the apps and services you use every day. Think of it as a Wikipedia where AI does the first draft keeping it up-to-date, and then employees can collectively contribute their expertise to crowdsource knowledge across your organization, and knowledge managers can review and curate topic pages for accuracy.
-Next, Viva Connections is all about boosting employee engagement. Everyone from your everyday users, specific employee groups and departments through to your frontline workers. Expanding upon SharePoint home sites and news feeds as well as Yammer communities, it’s designed to give you a curated, company-branded experience that delivers personalized news, conversations, and other commonly used resources within the context of the apps you use every day, like Microsoft Teams.
-Then Viva Learning creates a central hub for individuals and teams to discover learning content and build new skills, all within the flow of their everyday work in Microsoft Teams. Viva Learning exposes recommended content by managers, experts, and peers to upskill employees. And they can also search for content that meets their specific needs as well as share and discuss learning, for example in chat. Content can come from learning providers, in-house custom materials stored in SharePoint, and your Learning Management Systems. Learning recommendations can also be surfaced directly inside of Viva Insights.
-Speaking of which, our fourth module, Viva Insights, leverages the MyAnalytics and Workplace Analytics foundation to deliver data-driven, privacy-protected insights and recommended actions to help individuals and teams improve productivity and wellbeing. So for individuals, Viva Insights offers actionable reminders for regular breaks and mindfulness activities in the flow of work, including integration with Headspace. Manager and leader insights provide visibility into work patterns that can lead to burnout and stress. And the new Glint dashboard helps leaders more accurately get a pulse of their organization by providing insight into the factors that impact engagement, so that they can take action.
-So, now that you know what the core experiences are, the good news is that, as part of the Microsoft 365 service, there’s little additional work required to implement Microsoft Viva for your organization. This is helped by the underlying AI together with Microsoft Graph, which provides signals including the activities, relationships, and content spanning your organization to automate the delivery of core aspects of the experience. And you can extend this even further with SharePoint Syntex to transform your content into knowledge with AI-driven forms processing and document understanding. That said, while AI gives you a great starting point, you have control over curating, customizing, and targeting these experiences.
-Now while we’ll go deeper on the specific things you can do during the rest of the series, here are just some of the highlights. Differentiated experiences across roles, departments and geographies are managed using groups in Azure Active Directory to tailor what people see. For example, in Viva Connections you can build a branded and personalized experience, as well as target information using adaptive cards to reach specific groups. Another example is with Viva Topics, where you can do several things, such as targeting the experience toward specific audiences. As a Microsoft 365 admin, you have full control to configure access to Viva modules and experiences. For example, you can customize Viva Learning to curate online training from your preferred content providers and your own company-developed content so that it’s available all in one place. And you can also make Viva modules discoverable in the context of Microsoft Teams.
-Next, there are granular options to set up Viva Topics for your organization. For example, you can crowdsource information where everyone can create and edit topic pages, or you can establish additional governance oversight by assigning knowledge managers to review and curate topics for accuracy and suitability. For Viva Learning, Knowledge admins can also be assigned to curate learning content, and they can also feature specific learning content for everyone in the organization. And for Viva Insights, you can create custom policies to tailor personal experiences, while manager and leader insights are available to licensed users of Workplace Analytics.
-Now as you would expect, privacy and security are built into all Microsoft Viva experiences. And any information protection and compliance controls you’ve configured in Microsoft 365 are respected when you access content. For example, you can only see the files and documents you have permissions to see in Viva Topics and the same is true for Viva Connections. For Viva Topics, you can exclude topics by name and even entire sites that you don’t want the service to crawl as it builds the knowledge index. Additionally, Viva is GDPR compliant, for example personal experiences from Viva Insights are only visible to individuals. For manager and leader insights, safeguards like aggregation, de-identification, and differential privacy are built-in to protect individual privacy. Also, in Viva Topics you can unlist yourself as a topic expert to prevent others from contacting you. And in Viva Learning, recommended learning is only visible to the target employee and the person who made the recommendation.
-As an employee experience platform, of course Microsoft Viva is extensible. As we’ve shown, Viva builds on Microsoft Teams and Microsoft 365 platforms to provide the organizing layer for integrated employee experiences. This gives you an extensibility layer that ensures faster and broader integration with your existing tools and systems. Viva Topics and Viva Connections extensibility leverage the same patterns and practices of the familiar SharePoint Framework to build out custom experiences. For example, you can build custom web parts for your home site and topic pages. And you can use adaptive cards for your Viva Connections dashboard to expose specific content and target the right employees with the right resources. Experiences are infused with a strong and growing ecosystem of partners, and it’s designed to integrate with your existing systems. For example, you’ll be able to use Viva Learning connectors to integrate with popular Learning Management Systems to surface assigned or mandatory trainings. Also, LMS integration and APIs will be available via Microsoft Graph, so you can integrate your own custom solutions.
-Viva is also built to integrate across learning resources. So for example, Viva Learning brings in content from Skillsoft, PluralSight, Coursera, edX, LinkedIn Learning, Microsoft Learn, and Microsoft 365 training with even more on the way. And Viva Insights can pull in data from existing apps and services, such as Zoom and Slack or SAP SuccessFactors.
-So that was a quick overview of Microsoft Viva and how it brings together communications, knowledge, learning, resources, and insights into an integrated experience that empowers you and your teams to be your best from anywhere you work and in the tools you use every day. This is part one of our series explaining Microsoft Viva. Keep checking back to aka.ms/VivaMechanics for deep dives on all the Viva modules, where we’ll show you how to configure and set up the experiences. And you can learn more at aka.ms/Viva. Thanks for watching.
This article is contributed. See the original author and article here.
In this installment of the weekly discussion revolving around the latest news and topics on Microsoft 365, hosts – Vesa Juvonen (Microsoft) | @vesajuvonen and Waldek Mastykarz (Microsoft) | @waldekm – are joined by Rhode Island, US-based MVP, professional archer, blogger, and presenter specializing in UI/UX, information architecture, and user adoption at TrnDigital, D’arce Hess | @DarceHess. Topics discussed in this session include: the path to IT and becoming an MVP, and reflections on UX/UI changes over the years and designing custom experiences that address business processes. In post-pandemic times, organizations will be circling back to optimize Microsoft Teams experiences while vendors will continue efforts to land the right extensibility stories. Microsoft Viva – with great power comes great responsibility – and ideas about prepping for Viva. Finally, thoughts on women in IT and on using what we learn in school in the field. Microsoft and the Community delivered 18 articles in the last week. This session was recorded on Monday, May 10, 2021.
Please remember to keep on providing us feedback on how we can help on this journey. We always welcome feedback on making the community more inclusive and diverse.
These videos and podcasts are published each week and are intended to be roughly 45 – 60 minutes in length. Please do give us feedback on this video and podcast series, and also do let us know if you have done something cool/useful so that we can cover it in the next weekly summary! The easiest way to let us know is to share your work on Twitter and add the hashtag #PnPWeekly. We are always on the lookout for refreshingly new content. “Sharing is caring!”
Here are all the links and people mentioned in this recording. Thanks, everyone for your contributions to the community!
Want to ask a question or engage with the community in general? Add a note in the Microsoft 365 PnP Community hub at https://aka.ms/m365pnp/community
Check out all the great community calls, SDKs, and tooling for Microsoft 365 from https://aka.ms/m365pnp
If you’d like to hear from a specific community member in an upcoming recording and/or have specific questions for Microsoft 365 engineering or visitors – please let us know. We will do our best to address your requests or questions.
This article is contributed. See the original author and article here.
The Microsoft 365 Patterns and Practices (PnP) Community April 2021 update is out with a summary of the latest guidance, samples, and solutions from Microsoft or from the community for the community. This article is a summary of all the different areas and topics around the community work we have done across the Microsoft 365 ecosystem during the past month. Thank you for being part of this success. Sharing is caring!
What is Microsoft 365 Community (PnP)
Microsoft 365 PnP is a nickname for Microsoft 365 platform community activities coordinated by numerous teams inside the Microsoft 365 engineering organizations. PnP is a community-driven open-source initiative where Microsoft and external community members share their learnings around implementation practices for Microsoft 365.
Topics range from Microsoft Viva and Microsoft Graph to Microsoft Teams, OneDrive, and SharePoint. Active development and contributions happen in GitHub through contributions to the samples, reusable components, and documentation for different areas. PnP is owned and coordinated by Microsoft engineering, but this is work done by the community for the community.
The initiative is facilitated by Microsoft, but we have multiple community members as part of the PnP team (see team details at the end of the article), and we are always looking to extend the PnP team with more community members. Notice that since this is an open-source community initiative, there are no SLAs for the support of the samples provided through GitHub. Obviously, all officially released components and libraries are under official support from Microsoft.
We also highly recommend subscribing to the Microsoft 365 Developer Podcast, a great show covering the latest developments in the Microsoft 365 platform from a developer and extensibility perspective.
Community Calls
There are numerous community calls covering different areas. All calls are recorded and published on either the Microsoft 365 Developer or the Microsoft 365 Community (PnP) YouTube channel. Recordings are typically released within 24 hours after the call. You can find a detailed agenda and links to the specific topics covered in blog posts on the Microsoft 365 developer blog when the videos are published.
SharePoint https://aka.ms/spdev-call – Covers the latest news, credits for all community contributors, and live demos, typically by SharePoint engineering.
M365 General Dev SIG https://aka.ms/spdev-sig-call – Bi-weekly – General topics on Microsoft 365 Dev from various aspects – Microsoft Teams, Microsoft Graph Toolkit, Provisioning, Automation, Scripting, Power Automate, Solution design
SharePoint Framework SIG https://aka.ms/spdev-spfx-call – Bi-weekly – Consists of topics around SharePoint Framework and JavaScript-based development in the Microsoft Teams and in SharePoint platform.
If you are interested in doing a live demo of your solution or sample in these calls, please do reach out to the PnP Team members (contacts later in this post) and they can help with the right setup. These are great opportunities to gain visibility – for example, for existing MVPs, for community members who would like to become MVPs in the future, or for any community member who’d like to share some of their learnings.
Microsoft 365 Community (PnP) Ecosystem in GitHub
Most of the community-driven repositories are in the PnP GitHub organization, as samples are not product-specific: they can contain numerous different solutions, or a solution can work in multiple different applications.
CLI Microsoft 365 – Cross-OS command line interface to manage Office 365 tenant settings
generator-spfx – Open-source Yeoman generator which extends the out-of-the-box Yeoman generator for SharePoint with additional capabilities
generator-teams – Open-source Microsoft Teams Yeoman generator – Bots, Messaging Extensions, Tabs, Connectors, Outgoing Web hooks and more
teams-dev-samples – Microsoft Teams targeted samples from community and Microsoft engineering
Sharing is Caring – Getting started on learning how to contribute and be active on the community from GitHub perspective.
pnpcore – The PnP Core SDK is an SDK designed to work against Microsoft 365 with a Microsoft Graph API-first approach
powershell – PnP PowerShell module, a PowerShell Core module targeted at Microsoft 365
pnpframework – PnP Framework is a .NET Standard 2.0 library targeting Microsoft 365, containing the PnP Provisioning engine and a ton of other useful extensions
What’s the supportability story around the community tooling and assets?
The following statements apply across all of the community-led and community-contributed samples and solutions, including samples, core component(s), and solutions like the SharePoint Starter Kit, yo teams, or PnP PowerShell. All Microsoft-released SDKs and tools are supported based on the specific tool policies.
PnP guidance and samples are created by Microsoft & by the Community
PnP guidance and samples are maintained by Microsoft & community
PnP uses supported and recommended techniques
PnP is an open-source initiative by the community – people who work on the initiative for the benefit of others have their normal day job as well
PnP is NOT a product and therefore it’s not supported by Premier Support or other official support channels
PnP is supported in similar ways as other open source projects done by Microsoft with support from the community by the community
There are numerous partners that utilize PnP within their solutions for customers. Support for this is provided by the Partner. When PnP material is used in deployments, we recommend being clear with your customer/deployment owner on the support model
Please see the specifics on the supportability on the tool, SDK or component repository or download page.
Microsoft 365 PnP team model
In April 2020 we announced our new Microsoft 365 PnP team model and grew the MVP team quite significantly. The PnP model exists to enable more efficient engagement between Microsoft engineering and community members. Let’s build things together. Your contributions and feedback are always welcome! During August, we also grew the team with 5 new members. The PnP Team coordinates and leads the different open-source and community efforts we execute in the Microsoft 365 platform.
We welcome all community members to get involved in the community and open-source efforts. Your input does matter!
Got feedback, suggestions or ideas? – Please let us know. Everything we do in this program is for your benefit. Feedback and ideas are more than welcome so that we can adjust the process to benefit you even more.
Area-specific updates
These are the different areas that are closely involved in the community work across the PnP initiative. Some are led and coordinated by engineering organizations, some are coordinated by the community and MVPs.
Microsoft Graph Toolkit
Microsoft Graph Toolkit is an engineering-led initiative, which works closely with the community on the open-source areas. The Microsoft Graph Toolkit is a collection of reusable, framework-agnostic web components and helpers for accessing and working with Microsoft Graph. The components are fully functional right out of the box, with built-in providers that authenticate with and fetch data from Microsoft Graph.
All the latest updates on the Microsoft Graph Toolkit are presented in our bi-weekly Microsoft 365 General Dev community call, including the latest community contributors.
Microsoft 365 Community docs
The community docs model was announced in April 2020, and it’s great to see the community’s interest in helping each other by providing new guidance on the non-dev areas. See more in the announcement on the SharePoint blog – Announcing the Microsoft 365 Community Docs. We do welcome contributions from the community – our objective is to build a valuable location for articles from Microsoft and the community together.
These are the updated SharePoint Framework samples which are available from the different repositories.
New sample react-teams-membership-updater by Nick Brown, which can be used to update the membership of a team based on the contents of a CSV file. It can be hosted in a SharePoint site, where a list can be defined for logging purposes, or run inside Teams as a personal app.
Updates to react-staffdirectory by Tristian O’Brien, a web part that shows the current user’s colleagues and allows the user to search the AD directory.
Updates to react-datatable by Chandani Prajapati, which provides an easy way to render a SharePoint custom list in a datatable view with all the necessary features.
Other updates to numerous SPFx web part and extension samples by our awesome community members!
How to find what’s relevant for you? Take advantage of our SharePoint Framework web part and extension sample galleries – these also include solutions that work in Microsoft Teams
These are samples that have been contributed to the community samples since the last summary. We do welcome all Microsoft Teams samples to this gallery. They can be implemented using any technology.
New sample msgext-bot-SPUploader by Sathya Raveendran and Varaprasad SSLN which is a document manager solution
New sample tab-activity-feed by Sébastien Levert (Microsoft) which shows how to build a solution leveraging the Teams Activity Feed API to send notifications to other users
Updates to tab-sso by Shama which shows how to create a tab for Teams that uses the built-in Single Sign-On (SSO) capabilities
Numerous updates on the existing samples provided by community and Microsoft
New sample regex-functions by Geetha Sivasaiam and P3N, a set of functions that perform regex matches on currency, percent & time formats
New sample Tagbox by Carmen Ysewijn (Qubix) showing a textbox that adds items into a dynamic list
New sample Timesheet by April Dunnam (Microsoft), a tablet-based canvas app that gives you a way to create and manage weekly timesheets
New sample calendar-component by April Dunnam (Microsoft) providing a re-usable component that allows you to display events in a calendar
The “Sharing Is Caring” initiative is targeted at learning the basics around making changes in Microsoft Docs, in GitHub, submitting pull requests to the PnP repositories, and in GitHub in general. Take advantage of this instructor-led training for learning how to contribute to docs or to open-source solutions. Everyone is welcome to learn how to get started on contributing to open-source docs or code!
See more from the guidance documentation – including all upcoming instructor-led sessions in which you can participate!
Different Microsoft 365 related open-source initiatives built together with the community
See exact details on the latest updates from the specific open-source project release notes. You can also follow up on the project updates from our community calls. There are numerous active projects which are releasing new versions with the community, even on a weekly basis. Get involved!
Microsoft Look Book – Discover the modern experiences you can build with SharePoint in Microsoft 365. Look book provides design examples for SharePoint Online which can be automatically provisioned to any tenant in the world. See more from https://lookbook.microsoft.com. This service is also provided as open-source solution sample from GitHub.
yo teams – Open-source Yeoman generator for Microsoft Teams extensibility. Supports creation of bots, messaging extensions, tabs (with SSO), connectors and outgoing Webhooks. See more from https://aka.ms/yoteams.
PnP Framework – .NET Standard 2.0 SDK containing the classic PnP Sites Core features for SharePoint Online. More around this package from GitHub.
PnP Core SDK – The PnP Core SDK is an SDK designed to work for Microsoft 365 with a Graph API-first approach. It provides a unified object model for working with SharePoint Online and Teams which is agnostic to the underlying APIs being called. See more around the SDK from documentation.
PnP PowerShell – PnP PowerShell is a .NET Core 3.1 / .NET Framework 4.6.1 based PowerShell Module providing over 400 cmdlets that work with Microsoft 365 environments and more specifically SharePoint Online and Microsoft Teams. See more details from documentation.
Reusable SharePoint Framework controls – Reusable controls for SharePoint Framework web part and extension development. Separate projects for React content controls and Property Pane controls for web parts. These controls are using Office UI Fabric React controls under the covers and they are SharePoint aware to increase the productivity of developers.
Office 365 CLI – Using the Office 365 CLI, you can manage your Microsoft Office 365 tenant and SharePoint Framework projects on any platform. See release notes for the latest updates.
PnPJs – PnPJs encapsulates SharePoint REST APIs and provides a fluent and easily usable interface for querying data from SharePoint sites. It’s a replacement for the already deprecated pnp-js-core library. See changelog for the latest updates.
PnP Provisioning Engine and PnP CSOM Core – PnP provisioning engine is part of the PnP CSOM extension. They encapsulate complex business driven operations behind easily usable API surface, which extends out-of-the-box CSOM NuGet packages. See changelog for the latest updates.
PnP PowerShell – PnP PowerShell cmdlets are an open-source complement to the SharePoint Online cmdlets. There are more than 300 different cmdlets to use, and you can use them to manage tenant settings or to manipulate actual SharePoint sites. See changelog for the latest updates.
PnP Modern Search solution – The PnP ‘Modern Search’ solution is a set of SharePoint Online modern Web Parts allowing SharePoint super users, webmasters and developers to create highly flexible and personalized search based experiences in minutes. See more details on the different supported capabilities from https://aka.ms/pnp-search.
Modernization tooling – All tools and guidance on helping you to transform your SharePoint to modern experiences from http://aka.ms/sppnp-modernize.
SharePoint Starter Kit v2 – Building modern experiences with Microsoft Teams flavors for SharePoint Online and SharePoint 2019 – reference solution in GitHub.
List formatting definitions – Community contributed samples around the column and view formatting in GitHub.
Site Designs and Site Scripts – Community contributed samples around SharePoint Site Designs and Site Scripts in GitHub.
DevOps tooling and scripts – Community contributed scripts and tooling automation around DevOps topics (CI/CD) in GitHub.
Teams provisioning solution – Set of open-source Azure Functions for Microsoft Teams provisioning. See more details from GitHub.
Documentation updates
Please see all the Microsoft 365 development documentation updates from the related documentation sets and repositories as listed below:
Microsoft 365 Dev and Microsoft 365 Community (PnP) YouTube video channels
You can find all Microsoft 365 related videos on our YouTube Channel at http://aka.ms/m365pnp-videos or at Microsoft 365 Dev. These channels already contain a significant amount of detailed training material, demo videos, and community call recordings.
Here are the new Microsoft demo or guidance videos released since the last monthly summary:
Here’s the list of active contributors (in alphabetical order) since the last release, in GitHub repositories or community channels. PnP is really about building tooling and knowledge together with the community for the community, so your contributions are highly valued across the Microsoft 365 customers, partners and obviously also at Microsoft.
Thank you for your assistance and contributions on behalf of the community. You are truly making a difference! If we missed someone, please let us know.
Companies: Here are the companies that supported the community initiative this month by allowing their employees to work for the benefit of others in the community. There were also people who contributed from other companies during the last month, but we did not get their logos and approval to show them in time for these communications. If you still want your logo in this month’s release, please let us know and share the logo with us. Thanks.
Microsoft people: Here’s the list of Microsoft people who have been closely involved with the PnP work during last month.
The PnP Team manages the PnP community work in GitHub and also coordinates different open-source projects around Microsoft 365 topics. PnP Team members have a significant impact on driving adoption of Microsoft 365 topics. They have shown their commitment to the open-source and community-driven work by constantly contributing to the benefit of the others in the community.
See all of the available community calls, tools, components and other assets from https://aka.ms/m365pnp. Get involved!
Got ideas or feedback on the topics to cover, additional partnerships, product feature capabilities? – let us know. Your input is important for us, so that we can support your journey in Microsoft 365.
This article is contributed. See the original author and article here.
Guest post by Lucas Liu, a Master’s student in Electrical & Computer Engineering at Duke University who specializes in Machine Learning & Federated Learning.
Throughout my time at university, I have built any number of scikit-learn, TensorFlow, or PyTorch machine learning models. I have developed and trained deep neural networks for applications ranging from cheetah footprint image classification to content similarity matching for bug tracking.
Many of these research projects have wound up more or less as Python scripts sitting on my PC, only existing locally. If a colleague or client wanted to use the product for themselves, this would involve a complicated series of file downloads, installation steps, dependency management, and more – a painful process even for researchers who are intimately familiar with the technologies being used.
How can we eliminate this barrier between our research and our users? Microsoft Azure can help us transform machine learning research into a refined & easy-to-use product.
Containerize
First, we containerize our research with Docker. We take our ML model and package it into a simple Flask app that serves predictions from POST requests with JSON payloads routed to `/predict`.
Here is an example Flask setup which predicts the probability of stroke with a scikit-learn ML model:
from flask import Flask, request
from flask.logging import create_logger
import logging
import pandas as pd
import joblib
app = Flask(__name__)
LOG = create_logger(app)
LOG.setLevel(logging.INFO)
@app.route("/")
def home():
html = "<h3>Stroke Prediction Home</h3>"
return html
@app.route("/predict", methods=['POST'])
def predict():
"""Performs an sklearn prediction for stroke likelihood"""
json_payload = request.json
LOG.info(f"JSON payload: {json_payload}")
inference_payload = pd.DataFrame(json_payload)
LOG.info(f"inference payload DataFrame: {inference_payload}")
prediction = clf.predict_proba(inference_payload)[0][0]
statement = f'Probability of patient stroke is {prediction: .4f}'
return statement
if __name__ == "__main__":
clf = joblib.load("stroke_prediction.joblib")
app.run(host='0.0.0.0', port=8080, debug=True)
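For example, with the app running locally (it listens on port 8080, per the snippet above), a prediction request might look like the following. The feature names and values here are purely illustrative; they must match the columns the model was actually trained on:

# Hypothetical payload: adjust the fields to your model's training features
curl -X POST http://localhost:8080/predict \
  -H "Content-Type: application/json" \
  -d '{"age": [67], "hypertension": [0], "heart_disease": [1], "avg_glucose_level": [228.69], "bmi": [36.6]}'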
We then specify a Dockerfile configuration, which will handle requirements installation and run our Flask app. We will expose our container on port 8080 (the port the Flask app listens on), and use Python slim, which is more lightweight.
A simple Docker configuration might look like this:
FROM python:3.8-slim
# Working Directory
WORKDIR /app
# Copy source code to working directory
COPY . app.py /app/
# Install packages from requirements.txt
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir --trusted-host pypi.python.org -r requirements.txt
# Expose port 8080
EXPOSE 8080
# Run app.py at container launch
CMD ["python", "app.py"]
Build the container in ACR (let’s call our container image ‘stroke-predict’):
az acr build --registry mlproject --image stroke-predict .
Now, users can more easily access our ML model by quickly pulling, building, and running our image, without having to worry about dependencies or model running details. This helps us avoid the age-old “But it runs on my machine!” problem.
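As a sketch of that flow (using the ‘mlproject’ registry from the build step above and assuming the default ‘latest’ tag; the container listens on port 8080 as in the Flask snippet):

# Authenticate Docker against the registry, then pull and run the image
az acr login --name mlproject
docker pull mlproject.azurecr.io/stroke-predict:latest
docker run --rm -p 8080:8080 mlproject.azurecr.io/stroke-predict:latest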
Deploy & Operationalize
What if, instead of building a container image, users could simply hit a URL and perform inferences (no image setup required)? Let’s use Azure Kubernetes Service to serve our container at a ready-to-go endpoint:
First, we can use an Azure Pipeline Template to help us define a k8s deployment and load balancer YAML.
Now, we create an AKS cluster. This example cluster will have a load balancer, and the ability to autoscale between 1 and 5 nodes.
az aks create --resource-group mlproject --name mlproject \
  --generate-ssh-keys \
  --node-count 3 \
  --vm-set-type VirtualMachineScaleSets \
  --load-balancer-sku standard \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
Next, merge the AKS cluster credentials into your kubectl configuration
az aks get-credentials --resource-group mlproject --name mlproject
Attach our container registry (ACR) to the cluster
az aks update --resource-group mlproject --name mlproject --attach-acr mlproject
Deploy Application on Cluster
kubectl apply -f k8s/deployment.yaml
Apply Load Balancer
kubectl apply -f k8s/loadbalancer.yaml
Find IP for Endpoint
kubectl get services
Now our users can simply query our endpoint and receive predictions!
The power of Azure goes beyond just initial deployment. We can adopt Continuous Deployment practices with GitHub Actions for Azure to automatically trigger a new build each time a new version of the model is released, ensuring that the service endpoint is always providing the most up-to-date model.
Additionally, Azure’s autoscale feature allows our service to automatically scale up or down to meet real usage needs, activating additional resources during heavy usage time. We can even expand our AKS to reach users around the world. The Azure portal also allows us to monitor our AKS cluster metrics, and gain insights on the health and performance of the service.
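For instance (a sketch using the same resource group and cluster names as above), the autoscaler bounds set at creation time can later be adjusted without recreating the cluster:

# Raise the autoscaler ceiling from 5 to 10 nodes
az aks update \
  --resource-group mlproject \
  --name mlproject \
  --update-cluster-autoscaler \
  --min-count 1 \
  --max-count 10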
Next Steps: MLOps
In this blog, we discuss how to transform your existing ML research models into a much more refined product with Azure’s Container Registry and Kubernetes Service, making it easy for users to access the fruits of your research.
However, if we start building with Azure from the very beginning, Azure’s MLOps offering provides an end-to-end solution for the ML life-cycle, from training and building the model initially, to continually retraining / redeploying an up-to-date service. Azure MLOps can even help us compare model performances & automatically detect data drift. This is just a small portion of what Azure’s MLOps can do –
This article is contributed. See the original author and article here.
Hello folks,
Today, there is increased scrutiny and demand for oversight on your data. Furthermore, the requirements dictated by laws and regulations present a growing set of challenges to your organisation.
For example, the ISO/IEC 27001:2013(E) Information technology — Security techniques — Information security management systems — Requirements states in section A.12.3.1
“Backup copies of information, software and system images shall be taken and tested regularly in accordance with an agreed backup policy.”
Therefore, if your enterprise is subject to that standard or is in the process of obtaining certification, you’ll need to prove to auditors that you have a process to validate compliance and remediate outliers. Azure Backup Center (ABC) gives you those capabilities.
**Please consult your compliance officer for information on the requirements your enterprise is subject to.
On top of providing a way to see all the Protectable Datasources that remain unprotected, ABC gives you a single location to define, assign, and track Azure policies for backup across all your supported resources in Azure, bringing your organization to your desired backup goal state through seamless integration with Azure Policy. Azure Policy allows you to track compliance against policies and create remediations when resources become “Non-compliant”.
Because ABC integrates so well with Azure Policy, you can define and assign different policies to different scopes. When going through the assignment process, you’ll be able to:
Select the scope.
pick a management group,
or select a specific subscription,
and optionally select a resource group.
There are multiple built-in policies to cover backups, and multiple effects defined in these policies. You need to decide what to do with your non-compliant resources; your compliance officer will help with this. Each possible response to a non-compliant resource is called an effect. The effect controls whether the non-compliant resource is logged, blocked, has data appended, or has a deployment associated with it for putting the resource back into a compliant state. That way you have the control to verify compliance without making any changes. At least you know where you stand.
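As a rough sketch of what assignment and compliance reporting look like from the Azure CLI (the assignment name, scope, and policy definition below are placeholders; pick the built-in backup policy your compliance officer agrees on):

# Find built-in policy definitions related to backup (display names vary)
az policy definition list --query "[?contains(displayName, 'Backup')].{name:name, displayName:displayName}" -o table

# Assign a chosen definition at subscription scope
# (remediation-capable deployIfNotExists policies also require a managed identity; see az policy assignment create --help)
az policy assignment create \
  --name enforce-vm-backup \
  --scope /subscriptions/<subscription-id> \
  --policy <policy-definition-name-or-id>

# Summarize compliance results once evaluation has run
az policy state summarize --policy-assignment enforce-vm-backup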
The code for these policies is stored in GitHub; you can fork that repo and modify the policies to make your own as you please.
There you go. The Azure Backup Center gives you the tools to protect your environment and to maintain and govern that protection across all your environments.
For more information, please see the documentation on docs.microsoft.com or using the links below.