What does it mean to empower IT?

This article is contributed. See the original author and article here.

 


We’re excited to share a new animated video depicting common endpoint management tasks as they would unfold in a Microsoft Managed Desktop environment.



You’ve probably been waiting for this if you’ve seen our explainer videos on how we deliver on our service promises of fantastic device experiences and expert security monitoring.



This 4-minute “day in the life” narrative gives you a glimpse of how Microsoft Managed Desktop empowers IT pros to add value to core business objectives.



Whether you’re a seasoned endpoint manager or a curious business professional, we think you’ll enjoy getting to know Remy, an IT pro responsible for provisioning, maintaining, protecting, and supporting more than 5,000 user devices in an enterprise environment.



How will Remy address the urgent requirements of stakeholders throughout the organization? How does Microsoft Managed Desktop streamline the effort?



You’ll have to watch the video.


 


 


Careful observers may also notice a few Easter eggs. Okay, more than a few. The IT experts in our engineering organization were deeply involved at every stage of production, and suffice it to say, we had fun putting this together.



What was your favorite part of the narrative? Did you catch any Easter eggs? Are there other scenarios you’d like to see Remy resolve in a future narrative? Please leave comments so our engineers know you appreciate their creativity and sense of humor.

Enhancing your Microsoft Teams experience with the apps you need

This article is contributed. See the original author and article here.

Over the past year, the pandemic has dramatically changed the way we live and work. Organizations around the world adopted tools like Microsoft Teams to support work-from-home and hybrid work. Today, over 115 million people use Teams every day. And while video conferencing was a key driver for Teams’ rapid growth and adoption, our customers quickly realized the need to digitally transform beyond meetings to support a new way of…

The post Enhancing your Microsoft Teams experience with the apps you need appeared first on Microsoft 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.

Low code apps and bots in Microsoft Teams with Power Apps and Power Virtual Agents

This article is contributed. See the original author and article here.

Many Microsoft customers like Lumen Technologies and Etihad are embracing low code tools in Microsoft Teams to accelerate their digital transformation. Also, companies such as Office Depot and Suffolk turned to low code apps to quickly adapt to the changing realities of the world over the past year.


 


Today we are excited to announce the General Availability of the next evolution of low code tools in Teams with the new Power Apps and Power Virtual Agents apps to help our customers continue their low code journey. Also generally available today is Microsoft Dataverse for Teams (formerly known as Project Oakdale), a low code built-in data platform for Teams.



Build custom low code apps without leaving Teams
The new Power Apps app for Teams allows users to build and deploy custom apps without leaving Teams. With the simple, embedded graphical app studio, it’s never been easier to build low code apps for Teams. You can also harness immediate value from built-in Teams app templates like the Great Ideas or Inspections apps, which can be deployed in one click and customized easily.



 


Soon, you’ll be able to distribute apps to anyone in your organization. People on your team can collaborate to build apps and then share them with everyone in the organization by publishing to the Teams app store. Also, we’re excited to announce that the apps you build with Power Apps will be natively responsive across all devices for an enhanced end-to-end user experience.


 


Low code chatbots in Teams are here to help
Chatbots are a great way to improve how users get work done. They are especially useful in Teams because the conversational nature of these bots blends into the flow of the chats you have with your teammates – chatting with a bot is as simple as chatting with a colleague. Some common uses for these bots within an organization include IT helpdesk, HR self-service, and onboarding help. These bots help free up time, allowing people to focus on higher-value work.


 


Power Virtual Agents empowers people in your organization to build bots with a simple, no code graphical user interface. Starting today, with the now generally available Power Virtual Agents app for Teams, you can create bots within Teams with the embedded bot studio. You can then deploy these bots to teams or your entire organization with just a few clicks.



 


With this more approachable way to create and manage bots, subject matter experts can build and use bots to address common department-level needs that may not have been substantial enough to be addressed by busy IT departments. Customers with select Office 365 subscriptions that include Power Apps and Power Automate will have access to Power Virtual Agents’ capabilities for Teams at no additional cost.



Dataverse for Teams – A new low code data platform for Teams
The new Power Apps and Power Virtual Agents apps for Teams can be backed by a new relational datastore – Dataverse for Teams. Dataverse for Teams provides a subset of the full Microsoft Dataverse (formerly known as the Common Data Service) capabilities, but more than enough to get started building apps and bots for your organization. Dataverse for Teams also improves application lifecycle management, allowing customers to seamlessly upgrade to more robust offerings when their apps and data outgrow what comes with their Office 365 or Microsoft 365 licenses.
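If you are curious what a Dataverse-backed table looks like from code, here is a minimal, hypothetical Python sketch that reads rows from a Dataverse environment through its OData Web API. The environment URL, table (entity set) name, and access token are placeholders; inside Teams, makers would normally work through the embedded Power Apps studio rather than call the API directly.

```python
# Hypothetical sketch: read rows from a Dataverse table over the OData Web API.
# The environment URL, entity set name, and bearer token are placeholders; a real
# token would come from Azure AD (e.g., via MSAL) with Dataverse permissions.
import requests

ENVIRONMENT_URL = "https://yourorg.crm.dynamics.com"   # placeholder environment
ACCESS_TOKEN = "<azure-ad-bearer-token>"               # placeholder credential

def list_rows(entity_set: str, top: int = 10):
    """Return the first `top` rows of a Dataverse table."""
    response = requests.get(
        f"{ENVIRONMENT_URL}/api/data/v9.2/{entity_set}",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/json",
            "OData-MaxVersion": "4.0",
            "OData-Version": "4.0",
        },
        params={"$top": top},
    )
    response.raise_for_status()
    return response.json()["value"]

if __name__ == "__main__":
    # "cr123_ideas" is an illustrative custom table name, not one created by Teams.
    for row in list_rows("cr123_ideas"):
        print(row)
```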



 


Also, IT professionals and professional developers can now use the Azure API Management connector to connect to data stored in Azure as part of their Microsoft 365 and Office 365 licenses*.



Admin, security, and governance for your low code solutions in Teams
Microsoft Dataverse for Teams follows the existing data governance rules established by the Power Platform and enables access control in the Teams Admin Center like any other Teams feature. Within the Teams Admin Center, you can allow or block apps created by users at the individual, group, or org level – you can learn more here. The Power Platform admin center provides more detail on the use and control of Power Platform solutions, including monitoring dedicated capacity utilization.


 


Accelerate your transformation with Teams and Power Platform
We hope you are ready to start your Power Apps and Power Virtual Agents journey. For additional information on the content shared above and on the latest Teams platform news, please visit:



 

*Azure costs still apply

Azure HPC @Supercomputing '20

This article is contributed. See the original author and article here.

Welcome to our SC ’20 virtual events blog. In this blog, we share all of Microsoft’s related content at SC ’20 to help you discover relevant sessions more quickly.


 


Note: There are two categories of SC ’20 registration. The virtual booth and the sessions linked from our booth are free to access, but some of the content, such as the Quantum workshop, requires a paid registration.


 


Technical content, workshops, and more


Visualizing High-Level Quantum Programs


Wednesday, Nov 11th | 12 PM – 12:20 PM ET


Complex quantum programs will require programming frameworks with many of the same features as classical software development, including tools to visualize the behavior of programs and diagnose any issues encountered. We present new visualization tools being added to the Microsoft Quantum Development Kit (QDK) for visualizing the execution flow of a quantum program at each step during its execution. These tools allow for interactive visualization of the control flow of a high-level quantum program by tracking and rendering individual execution paths through the program. Also presented is the capability to visualize the states of the quantum registers at each step during the execution of a quantum program, which allows for more insight into the detailed behavior of a given quantum algorithm. We also discuss the extensibility of these tools, allowing developers to write custom visualizers to depict states and operations for their high-level quantum programs. We believe that these tools have potential value for experienced developers and researchers, as well as for students and newcomers to the field who are looking to explore and understand quantum algorithms interactively.


 


Exotic Computation and System Technology: 2006, 2020 and 2035


Tuesday, Nov 17th | 11:45 AM – 1:15 PM ET


SC06 introduced the concept of “Exotic Technologies” (http://sc06.supercomputing.org/conference/exotic_technologies.php) to SC. The exotic system panel session predicted storage architectures for 2020. Each panelist proposed a set of technologies to define complete systems and their performance. The audience voted for the panelist with the most compelling case, and a bottle of wine was awarded to the panelist with the most votes. The SC20 panel “closes the loop” on those predictions, reviews what actually happened, and proposes to continue the activity by predicting what will be available for computing systems in 2025, 2030 and 2035.

In the SC20 panel, we will open the SC06 “time capsule” that has been “buried” under the raised floor in the NERSC Oakland Computer Facility. We will take another vote by the audience for consensus on the prediction closest to where we are today. The panelist with the highest vote tally will win the good, aged wine.


 


Lessons Learned from Massively Parallel Model of Ventilator Splitting


Tuesday, Nov 17th | 1:30 – 2:00 PM ET


There has been a pressing need for an expansion of ventilator capacity in response to the recent COVID-19 pandemic. To help address this need, a patient-specific airflow simulation was developed to support clinical decision-making for the efficacious and safe splitting of a ventilator among two or more patients with varying lung compliances and tidal volume requirements. The computational model provides guidance on how to split a ventilator among patients with differing respiratory physiologies. There was a need to simulate hundreds of millions of different clinically relevant parameter combinations in a short time. This task, driven by the dire circumstances, presented unique computational and research challenges. In order to support FDA submission, a large-scale and robust cloud instance was designed and deployed within 24 hours, and 800,000 compute hours were utilized in a 72-hour period.


 


ZeRO: Memory Optimizations Toward Training Trillion Parameter Models


Tuesday, Nov 17th | 1:30 – 2:00 PM ET


Large deep learning models offer significant accuracy gains, but training billions of parameters is challenging. Existing solutions exhibit fundamental limitations in fitting these models into limited device memory while remaining efficient. Our solution uses the Zero Redundancy Optimizer (ZeRO) to optimize memory, vastly improving throughput while increasing model size. ZeRO eliminates memory redundancies, allowing us to scale the model size in proportion to the number of devices with sustained high efficiency. ZeRO can scale beyond 1 trillion parameters using today’s hardware.

Our implementation of ZeRO can train models of over 100B parameters on 400 GPUs with super-linear speedup, achieving 15 petaflops. This represents an 8x increase in model size and a 10x increase in achievable performance. ZeRO can train large models of up to 13B parameters without requiring model parallelism (which is harder for scientists to apply). Researchers have used ZeRO to create the world’s largest language model (17B parameters) with record-breaking accuracy.
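ZeRO is available in the open-source DeepSpeed library, so for readers curious what it looks like in practice, here is a minimal sketch of a PyTorch training loop that opts into ZeRO through a DeepSpeed configuration. The toy model, batch size, and ZeRO stage are illustrative only and are not the configuration behind the results quoted above; it assumes a CUDA-capable machine with DeepSpeed installed, and exact initialize arguments can vary slightly between DeepSpeed versions.

```python
# Minimal sketch: enabling ZeRO via DeepSpeed in a PyTorch training loop.
# The toy model, batch size, and ZeRO stage are illustrative; this is not the
# configuration used for the results described above. Assumes a CUDA device.
import torch
import deepspeed

model = torch.nn.Sequential(                # stand-in for a much larger network
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},       # partition optimizer states and gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps the model in an engine that manages ZeRO partitioning.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for step in range(10):
    x = torch.randn(8, 1024, device=engine.device, dtype=torch.half)
    loss = engine(x).float().pow(2).mean()   # dummy loss just to drive the loop
    engine.backward(loss)                    # gradients flow through ZeRO's partitions
    engine.step()
```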


 


Distributed Many-to-Many Protein Sequence Alignment using Sparse Matrices


Wednesday, Nov 18th | 4:00 – 4:30 PM ET


Identifying similar protein sequences is a core step in many computational biology pipelines such as detection of homologous protein sequences, generation of similarity protein graphs for downstream analysis, functional annotation and gene location. Performance and scalability of protein similarity searches have proven to be a bottleneck in many bioinformatics pipelines due to the increase in cheap and abundant sequencing data. This work presents new distributed-memory software, PASTIS. PASTIS relies on sparse matrix computations for efficient identification of possibly similar proteins. We use distributed sparse matrices for scalability and show that the sparse matrix infrastructure is a great fit for protein similarity searches when coupled with a fully-distributed dictionary of sequences that allows remote sequence requests to be fulfilled. Our algorithm incorporates the unique bias in amino acid sequence substitution in searches without altering the basic sparse matrix model, and in turn, achieves ideal scaling up to millions of protein sequences.
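PASTIS itself is distributed-memory software, but the core idea of casting many-to-many sequence comparison as sparse matrix computation can be illustrated on a single node. The sketch below is a simplified, serial analogy using SciPy: it builds a sparse sequence-by-k-mer matrix A and computes A·Aᵀ to find pairs of sequences that share k-mers. This is only a candidate-pair filter for illustration, not the PASTIS pipeline or a real aligner.

```python
# Simplified, single-node analogy of the sparse-matrix idea behind many-to-many
# sequence comparison: build a sparse sequences x k-mers matrix A, then A @ A.T
# gives, for each pair of sequences, a count of shared k-mer hits. This is only
# a candidate-pair filter, not the distributed PASTIS pipeline or an aligner.
import numpy as np
from scipy.sparse import csr_matrix

def kmer_matrix(sequences, k=3):
    """Return a CSR matrix with one row per sequence and one column per k-mer."""
    vocab = {}
    rows, cols, vals = [], [], []
    for i, seq in enumerate(sequences):
        for j in range(len(seq) - k + 1):
            kmer = seq[j:j + k]
            col = vocab.setdefault(kmer, len(vocab))
            rows.append(i)
            cols.append(col)
            vals.append(1)
    return csr_matrix((vals, (rows, cols)), shape=(len(sequences), len(vocab)))

sequences = ["MKTAYIAKQR", "MKTAYLAKQR", "GAVLIPFYWS"]   # toy protein sequences
A = kmer_matrix(sequences, k=3)
shared = (A @ A.T).toarray()          # shared[i, j] ~ number of common k-mer hits
np.fill_diagonal(shared, 0)           # ignore self-comparisons
print(shared)                         # sequences 0 and 1 share many k-mers
```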


 


HPC Agility in the Age of Uncertainty


Thursday, Nov 19th | 10:00 – 11:30 AM ET


In the disruption and uncertainty of the 2020 pandemic, challenges surfaced that caught some companies off guard. These came in many forms including distributed teams without access to traditional workplaces, budget constraints, personnel reductions, organizational focus and company forecasts. The changes required a new approach to HPC and all systems that enable and optimize workforces to function efficiently and productively. Resources needed to be agile, have the ability to pivot quickly, scale, enable collaboration and be accessed from virtually anywhere.

So how did the top companies respond? What solutions were most effective and what can be done to safeguard against future disruptions? This panel asks experts in various fields to share their experiences and ideas for the future of HPC.


 


Azure HPC SC ’20 Virtual Booth and related Sessions


You can access the Microsoft SC ’20 virtual booth. The on-demand sessions listed below can be easily found from our booth, along with additional links and content sources.


 


On-Demand Sessions at SC20


We’ve prepared a number of recorded sessions to share our perspective on where HPC is heading alongside the rise of AI development and edge-based, real-time machine learning. Don’t miss out!


 


HPC, AI, and the Cloud


Steve Scott, Technical Fellow and CVP of Hardware Architecture for Microsoft Azure, opines on how the cloud has evolved to support the massive computational models across HPC and AI workloads that were previously only possible with dedicated on-premises solutions or supercomputing centers.



Azure HPC Software Overview


Rob Futrick, Principal Program Manager, Azure HPC gives an overview of the Azure HPC software platform, including Azure Batch and Azure CycleCloud, and demonstrates how to use Azure CycleCloud to create and use an autoscaling Slurm HPC cluster in minutes.



Running Quantum Programs at Scale through an Open-Source, Extensible Framework 


We present an addition to the Q# infrastructure that enables analysis and optimizations of quantum programs and adds the ability to bridge various backends to execute quantum programs. While integration of Q# with external libraries has been demonstrated earlier (e.g., Q# and NWChem [1]), it is our hope that the new addition of a Quantum Intermediate Representation (QIR) will enable the development of a broad ecosystem of software tools around the Q# language. As a case in point, we present the integration of the density-matrix based simulator backend DM-Sim [2] with Q# and the Microsoft Quantum Development Kit. Through the future development and extension of QIR analyses, transformation tools, and backends, we welcome user support and feedback in enhancing the Q# language ecosystem.



Cloud Supercomputing with Azure and AMD


In this session, Jason Zander (EVP, Microsoft), Lisa Su (CEO, AMD), and Azure HPC customers talk about the progress being made with HPC in the cloud. Azure and AMD reflect on their strong partnership, highlight advancements being made in Azure HPC, and express mutual optimism about future technologies from AMD and corresponding advancements in Azure HPC.



Accelerate your Innovation with AI at Scale


Nidhi Chappell, Head of Product and Engineering at Microsoft Azure HPC, shares a new approach to AI that is all about lowering barriers and accelerating AI innovation, enabling developers to re-imagine what’s possible and employees to achieve their potential with the apps and services they use every day.



Azure HPC Platform At-A-Glance


A thorough yet condensed walkthrough of the entire Azure HPC stack with Rob Futrick, Principal Program Manager for Azure HPC, Evan Burness, Principal Program Manager for Azure HPC, Ian Finder, Sr. Product Marketing Manager for Azure HPC, and Scott Jeschonek, Principal Program Manager for Azure Storage.



 


Microsoft in Intel Sessions


HPC in the Cloud – Bright Future


Nidhi Chappell, Head of Product and Engineering at Microsoft Azure HPC, joins a panel of leaders in cloud HPC to discuss the future of HPC, the opportunities the cloud can offer, and the challenges ahead.


Learn how NXP Semiconductors is planning to extend silicon design workloads to the Azure cloud using Intel-based virtual machines
When NXP Semiconductors designs silicon for demanding automotive, communication, and IoT use cases, NXP needs performance, security, and cost management in its HPC workloads. In this fireside chat, you will hear how NXP uses its own datacenters and plans to use Intel-based VMs in the Azure cloud to meet its shifting demands for Electronic Design Automation (EDA) workloads.


 


Intel sessions will be copied to their HPCwire microsite on Nov 17th.


 

Azure HPC Videos

This article is contributed. See the original author and article here.

Strategic investments bolster versatility and relevance of Azure HPC for Microsoft


 


As I look at SC20 quickly approaching on my calendar, I cannot help but reflect on what has happened since the last one. And there is a lot to think about. So many changes to our lifestyle, our environment, and our safety. But despite everything that has happened over the last year, we continue to be amazed by our customers and the people who continually put their best foot forward to tackle and solve complex problems across all industries.


 


Overview


You’ll hear us refer to Azure HPC as purpose-built. What we mean is that, instead of figuring out how to apply commodity hardware to complex HPC & AI workloads, Azure’s formula for HPC & AI customer success starts with infrastructure that is purpose-built for scalable HPC & AI workloads. For example, our solutions can deliver a 10x or higher performance advantage compared to products elsewhere on the public cloud, which gives us a very strong foundation of performance and scale leadership. We ensure this level of performance by aligning with partners like Intel, so our HPC instances provide the most powerful architecture available to accelerate both HPC and AI workloads. Our team of HPC experts then adds a broad set of Azure technologies to leverage that infrastructure in an agile and secure manner. That’s what leads to big impact for customers. These are the pillars of HPC & AI on Azure. Some great examples are customers who are modelling complex problems like drug discovery, weather simulation, crash test analysis, and state-of-the-art AI training.


 


To better understand the vision and strategy that led to Microsoft’s HPC investments, check out the video links below:


 


Azure HPC Overview, featuring Nidhi Chappell, Head of Product and Engineering at Azure HPC, summarizes Microsoft’s recent investments in building a comprehensive HPC cloud platform.


 


Azure HPC Vision, featuring Andrew Jones, Lead, Future HPC & AI Capabilities at Microsoft, outlines how we see the future of HPC and its convergence with AI in the cloud.


 


Industry Alignment


Many people will adapt their behaviors to capitalize on new capabilities offered by technology advancements. Oftentimes, the technologies themselves will establish new lifestyle patterns and practices. But in HPC, it is the other way around. Technologies must clearly identify, align with, and support the traditional behaviors of HPC engineers and service managers in order to be deemed relevant or useful. This is one reason the ramp to HPC in the cloud has been slower than some expected: cloud platforms have taken a long time to be engineered in line and in step with how veteran HPC organizations expect to work.


 


These videos illustrate a much deeper understanding and alignment of Azure HPC relevance to specific application workloads within known industry verticals:


 


Financial Services


Link to video: How to bring your risk workloads to Azure


Presented by: Stephen Richardson, EMEA HPC & AI Technology Specialist at Microsoft, and Greg Ulepic, FSI Lead for Risk Workloads at Microsoft.


Summary: The pandemic has not just hit our health, but also our pockets. Financial institutions need to reinvent how they help their clients mitigate and manage risk across their portfolios. To do that, many institutions are examining both moving their workloads to the cloud and setting up cloud-native architectures to operate on moving forward. This video illustrates how the financial services marketplace is evolving toward cloud-driven solutions, and how Azure is well-equipped to support that model.


 


Autonomous Vehicle Development


Link to video: Accelerating Autonomous Vehicle Development


Presented by: Kurt Niebuhr, Principal Program Manager for Azure HPC


Summary: There are few areas in modern digital transformation with more buzz than self-driving cars. Given the minuscule room for error when it comes to driving safely, the very notion of self-driving cars can easily raise concerns even in the most open mind. In this video, Kurt Niebuhr discusses how Microsoft Azure provides an end-to-end autonomous driving development and engineering solution with Azure HPC, giving carmakers an excellent understanding of how they need to plan for the next phases of automobile development and manufacturing.


 


Academic Research


Link to video: Building a cloud-based HPC environment for the research community


Presented by: Tim Carroll, Director of HPC and AI for Research for Microsoft Azure


Summary: The global community of researchers has come into greater spotlight recently with the rise in large-scale health and environmental issues. When data scientists and researchers are encumbered by IT limitations, they are denied their potential to innovate and discover new remedies. This video illustrates how Microsoft is enabling universities and research institutions around the world to massively speed up their processing tasks without building out their own data centers, thereby removing barriers for scientists and researchers everywhere.


 


Energy


Link to video: Empowering exploration and production with Azure


Presented by: Hussein Shel, Principal Program Manager for Azure Global Energy


Summary: One of the biggest areas of digital transformation in energy is optimizing how companies explore and discover new untapped reservoirs of fossil fuels. Additionally, the energy industry as a whole has been put under a giant microscope in recent decades, with more voices calling for a phase-out of fossil fuel production and a transition to clean and renewable energy. These are all initiatives that can be enabled by Azure HPC. In this video, Hussein Shel discusses some of Azure HPC’s offerings in exploration and production for the oil and gas industry.


 


Semiconductor Engineering


Link to video: Azure accelerates cloud transformation for the Semiconductor Industry


Presented by: Prashant Varshney, Sr. Director, Product Management for Azure Engineering


Summary:  The semiconductor industry is the best example of technology dependence on itself.  The pace at which micro-processing architectures increase in density and capability has direct impact on the viability of entire product and service ecosystems.  As such, silicon engineering & manufacturing companies must be highly capable, yet highly nimble, with their electronic design automation (EDA) to remain competitive. In this video, Prashant Varshney outlines how Azure HPC has been built with some of these considerations in mind, touching on how semiconductor companies can build the best chips by using Azure for their EDA processes.


 


Manufacturing


Link to video: Microsoft Azure HPC discrete Manufacturing


Presented by: Karl Podesta, HPC Technical Specialist, AzureCAT


Summary: Manufacturers are being asked to visualize products and product performance in real time, from anywhere. They also need deeper insight into what is happening with physical products, using tools like digital twins to understand how those products might behave and to optimize them. In this session we discuss how customers are leveraging Azure HPC to solve these types of challenges in the manufacturing industry.


 


Guides & Demos


The Azure HPC stack is comprehensive and full featured: purpose-built infrastructure, high-performing storage options, fast and secure networking, workload orchestration services, and data science tools integrated across cloud, hybrid, and the edge make Azure a force to be reckoned with. It is important to see where your opportunity fits in this stack and to understand what Azure can do for your HPC needs.


 


The following videos are excellent deep dives into some of the infrastructure and services well-aligned for HPC use case scenarios:


 


Azure HPC Software Overview


Presented by: Rob Futrick, Principal Program Manager, Azure HPC


Summary: This is an excellent summary of all the HPC-oriented services residing on Azure, and how you can take advantage of them for your needs.


 


Azure Batch with Containers


Presented by: Mike Kiernan, Sr. Program Manager for Azure HPC


Summary: Learn how you can use the Azure Batch service with containers to better orchestrate your workloads in the cloud.
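As a rough companion to the session, here is a hedged Python sketch of what submitting a containerized task with the azure-batch SDK can look like. The account URL, key, pool sizing, marketplace image, and container image are placeholders, and exact model and parameter names can vary slightly between SDK versions.

```python
# Minimal sketch: run a containerized task with the azure-batch Python SDK.
# Account URL/key, pool sizing, and the container image are placeholders, and
# exact model/parameter names can vary slightly between SDK versions.
from azure.batch import BatchServiceClient
from azure.batch import models as batchmodels
from azure.batch.batch_auth import SharedKeyCredentials

credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://mybatchaccount.westus2.batch.azure.com"
)

# Pool of VMs whose node agents can pull and run the container image.
pool = batchmodels.PoolAddParameter(
    id="hpc-container-pool",
    vm_size="Standard_D2s_v3",
    target_dedicated_nodes=2,
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="microsoft-azure-batch",
            offer="ubuntu-server-container",
            sku="20-04-lts",
            version="latest",
        ),
        node_agent_sku_id="batch.node.ubuntu 20.04",
        container_configuration=batchmodels.ContainerConfiguration(
            container_image_names=["myregistry.azurecr.io/solver:latest"],
        ),
    ),
)
client.pool.add(pool)

client.job.add(batchmodels.JobAddParameter(
    id="solver-job",
    pool_info=batchmodels.PoolInformation(pool_id="hpc-container-pool"),
))

# Each task runs its command line inside the specified container image.
client.task.add("solver-job", batchmodels.TaskAddParameter(
    id="task-001",
    command_line="/bin/sh -c 'solver --input case01.dat'",
    container_settings=batchmodels.TaskContainerSettings(
        image_name="myregistry.azurecr.io/solver:latest",
    ),
))
```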


 


Deploy an end-to-end environment with Azure CycleCloud


Presented by: Cormac Garvey, Sr. Program Manager for Azure HPC


Summary: This video outlines the basics of standing up an end-to-end HPC environment using Azure CycleCloud.


 


Storage options for HPC Workloads


Presented by: Scott Jeschonek, Cloud Storage Specialist at Microsoft Azure


Summary: This video provides an overview of a variety of storage options for running HPC workloads, including Blob Storage, Azure NetApp Files, Azure HPC Cache, and others.
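To make the Blob Storage option concrete, here is a minimal sketch using the azure-storage-blob (v12) Python SDK to stage files between a workstation and a storage account. The connection string, container name, and file paths are placeholders.

```python
# Minimal sketch: staging HPC input/output files in Azure Blob Storage with the
# azure-storage-blob (v12) Python SDK. The connection string, container name,
# and file paths are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("hpc-results")

# Upload a result file produced by a compute node.
with open("run01/output.dat", "rb") as data:
    container.upload_blob(name="run01/output.dat", data=data, overwrite=True)

# Download an input dataset before launching the next job.
with open("run02/input.dat", "wb") as out:
    out.write(container.download_blob("run02/input.dat").readall())
```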


 


Things are shaping up to be a very productive 2021, and we are tremendously excited and honored to participate!


 


#azurehpc