
Welcome to our SC ’20 virtual events blog. In this post we round up all of Microsoft’s content at SC ’20 to help you find relevant sessions more quickly.


 


Note: There are two categories of SC ’20 registration. The virtual booth and the sessions linked from our booth are free to access, but some content, such as the Quantum workshop, requires a paid registration.


 


Technical content, workshops, and more


Visualizing High-Level Quantum Programs


Wednesday, Nov 11th | 12:00 – 12:20 PM ET


Complex quantum programs will require programming frameworks with many of the same features as classical software development, including tools to visualize the behavior of programs and diagnose any issues encountered. We present new visualization tools being added to the Microsoft Quantum Development Kit (QDK) that show the execution flow of a quantum program at each step of its execution. These tools allow interactive visualization of the control flow of a high-level quantum program by tracking and rendering individual execution paths through the program. We also present the capability to visualize the states of the quantum registers at each step of execution, which gives more insight into the detailed behavior of a given quantum algorithm. Finally, we discuss the extensibility of these tools, which allows developers to write custom visualizers to depict states and operations for their high-level quantum programs. We believe these tools have potential value for experienced developers and researchers, as well as for students and newcomers to the field who are looking to explore and understand quantum algorithms interactively.


 


Exotic Computation and System Technology: 2006, 2020 and 2035


Tuesday, Nov 17th | 11:45 AM – 1:15 PM ET


SC06 introduced the concept of “Exotic Technologies” (http://sc06.supercomputing.org/conference/exotic_technologies.php) to SC. The exotic system panel session predicted storage architectures for 2020. Each panelist proposed one set of technologies to define complete systems and their performance, the audience voted for the panelist with the most compelling case, and the winner was awarded a bottle of wine. The SC20 panel “closes the loop” on those predictions, reviews what actually happened, and continues the activity by predicting what will be available for computing systems in 2025, 2030, and 2035.

In the SC20 panel, we will open the SC06 “time capsule” that has been “buried” under the raised floor in the NERSC Oakland Computer Facility. We will take another audience vote on which prediction came closest to where we are today, and the panelist with the highest vote tally will win the well-aged wine.


 


Lessons Learned from Massively Parallel Model of Ventilator Splitting


Tuesday, Nov 17th | 1:30 – 2:00 PM ET


There has been a pressing need to expand ventilator capacity in response to the COVID-19 pandemic. To help address this need, a patient-specific airflow simulation was developed to support clinical decision-making for efficacious and safe splitting of a ventilator among two or more patients with varying lung compliances and tidal volume requirements. The computational model provides guidance on how to split a ventilator among patients with differing respiratory physiologies. Hundreds of millions of clinically relevant parameter combinations had to be simulated in a short time, a task that, driven by the dire circumstances, presented unique computational and research challenges. To support FDA submission, a large-scale and robust cloud instance was designed and deployed within 24 hours, and 800,000 compute hours were utilized in a 72-hour period.
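To put that burst of cloud capacity in perspective, here is a quick back-of-the-envelope calculation (our arithmetic, not a figure from the talk) showing the sustained core count those numbers imply:

```python
# Back-of-the-envelope scale check (our arithmetic, not from the talk):
# consuming 800,000 compute hours within a 72-hour window implies the
# average number of cores that had to run concurrently the whole time.
compute_hours = 800_000
wall_clock_hours = 72

avg_concurrent_cores = compute_hours / wall_clock_hours
print(f"~{avg_concurrent_cores:,.0f} cores running continuously for 72 hours")
# -> ~11,111 cores running continuously for 72 hours
```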


 


ZeRO: Memory Optimizations Toward Training Trillion Parameter Models


Tuesday, Nov 17th | 1:30 – 2:00 PM ET


Large deep learning models offer significant accuracy gains, but training billions of parameters is challenging. Existing solutions exhibit fundamental limitations in fitting these models into limited device memory while remaining efficient. Our solution uses the Zero Redundancy Optimizer (ZeRO) to optimize memory, vastly improving throughput while increasing model size. ZeRO eliminates memory redundancies, allowing us to scale the model size in proportion to the number of devices with sustained high efficiency. ZeRO can scale beyond 1 trillion parameters using today’s hardware.

Our implementation of ZeRO can train models of over 100 billion parameters on 400 GPUs with super-linear speedup, achieving 15 petaflops. This represents an 8x increase in model size and a 10x increase in achievable performance. ZeRO can train large models of up to 13 billion parameters without requiring model parallelism (which is harder for scientists to apply). Researchers have used ZeRO to create the world’s largest language model (17 billion parameters) with record-breaking accuracy.
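ZeRO ships in DeepSpeed, Microsoft’s open-source deep learning optimization library. As a rough illustration of how a training script opts in (the model, batch size, optimizer settings, and ZeRO stage below are illustrative placeholders, not the configuration used in the talk), a minimal sketch looks roughly like this:

```python
# Minimal sketch of enabling ZeRO through DeepSpeed (Microsoft's open-source
# implementation of ZeRO). All sizes and settings here are placeholders.
import torch
import deepspeed

# Stand-in for a large transformer; a real model would be far bigger.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 2,  # partition optimizer states and gradients across ranks
    },
}

# deepspeed.initialize wraps the model in an engine whose backward()/step()
# manage the partitioned optimizer states and gradients transparently.
# Scripts like this are normally launched across GPUs with the `deepspeed`
# launcher rather than run as plain Python.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

The ZeRO paper describes three progressive stages (partitioning optimizer states, then gradients, then parameters); each additional stage shrinks the per-device memory footprint further as more devices are added.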


 


Distributed Many-to-Many Protein Sequence Alignment using Sparse Matrices


Wednesday, Nov 18th | 4:00 – 4:30 PM ET


Identifying similar protein sequences is a core step in many computational biology pipelines, such as detection of homologous protein sequences, generation of protein similarity graphs for downstream analysis, functional annotation, and gene location. The performance and scalability of protein similarity searches have proven to be a bottleneck in many bioinformatics pipelines due to the growth of cheap and abundant sequencing data. This work presents new distributed-memory software, PASTIS, which relies on sparse matrix computations for efficient identification of possibly similar proteins. We use distributed sparse matrices for scalability and show that the sparse matrix infrastructure is a great fit for protein similarity searches when coupled with a fully-distributed dictionary of sequences that allows remote sequence requests to be fulfilled. Our algorithm incorporates the unique bias in amino acid sequence substitution in searches without altering the basic sparse matrix model and, in turn, achieves ideal scaling up to millions of protein sequences.
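PASTIS itself is distributed-memory software, but the sparse-matrix idea is easy to see at small scale. The sketch below is our own single-node illustration (not the PASTIS implementation): sequences are encoded as rows of a sparse sequence-by-k-mer matrix A, and multiplying A by its transpose counts the k-mers shared by every pair of sequences, yielding candidate pairs for alignment.

```python
# Rough single-node sketch of the sparse-matrix idea behind many-to-many
# candidate generation (NOT the PASTIS implementation): build a
# sequence-by-k-mer matrix A, then A @ A.T counts shared k-mers per pair.
from itertools import product
import numpy as np
from scipy.sparse import csr_matrix

K = 3
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # 20 amino acids
KMER_INDEX = {"".join(p): i for i, p in enumerate(product(ALPHABET, repeat=K))}

def kmer_matrix(sequences):
    """Return a CSR matrix with one row per sequence, one column per k-mer."""
    rows, cols, vals = [], [], []
    for r, seq in enumerate(sequences):
        for i in range(len(seq) - K + 1):
            kmer = seq[i:i + K]
            if kmer in KMER_INDEX:
                rows.append(r)
                cols.append(KMER_INDEX[kmer])
                vals.append(1)
    return csr_matrix((vals, (rows, cols)),
                      shape=(len(sequences), len(KMER_INDEX)))

seqs = ["MKTAYIAKQR", "MKTAYIAQQR", "GAVLIPFYWS"]
A = kmer_matrix(seqs)
overlap = (A @ A.T).toarray()      # shared k-mer counts for each pair
np.fill_diagonal(overlap, 0)       # ignore self-matches
candidates = np.argwhere(overlap > 0)
print(candidates)                  # pairs of sequences sharing at least one k-mer
```

In PASTIS the analogous matrices are distributed across nodes and combined with parallel sparse matrix computations, coupled with a fully-distributed sequence dictionary, which is what lets the approach scale to millions of proteins.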


 


HPC Agility in the Age of Uncertainty


Thursday, Nov 19th | 10:00 – 11:30 AM ET


In the disruption and uncertainty of the 2020 pandemic, challenges surfaced that caught some companies off guard. These came in many forms, including distributed teams without access to traditional workplaces, budget constraints, personnel reductions, shifting organizational focus, and uncertain company forecasts. The changes required a new approach to HPC and to all of the systems that enable workforces to function efficiently and productively. Resources needed to be agile: able to pivot quickly, scale, enable collaboration, and be accessed from virtually anywhere.

So how did the top companies respond? What solutions were most effective and what can be done to safeguard against future disruptions? This panel asks experts in various fields to share their experiences and ideas for the future of HPC.


 


Azure HPC SC ’20 Virtual Booth and Related Sessions


You can visit the Microsoft SC ’20 virtual booth. The on-demand sessions listed below are easy to find from our booth, along with additional links and content sources.


 


On-Demand Sessions at SC20


We’ve prepared a number of recorded sessions that share our perspectives on where HPC is heading alongside the growth of AI development and edge-based, real-time machine learning. Don’t miss out!


 


HPC, AI, and the Cloud


Steve Scott, Technical Fellow and CVP of Hardware Architecture for Microsoft Azure, opines on how the cloud has evolved to support massive computational models across HPC and AI workloads that, previously, were only possible with dedicated on-premises solutions or supercomputing centers.



Azure HPC Software Overview


Rob Futrick, Principal Program Manager for Azure HPC, gives an overview of the Azure HPC software platform, including Azure Batch and Azure CycleCloud, and demonstrates how to use Azure CycleCloud to create and use an autoscaling Slurm HPC cluster in minutes.



Running Quantum Programs at Scale through an Open-Source, Extensible Framework 


We present an addition to the Q# infrastructure that enables analysis and optimizations of quantum programs and adds the ability to bridge various backends to execute quantum programs. While integration of Q# with external libraries has been demonstrated earlier (e.g., Q# and NWChem [1]), it is our hope that the new addition of a Quantum Intermediate Representation (QIR) will enable the development of a broad ecosystem of software tools around the Q# language. As a case in point, we present the integration of the density-matrix based simulator backend DM-Sim [2] with Q# and the Microsoft Quantum Development Kit. Through the future development and extension of QIR analyses, transformation tools, and backends, we welcome user support and feedback in enhancing the Q# language ecosystem.



Cloud Supercomputing with Azure and AMD


In this session, Jason Zander (EVP, Microsoft) and Lisa Su (CEO, AMD), along with Azure HPC customers, talk about the progress being made with HPC in the cloud. Azure and AMD reflect on their strong partnership, highlight advancements being made in Azure HPC, and express mutual optimism about future technologies from AMD and the corresponding advancements in Azure HPC.



Accelerate your Innovation with AI at Scale


Nidhi Chappell, Head of Product and Engineering for Microsoft Azure HPC, shares a new approach to AI that is all about lowering barriers and accelerating AI innovation, enabling developers to re-imagine what’s possible and employees to achieve their potential with the apps and services they use every day.



Azure HPC Platform At-A-Glance


A thorough yet condensed walkthrough of the entire Azure HPC stack with Rob Futrick, Principal Program Manager for Azure HPC; Evan Burness, Principal Program Manager for Azure HPC; Ian Finder, Sr. Product Marketing Manager for Azure HPC; and Scott Jeschonek, Principal Program Manager for Azure Storage.



 


Microsoft in Intel Sessions


HPC in the Cloud – Bright Future


Nidhi Chappell, Head of Product and Engineering for Microsoft Azure HPC, joins a panel of leaders in cloud HPC to discuss the future of HPC, the opportunities the cloud can offer, and the challenges ahead.


Learn how NXP Semiconductors is planning to extend silicon design workloads to the Azure cloud using Intel-based virtual machines
When NXP Semiconductors designs silicon for demanding automotive, communication, and IoT use cases, NXP needs performance, security, and cost management in its HPC workloads. In this fireside chat, you will hear how NXP uses its own datacenters and plans to utilize Intel-based VMs in the Azure cloud to meet its shifting demands for Electronic Design Automation (EDA) workloads.


 


Intel sessions will be copied to Intel’s HPCwire microsite on Nov 17th.


 
