NVIDIA Ships World’s Most Advanced AI System — NVIDIA DGX A100 — to Fight COVID-19; Third-Generation DGX Packs Record 5 Petaflops of AI Performance

Training, Inference, Data Analytics Unified on One Platform;
Each System Configurable from One to 56 Independent GPUs to Deliver 
Elastic, Software-Defined Data Center Infrastructure

SANTA CLARA, Calif., May 14, 2020 (GLOBE NEWSWIRE) -- NVIDIA today unveiled NVIDIA DGX™ A100, the third generation of the world’s most advanced AI system, delivering 5 petaflops of AI performance and consolidating the power and capabilities of an entire data center into a single flexible platform for the first time.

Immediately available, DGX A100 systems have begun shipping worldwide, with the first order going to the U.S. Department of Energy’s (DOE) Argonne National Laboratory, which will use the cluster’s AI and computing power to better understand and fight COVID-19.

“NVIDIA DGX A100 is the ultimate instrument for advancing AI,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA DGX is the first AI system built for the end-to-end machine learning workflow — from data analytics to training to inference. And with the giant performance leap of the new DGX, machine learning engineers can stay ahead of the exponentially growing size of AI models and data.”

DGX A100 systems integrate eight of the new NVIDIA A100 Tensor Core GPUs, providing 320GB of GPU memory for training on the largest AI datasets, and the latest high-speed NVIDIA Mellanox® HDR 200Gbps interconnects.

Multiple smaller workloads can be accelerated by partitioning the DGX A100 into as many as 56 instances per system using the A100 Multi-Instance GPU (MIG) feature. Combining these capabilities enables enterprises to optimize computing power and resources on demand to accelerate diverse workloads, including data analytics, training and inference, on a single, fully integrated, software-defined platform.
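The 56-instance figure follows from simple arithmetic: each A100 GPU can be split into up to seven MIG instances, and a DGX A100 carries eight GPUs. A minimal sketch of that calculation (the per-GPU instance limit comes from NVIDIA's public MIG documentation, not from this release):

```python
# Illustrative arithmetic behind the "56 instances per system" figure.
# Constants are from public DGX A100 / MIG documentation, assumed here.
GPUS_PER_SYSTEM = 8            # A100 Tensor Core GPUs in one DGX A100
MAX_MIG_INSTANCES_PER_GPU = 7  # MIG supports up to 7 instances per A100

max_instances = GPUS_PER_SYSTEM * MAX_MIG_INSTANCES_PER_GPU
print(max_instances)  # 56 independent GPU instances per system
```

In practice the partitioning itself is done with NVIDIA's MIG tooling on the system; the sketch above only shows where the headline number comes from.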

Immediate DGX A100 Adoption, Support
A number of the world’s largest companies, service providers and government agencies have placed initial orders for the DGX A100, with the first systems delivered to Argonne earlier this month.

“We’re using America’s most powerful supercomputers in the fight against COVID-19, running AI models and simulations on the latest technology available, like the NVIDIA DGX A100,” said Rick Stevens, associate laboratory director for Computing, Environment and Life Sciences at Argonne. “The compute power of the new DGX A100 systems coming to Argonne will help researchers explore treatments and vaccines and study the spread of the virus, enabling scientists to do years’ worth of AI-accelerated work in months or days.”

The University of Florida will be the first institution of higher learning in the U.S. to receive DGX A100 systems, which it will deploy to infuse AI across its entire curriculum to foster an AI-enabled workforce.

“The University of Florida has a vision to be a national leader in artificial intelligence, and NVIDIA is an incredibly valuable partner in our quest to do so,” said University of Florida President Kent Fuchs. “Across disciplines, our new NVIDIA DGX A100 systems will position our researchers to solve some of our world’s most pressing challenges and equip an entire generation of students with the skills that will revolutionize the future workforce.”

Among other early adopters are:

  • The Center for Biomedical AI — at the University Medical Center Hamburg-Eppendorf, Germany — will leverage DGX A100 to advance clinical decision support and process optimization.
  • Chulalongkorn University — Thailand’s top research-intensive university — will use DGX A100 to accelerate its pioneering research in areas such as Thai natural language processing, automatic speech recognition, computer vision and medical imaging.
  • Element AI — a Montreal-based developer of AI-powered solutions and services — is deploying DGX A100 to accelerate performance and feature optimization for its Orkestrator GPU scheduler to meet growing AI training and application demands.
  • German Research Center for Artificial Intelligence (DFKI) will use the DGX A100 systems to further accelerate its research on new deep learning methods and their explainability while significantly reducing space and energy consumption.
  • Harrison.ai — a Sydney-based healthcare AI company — will deploy Australia’s first DGX A100 systems to accelerate the development of its AI-as-medical-device.
  • The UAE Artificial Intelligence Office — first in the Middle East to deploy the new DGX A100 — is building a national infrastructure to accelerate AI research, development and adoption across the public and private sectors.
  • VinAI Research — Vietnam’s leading AI research lab, based in Hanoi and Ho Chi Minh City — will use DGX A100 to conduct high-impact research and accelerate the application of AI.

Thousands of previous-generation DGX systems are in use around the globe by a wide range of public and private organizations. Among them are some of the world’s leading businesses, including automakers, healthcare providers, retailers, financial institutions and logistics companies that are pushing AI forward across their industries.

NVIDIA Builds Next-Gen 700 Petaflops DGX SuperPOD
NVIDIA also revealed its next-generation DGX SuperPOD, a cluster of 140 DGX A100 systems capable of achieving 700 petaflops of AI computing power. Combining 140 DGX A100 systems with Mellanox HDR 200Gbps InfiniBand interconnects, NVIDIA built the DGX SuperPOD AI supercomputer for internal research in areas such as conversational AI, genomics and autonomous driving.

The cluster is one of the world’s fastest AI supercomputers — achieving a level of performance that previously required thousands of servers. The enterprise-ready architecture and performance of the DGX A100 enabled NVIDIA to build the system in less than a month, rather than the months or years of planning and procurement of specialized components previously required to deliver these supercomputing capabilities.

To help customers build their own A100-powered data centers, NVIDIA has released a new DGX SuperPOD reference architecture. It gives customers a blueprint that follows the same design principles and best practices NVIDIA used to build its DGX A100-based AI supercomputing cluster.

DGXpert Program, DGX-Ready Software
NVIDIA also launched the NVIDIA DGXpert program, which brings together DGX customers with the company’s AI experts; and the NVIDIA DGX-Ready Software program, which helps customers take advantage of certified, enterprise-grade software for AI workflows.

DGXperts are AI-fluent specialists who can help guide clients on AI deployments, from planning to implementation to ongoing optimization. These individuals can help DGX A100 customers build and maintain state-of-the-art AI infrastructure.

The NVIDIA DGX-Ready Software program helps customers quickly identify and take advantage of NVIDIA-tested third-party MLOps software that can help them increase data science productivity, accelerate AI workflows and improve accessibility and utilization of AI infrastructure. The first program partners certified by NVIDIA are Allegro AI, cnvrg.io, Core Scientific, Domino Data Lab, Iguazio and Paperspace.

DGX A100 Technical Specifications

  • Eight NVIDIA A100 Tensor Core GPUs, delivering 5 petaflops of AI performance, with 320GB of total GPU memory and 12.4TB per second of aggregate bandwidth.
  • Six NVIDIA NVSwitch™ interconnect fabrics with third-generation NVIDIA NVLink® technology for 4.8TB per second of bi-directional bandwidth.
  • Nine Mellanox ConnectX-6 HDR 200Gb per second network interfaces, offering a total of 3.6Tb per second of bi-directional bandwidth.
  • Mellanox In-Network Computing and network acceleration engines such as RDMA, GPUDirect® and Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ to enable the highest performance and scalability.
  • 15TB Gen4 NVMe internal storage, which is 2x faster than Gen3 NVMe SSDs.
  • NVIDIA DGX software stack, which includes optimized software for AI and data science workloads, delivering maximized performance and enabling enterprises to achieve a faster return on their investment in AI infrastructure.
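The system-level totals in the list above can be derived from per-component figures. A sketch under stated assumptions — the per-GPU numbers (40GB of HBM2 and roughly 1.555TB/s of memory bandwidth per A100) come from NVIDIA's public A100 spec sheet rather than from this release:

```python
# Sketch: deriving DGX A100 system totals from per-component figures.
# Per-GPU and per-NIC values are public spec-sheet numbers, assumed here.
GPUS = 8
MEM_PER_GPU_GB = 40        # HBM2 memory per A100 (40GB model)
BW_PER_GPU_TBPS = 1.555    # memory bandwidth per A100, TB/s
NICS = 9
NIC_RATE_TBPS = 0.2        # Mellanox ConnectX-6 HDR, 200Gb/s per port

total_mem_gb = GPUS * MEM_PER_GPU_GB             # 320 GB total GPU memory
total_bw_tbps = GPUS * BW_PER_GPU_TBPS           # ~12.4 TB/s aggregate
net_bidir_tbps = NICS * NIC_RATE_TBPS * 2        # 3.6 Tb/s bi-directional

print(total_mem_gb, round(total_bw_tbps, 1), round(net_bidir_tbps, 1))
```

The same pattern reproduces the networking bullet: nine 200Gb/s interfaces give 1.8Tb/s in each direction, or 3.6Tb/s bi-directional.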

A single rack of five DGX A100 systems replaces a data center of AI training and inference infrastructure, with 1/20th the power consumed, 1/25th the space and 1/10th the cost.

