New NVIDIA Data Center Inference Platform to Fuel Next Wave of AI-Powered Services

TOKYO, Sept. 12, 2018 (GLOBE NEWSWIRE) -- GTC Japan—Fueling the growth of AI services worldwide, NVIDIA today launched an AI data center platform that delivers the industry’s most advanced inference acceleration for voice, video, image and recommendation services.

NVIDIA Tesla T4 GPU accelerator
Based on the new Turing GPU architecture, the NVIDIA Tesla T4 provides breakthrough inference performance with flexible, multi-precision capabilities.

The NVIDIA TensorRT™ Hyperscale Inference Platform features NVIDIA® Tesla® T4 GPUs based on the company’s breakthrough NVIDIA Turing™ architecture and a comprehensive set of new inference software.

Delivering the fastest performance with lower latency for end-to-end applications, the platform enables hyperscale data centers to offer new services, such as enhanced natural language interactions and direct answers to search queries rather than a list of possible results.

“Our customers are racing toward a future where every product and service will be touched and improved by AI,” said Ian Buck, vice president and general manager of Accelerated Business at NVIDIA. “The NVIDIA TensorRT Hyperscale Platform has been built to bring this to reality — faster and more efficiently than had been previously thought possible.”

Every day, massive data centers process billions of voice queries, translations, images, videos, recommendations and social media interactions. Each of these applications requires a different type of neural network residing on the server where the processing takes place.

To optimize the data center for maximum throughput and server utilization, the NVIDIA TensorRT Hyperscale Platform includes both real-time inference software and Tesla T4 GPUs, which process queries up to 40x faster than CPUs alone.

NVIDIA estimates that the AI inference industry is poised to grow into a $20 billion market within the next five years.

Industry’s Most Advanced AI Inference Platform
The NVIDIA TensorRT Hyperscale Platform includes a comprehensive set of hardware and software offerings optimized for powerful, highly efficient inference. Key elements include:

  • NVIDIA Tesla T4 GPU – Featuring 320 Turing Tensor Cores and 2,560 CUDA® cores, this new GPU provides breakthrough performance with flexible, multi-precision capabilities, from FP32 to FP16 to INT8, as well as INT4. Packaged in an energy-efficient, 75-watt, small PCIe form factor that easily fits into most servers, it offers 65 teraflops of peak performance for FP16, 130 teraflops for INT8 and 260 teraflops for INT4.
  • NVIDIA TensorRT 5 – An inference optimizer and runtime engine, NVIDIA TensorRT 5 supports Turing Tensor Cores and expands the set of neural network optimizations for multi-precision workloads.
  • NVIDIA TensorRT inference server – This containerized microservice software enables applications to use AI models in data center production. Freely available from the NVIDIA GPU Cloud container registry, it maximizes data center throughput and GPU utilization, supports all popular AI models and frameworks, and integrates with Kubernetes and Docker.
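The multi-precision capability described above rests on quantization: trading numeric range for throughput, which is why the T4's peak rate roughly doubles at each step from FP16 (65 teraflops) to INT8 (130) to INT4 (260). As an illustration only, and not NVIDIA's implementation, symmetric per-tensor INT8 quantization of the kind an inference optimizer applies to trained weights can be sketched in a few lines of Python:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127].

    Assumes x contains at least one nonzero value. Returns the INT8 codes
    and the scale needed to recover approximate float values.
    """
    scale = np.abs(x).max() / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the INT8 representation."""
    return q.astype(np.float32) * scale

weights = np.array([0.02, -1.5, 0.8, 3.0], dtype=np.float32)
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered value is within half a quantization step of the original.
assert np.all(np.abs(approx - weights) <= scale / 2 + 1e-6)
```

Because each INT8 value occupies a quarter of the storage of FP32 and integer multiply-accumulate units are cheaper than floating-point ones, hardware such as the Turing Tensor Cores can execute far more of these narrow operations per cycle; production systems additionally calibrate the scale on representative data to limit accuracy loss.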

Supported by Technology Leaders Worldwide
Support for NVIDIA’s new inference platform comes from leading consumer and business technology companies around the world.

“We are working hard at Microsoft to deliver the most innovative AI-powered services to our customers,” said Jordi Ribas, corporate vice president for Bing and AI Products at Microsoft. “Using NVIDIA GPUs in real-time inference workloads has improved Bing’s advanced search offerings, enabling us to reduce object detection latency for images. We look forward to working with NVIDIA’s next-generation inference hardware and software to expand the way people benefit from AI products and services.”

Chris Kleban, product manager at Google Cloud, said: “AI is becoming increasingly pervasive, and inference is a critical capability customers need to successfully deploy their AI models, so we’re excited to support NVIDIA’s Turing Tesla T4 GPUs on Google Cloud Platform soon.”

More information, including details on how to request early access to T4 GPUs on Google Cloud Platform, is available on the Google Cloud Platform website.

Additional companies, among them all major server manufacturers, also voiced support for the NVIDIA TensorRT Hyperscale Platform:

“Cisco’s UCS portfolio delivers policy-driven, GPU-accelerated systems and solutions to power every phase of the AI lifecycle. With the NVIDIA Tesla T4 GPU based on the NVIDIA Turing architecture, Cisco customers will have access to the most efficient accelerator for AI inference workloads — gaining insights faster and accelerating time to action.”
— Kaustubh Das, vice president of product management, Data Center Group, Cisco

“Dell EMC is focused on helping customers transform their IT while benefiting from advancements such as artificial intelligence. As the world’s leading provider of server systems, Dell EMC continues to enhance the PowerEdge server portfolio to help our customers ultimately achieve their goals. Our close collaboration with NVIDIA and historical adoption of the latest GPU accelerators available from their Tesla portfolio play a vital role in helping our customers stay ahead of the curve in AI training and inference.”
— Ravi Pendekanti, senior vice president of product management and marketing, Servers & Infrastructure Systems, Dell EMC

“Fujitsu plans to incorporate NVIDIA’s Tesla T4 GPUs into our global Fujitsu Server PRIMERGY systems lineup. Leveraging this latest, high-efficiency GPU accelerator from NVIDIA, we will provide our customers around the world with servers highly optimized for their growing AI needs.”
— Hideaki Maeda, vice president of the Products Division, Data Center Platform Business Unit, Fujitsu Ltd.

“At HPE, we are committed to driving intelligence at the edge for faster insight and improved experiences. With the NVIDIA Tesla T4 GPU, based on the NVIDIA Turing architecture, we are continuing to modernize and accelerate the data center to enable inference at the edge.”
— Bill Mannel, vice president and general manager, HPC and AI Group, Hewlett Packard Enterprise

“IBM Cognitive Systems is able to deliver 4x faster deep learning training times as a result of co-optimized hardware and software on a simplified AI platform with PowerAI, our deep learning training and inference software, and IBM Power Systems AC922 accelerated servers. We have a history of partnership and innovation with NVIDIA: together we co-developed the industry’s only CPU-to-GPU NVIDIA NVLink connection on IBM Power processors, and we are excited to explore the new NVIDIA T4 GPU accelerator to extend this state-of-the-art leadership to inference workloads.”
— Steve Sibley, vice president of Power Systems Offering Management, IBM

“We are excited to see NVIDIA bring GPU inference to Kubernetes with the NVIDIA TensorRT inference server, and look forward to integrating it with Kubeflow to provide users with a simple, portable and scalable way to deploy AI inference across diverse infrastructures.”
— David Aronchick, co-founder and product manager of Kubeflow

“Open source cross-framework inference is vital to production deployments of machine learning models. We are excited to see how the NVIDIA TensorRT inference server, which brings a powerful solution for both GPU and CPU inference serving at scale, enables faster deployment of AI applications and improves infrastructure utilization.”
