Neural Network Inference Engine IP Core Delivers >10 TeraOPS per Watt

VeriSilicon Expands Leadership in Deep Neural Network Processing with Breakthrough NN Compression Technology; VIP8000 NN Processor Scales from 0.5 to 72 TeraOPS


• Scalable from IoT edge always-on to server ASICs with performance from 0.5 to 72 TeraOPS
• Delivers more than 10 TeraOPS per Watt in 14nm Process Technology
• Fully programmable processor supports OpenCL, OpenVX, and a wide range of NN frameworks (TensorFlow, Caffe, AndroidNN, ONNX, NNEF, etc.)
• Native acceleration for int8, int16, fp16, and fp32 inference, supporting a broad spectrum of NN topologies at variable precisions
• Dramatic reductions in memory bandwidth requirements with the introduction of Hierarchical Compression, Software Tiling/Caching, Pruning, Fetch Skipping, and Layer Merging technology
• 10 new VIP8000 IP licensees added in 2017
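The variable-precision inference noted above (int8 through fp32) follows standard NN quantization practice. As a generic illustration only — this is not VeriSilicon's implementation — symmetric per-tensor int8 quantization of fp32 weights can be sketched as:

```python
# Generic symmetric int8 quantization sketch -- illustrative only,
# not VeriSilicon's implementation.

def quantize_int8(weights):
    """Map fp32 weights to int8 codes plus a per-tensor scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate fp32 values from int8 codes."""
    return [v * scale for v in q]

weights = [0.31, -1.27, 0.05, 0.94, -0.66]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Round-to-nearest bounds the error by half a quantization step.
assert max_err <= scale / 2 + 1e-9
```

Running a model at int8 instead of fp32 cuts weight storage and memory traffic by 4x, which is why variable-precision support matters for bandwidth-constrained edge devices.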

Nuremberg, Germany – February 27, 2018 – VeriSilicon Holdings Co., Ltd. (VeriSilicon) today announced that significant milestones have been achieved for its versatile and highly scalable VIP8000 family of neural network inference engines.

“The biggest thing to happen in the computer industry since the PC is AI and machine learning; it will truly revolutionize, empower, and improve our lives. It can be done in giant machines from IBM and Google, and in tiny chips made with VeriSilicon’s neural network processors,” said Dr. Jon Peddie, president of Jon Peddie Research. “By 2020 we will wonder how we ever lived without our AI assistants,” he added.

Machine learning and neural network processing represent the next major market opportunity for embedded processors. The International Data Corporation (IDC) forecasts spending on AI and machine learning to grow from $8B in 2016 to $47B by 2020. With the release of the latest generation of its NN inference IP, VeriSilicon establishes itself as a significant driver of growth in this category. The industry-leading top-end performance of the Vivante VIP8000 processor continues to expand the application space from always-on battery-powered IoT clients to AI server farm applications.

VeriSilicon’s latest updates to VIP8000 are specifically designed to accelerate neural network model inference with greater efficiency and speed while slashing memory bandwidth requirements compared to alternative DSP, GPU, and CPU hybrid processor approaches. The fully programmable VIP8000 processors combine the performance and memory efficiency of dedicated fixed-function logic with the customizability and future-proofing of full programmability in OpenCL, OpenVX, and a wide range of NN frameworks (TensorFlow, Caffe, AndroidNN, ONNX, NNEF, etc.). The VIP8000 NN architecture can handle a wide range of AI workloads while optimizing memory management of the data that flows through the processor.

Not only does VeriSilicon’s NN engine outperform all traditional DSP, GPU, and CPU hybrid systems, it is industry-proven and has been shipping to licensees as a ready IP core for more than 18 months. In 2017 alone, 10 major ASIC developers selected VIP after rigorous benchmarking of both competing IP solutions and SoCs. VeriSilicon has been successful in licensing to a wide range of end customers, with applications ranging from ADAS and autonomous vehicles, security surveillance, home entertainment, and imaging to dedicated ASICs for servers.

The VIP8000 NN processor achieves the industry’s highest performance and energy efficiency levels and is the most scalable platform on the market. This NN engine can range from 0.5 to 72 TeraOPS, with power efficiency of more than 10 TeraOPS per Watt, based on a recent 14nm implementation of the IP. The introduction of new Hierarchical Compression, Software Tiling/Caching, Pruning, Fetch Skipping, and patent-pending Layer Merging technology further reduces memory bandwidth requirements for VIP8000 relative to other processor architectures.
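Taken together, the quoted figures imply a rough peak-power budget: at 10 TeraOPS per Watt, even the largest 72-TeraOPS configuration would draw on the order of 7 W at peak. This is simple division of the press-release numbers, not a measured figure:

```python
# Back-of-the-envelope power from the quoted figures; not measured data.
def peak_power_watts(tera_ops, tera_ops_per_watt):
    """Peak compute power implied by throughput and efficiency figures."""
    return tera_ops / tera_ops_per_watt

assert peak_power_watts(72, 10) == 7.2    # largest configuration
assert peak_power_watts(0.5, 10) == 0.05  # smallest, always-on IoT end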

“AI is everywhere. With patent-pending Neural Network compression technology, VIP8000 family efficiently delivers the performance that accelerates the adoption of AI in embedded products. We are deeply engaged with leading customers ranging from deeply embedded to edge server products.” said Weijin Dai, Chief Strategy Officer, Executive Vice President and GM of VeriSilicon’s Intellectual Property Division. “Applications and algorithms to address these challenges are rapidly advancing and we are combining AI technology with VeriSilicon’s extensive IP portfolio to deliver breakthrough solutions to our customers. AI needs to deliver value efficiently.” 

VeriSilicon supports a wide range of NN frameworks and networks (TensorFlow, Caffe, AndroidNN, Amazon Machine Learning, ONNX, NNEF, AlexNet, VGG16, GoogLeNet, Yolo, Faster R-CNN, MobileNet, SqueezeNet, ResNet, RNN, LSTM, etc.) and also provides numerous software and hardware solutions to enable developers to create high-performance Neural Network models and machine-learning-based applications.

VeriSilicon at Embedded World 2018

Learn more about the VIP8000 NN and related VeriSilicon IP, NN ecosystem solution development partners, custom silicon, and advanced packaging (SiP) turnkey services at Embedded World 2018 in Nuremberg, Germany, February 27 – March 1, Hall 4A, Booth 4A-360.

About VeriSilicon

VeriSilicon is a Silicon Platform as a Service (SiPaaS®) company that provides industry-leading, comprehensive System-on-a-Chip (SoC) and System-in-a-Package (SiP) solutions for a wide range of end markets including mobile internet devices, datacenters, the Internet of Things (IoT), automotive, industrial, and medical electronics. Our machine learning and artificial intelligence technologies are well positioned to address the movement to “intelligent” devices. SiPaaS provides our customers a substantial head start in the semiconductor design and development process and allows the customers to focus their efforts on core competency with differentiating features. Our end-to-end semiconductor turnkey services can take a design from concept to a completed, tested, and packaged semiconductor chip in record time. The breadth and flexibility of our SiPaaS solutions make them a performance-effective and cost-efficient choice for a variety of customers. 

For more details, please contact: Email Contact

