IMG SERIES4 NNA
BEYOND NEXT-GEN AI
The edge platform of choice for assisted driving and full self-driving cars.
The Automotive Revolution
We are on the cusp of an automotive revolution. Demand for advanced driver assistance systems (ADAS) is set to triple by 2027, yet the industry is already looking to move beyond this to full self-driving cars.
The technology currently deployed for testing and developing autonomous driving is physically large and very power-hungry, yet still underpowered in terms of performance. For large-scale commercial deployment, what’s needed is an “edge” solution that offers high performance, low power consumption, low latency, and functional safety.
Enter the IMG Series4 NNA
IMG Series4 is a ground-breaking neural network accelerator (NNA) for the automotive industry, enabling ADAS and autonomous driving. With its incredibly high performance at ultra-low latency, architectural efficiency, and safety features, it has what is needed for large-scale commercial implementation.
Imagination is working with leading players and innovators in the automotive industry, and the multi-core IP has already been licensed. It is available to the market from December 2020, so contact us now if you don’t want to get left behind.
Key to the incredible performance of the IMG Series4 is its multi-core capability. Available in configurations of 2, 4, 6, or 8 cores per cluster, multi-core allows for flexible allocation and synchronisation of workloads across the cores. In combination with Imagination’s software, which provides fine-grained control, multiple workloads can now be executed across any number of cores in the cluster.
Low latency enhances response time, and, on the road, this can be critical to saving lives. By combining the cores in an 8-core cluster, they can all be dedicated to executing a single task, reducing latency, and therefore response time, by a factor of eight.
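The latency claim above can be captured in a back-of-the-envelope model. This is an idealised sketch, not Imagination’s scheduling behaviour: it assumes the workload parallelises cleanly across the cluster and ignores synchronisation overhead, so the 40 ms figure is purely illustrative.

```python
# Idealised latency model: if one inference takes T ms on a single core
# and the work splits evenly across a cluster, an N-core cluster
# finishes in T/N. Assumes perfect balance, no synchronisation cost.
def cluster_latency_ms(single_core_latency_ms: float, cores: int) -> float:
    return single_core_latency_ms / cores

print(cluster_latency_ms(40.0, 1))  # 40.0 ms on one core
print(cluster_latency_ms(40.0, 8))  # 5.0 ms across an 8-core cluster
```

Under these assumptions, dedicating all eight cores to one task cuts the response time by the stated factor of eight.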
Tera operations per second (TOPS), sustained at high levels of utilisation, is a key metric for determining neural network performance. Thanks to its multi-core scalability, IMG Series4 outperforms other solutions on the market by an order of magnitude, with industry-leading performance metrics.
A single IMG Series4 core can deliver up to 12.5 TOPS at 1.2GHz in 7nm and can be arranged in a cluster of 2, 4, 6, or 8 cores, with multiple clusters laid down on a single SoC. An 8-core cluster delivers 100 TOPS, so six such clusters on a single SoC would deliver 600 TOPS.
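The arithmetic behind these figures, using the 12.5 TOPS per-core number quoted above (illustrative peak throughput only; sustained performance depends on utilisation):

```python
# Worked arithmetic from the quoted figures: 12.5 TOPS per core,
# clusters of 2/4/6/8 cores, multiple clusters per SoC.
TOPS_PER_CORE = 12.5
CLUSTER_SIZES = [2, 4, 6, 8]

for n in CLUSTER_SIZES:
    print(f"{n}-core cluster: {TOPS_PER_CORE * n:.0f} TOPS")
# 2-core cluster: 25 TOPS ... 8-core cluster: 100 TOPS

clusters_on_soc = 6
soc_tops = TOPS_PER_CORE * 8 * clusters_on_soc
print(f"6 x 8-core clusters on one SoC: {soc_tops:.0f} TOPS")  # 600 TOPS
```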
Depending on configuration, the IMG Series4 is over 100x faster than using a GPU for AI acceleration and 1000x faster than using a CPU.
Power and Bandwidth Efficiency
Power efficiency is at the heart of all Imagination designs. The low-silicon-area IMG Series4 delivers its incredible performance of 12.5 TOPS per core at less than a watt – by far the best in the industry.
IMG Series4 also offers phenomenal bandwidth efficiency. Processing tensors – the multi-dimensional arrays of data a neural network operates on – normally requires repeated trips out to memory and back across the bus. This takes time, consumes memory bandwidth, and burns power.
New for IMG Series4 is Imagination Tensor Tiling (ITT), a patent-pending technology that solves this problem. It efficiently packages tensors into blocks, which are then processed in local on-chip memory. This greatly reduces data transfers between layers of the network, cutting bandwidth by up to an incredible 90%.
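The principle can be illustrated in software. The sketch below is not Imagination’s ITT algorithm (which is patent-pending and proprietary); it is a toy model in which two elementwise layers are fused and the feature map is processed tile by tile, so the intermediate tensor never travels to external memory. The traffic counters simply tally bytes that would cross the bus in each scheme.

```python
import numpy as np

def run_unfused(x):
    # Layer 1 writes its full output to external memory; layer 2 reads
    # it back. Traffic: read x, write + read intermediate, write output.
    inter = np.maximum(x, 0)                 # layer 1 (ReLU)
    out = inter * 0.5                        # layer 2 (scale)
    traffic = x.nbytes + 2 * inter.nbytes + out.nbytes
    return out, traffic

def run_tiled(x, tile=64):
    # Fused, tiled version: the intermediate stays in (modelled)
    # on-chip memory; only input tiles and final tiles cross the bus.
    out = np.empty_like(x)
    traffic = 0
    for i in range(0, x.shape[0], tile):
        block = x[i:i + tile]                # tile read from DRAM
        inter = np.maximum(block, 0)         # intermediate stays on-chip
        out[i:i + tile] = inter * 0.5        # only final tile written back
        traffic += block.nbytes + out[i:i + tile].nbytes
    return out, traffic

x = np.random.randn(1024, 1024).astype(np.float32)
y1, t1 = run_unfused(x)
y2, t2 = run_tiled(x)
print(t2 / t1)  # 0.5 -- half the external-memory traffic in this toy model
```

With deeper fused pipelines the saving compounds, which is how a real tiling scheme can approach the up-to-90% reduction quoted above.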
Leading Safety Mechanisms
IMG Series4 includes IP-level safety features and is built using a design process that helps customers achieve certification to ISO 26262, the industry safety standard that addresses risk in automotive electronics. Via hardware safety mechanisms that protect the compiled network, its execution, and the data-processing pipeline, IMG Series4 enables functionally safe neural network inferencing without impacting performance.
IMG Series4 NNA Core Families
Imagination’s Three-Pronged Automotive Solution
Automotive requires heterogeneous compute solutions: CPU, GPU, NNA and networking. The IMG Series4 NNA thus forms part of Imagination’s three-pronged approach to automotive solutions.
Low-power multi-core GPUs with unprecedented performance and the industry’s first to be built using ISO 26262 certifiable processes.
Performance-ceiling-busting low-power, low-area neural network accelerators that bring the automotive industry the performance it needs to enable true self-driving.
IMG Series4 NNA
Download the IMG Series4 NNA overview.
IMG Series4 is a ground-breaking neural network accelerator for the automotive industry, enabling ADAS and autonomous driving. Find out why IMG Series4 fulfils the requirements for large-scale commercial implementation and why it is becoming the industry-standard platform of choice for the deployment of advanced driver assistance and self-driving cars.
Download Product Overview
IMG Series4 NNA
Join this webinar to learn about the latest IMG Series4 NNA (neural network accelerator) range of multi-core IP for Artificial Intelligence (AI).
IMG Series4 is a range of multi-core NNAs with a new architecture and advanced features, ideal for automotive needs in autonomous vehicles and ADAS.
Imagination’s neural network accelerator and Visidon’s denoising algorithm prove to be perfect partners
This blog post is the result of a collaboration between Visidon, headquartered in Finland, and Imagination, based in the UK. Visidon is recognised as an expert in algorithms for camera image enhancement and analysis, and Imagination has a series of world-beating neural network accelerators (NNAs) with performance of up to 100 TOPS per cluster. The problem tackled in this blog post is denoising images from conventional colour cameras. The solution is in two parts:
1. Algorithms that remove the noise without damaging image detail.
2. A high-performance convolution engine capable of running a trained neural network that takes a colour image as input and outputs a denoised colour image.
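Visidon’s trained network is proprietary, so as a stand-in the sketch below uses a simple 3×3 box filter to show the same input/output contract the post describes: a colour image goes in, a denoised colour image of the same shape comes out. The shapes and the `box_denoise` name are illustrative only.

```python
import numpy as np

def box_denoise(img: np.ndarray) -> np.ndarray:
    """Average each pixel with its 8 neighbours, per colour channel.

    A crude stand-in for a trained denoising network: same contract
    (H x W x 3 in, H x W x 3 out), far weaker detail preservation.
    """
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

noisy = np.random.rand(64, 64, 3).astype(np.float32)
clean = box_denoise(noisy)
print(clean.shape)  # (64, 64, 3) -- same shape in and out
```

A learned network replaces the fixed averaging kernel with trained convolutions, which is what lets it remove noise without blurring away image detail.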
Imagination has been supplying its GPU IP into the automotive sector for over 15 years and holds the number-one spot in market share for GPUs in application processors. As such, it has silicon-proven experience helping automotive OEMs build SoCs that conform to the requirements of ASIL B or higher, while also providing the long-term support the industry demands. This last point is critical. We continue to support solutions that were introduced to the market 10 years ago, as these are still in cars on the road today, and customers can rest assured we will be there long into the future to support our current IP as well.
This article presents a flexible and general optimisation scheme for converting floating-point networks using Variable Bit Depth (VBD) compression as a step towards efficient, low-power inference.
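To give a feel for the idea, here is a hedged sketch of variable-bit-depth quantisation. It is not the article’s exact VBD scheme: each tensor is simply quantised symmetrically at its own bit depth, illustrating how less sensitive layers can be stored at fewer bits in exchange for a larger reconstruction error.

```python
import numpy as np

def quantise(w: np.ndarray, bits: int):
    """Symmetric uniform quantisation of a tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for 8 bits
    scale = max(float(np.abs(w).max()), 1e-8) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q * scale

w = np.random.randn(256).astype(np.float32)
errs = {}
for bits in (8, 6, 4):
    q, s = quantise(w, bits)
    errs[bits] = float(np.abs(dequantise(q, s) - w).max())
    print(f"{bits}-bit: max abs error {errs[bits]:.4f}")
```

The optimisation problem the article addresses is choosing these per-tensor bit depths so that overall accuracy is preserved while inference cost drops.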
Sharpen Your Competitive Edge
Get in touch and sharpen your competitive edge with IMG Series4 NNA.