IMG SERIES4

Beyond next-gen AI

The edge platform of choice for assisted driving and full self-driving cars

The automotive revolution

The IMG Series4 is a revolutionary neural network accelerator (NNA) for the automotive industry, enabling ADAS and autonomous driving. Its combination of incredibly high performance at ultra-low latency, architectural efficiency, and built-in safety features makes it ready for large-scale commercial deployment.

Multi-core scalability and flexibility

Available in configurations of 2, 4, 6, or 8 cores per cluster, the IMG Series4 enables flexible workload allocation and synchronisation across the cores. Combined with Imagination's software, which provides fine-grained control, multiple workloads can be executed across any number of cores in the cluster.
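
As a purely illustrative sketch, the two allocation modes this flexibility enables might look like the Python below. The class and workload names are hypothetical and do not represent Imagination's actual software interface.

# Hypothetical illustration of multi-core workload allocation on an
# 8-core cluster. The class and names are invented for this sketch
# and do not represent Imagination's driver or compiler API.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cores: list[int]   # cluster core indices this workload may run on

# Mode 1: independent networks pinned to disjoint core groups
# (maximises throughput across several sensor streams).
throughput_plan = [
    Workload("front_camera_detector", cores=[0, 1]),
    Workload("surround_view_segmenter", cores=[2, 3, 4, 5]),
    Workload("driver_monitoring", cores=[6, 7]),
]

# Mode 2: a single safety-critical network split across all eight
# cores (minimises latency for one inference).
latency_plan = [
    Workload("emergency_braking_net", cores=list(range(8))),
]

for plan in (throughput_plan, latency_plan):
    for w in plan:
        print(f"{w.name}: cores {w.cores}")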

Ultra-low latency

Low latency improves response time, which can be critical for saving lives on the road. When the cores in an 8-core cluster are combined, they can all be dedicated to executing a single task, reducing latency and thus response time by a factor of eight.

Leading safety mechanisms

The IMG Series4 incorporates IP-level safety features and is built using a design process that assists customers in achieving ISO 26262 certification, the industry safety standard that addresses risk in automotive electronics. IMG Series4 enables functionally safe neural network inference without compromising performance.

Incredible performance

A key metric for neural network performance is tera operations per second (TOPS), sustained at high levels of utilisation. Thanks to its multi-core scalability, IMG Series4 delivers industry-leading figures, outperforming other solutions on the market by an order of magnitude.

A single IMG Series4 core in 7nm can deliver up to 12.5 TOPS at 1.2GHz. Cores can be arranged in clusters of 2, 4, 6, or 8, and multiple clusters can be laid down on a single SoC. An 8-core cluster therefore delivers 100 TOPS, and six such clusters on one SoC would deliver 600 TOPS.
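
The arithmetic behind these figures can be checked directly; the short Python snippet below simply reproduces the quoted numbers.

# Reproducing the headline TOPS figures quoted above.
tops_per_core = 12.5          # single Series4 core in 7nm at 1.2GHz
cores_per_cluster = 8         # largest cluster configuration
clusters_per_soc = 6          # example multi-cluster SoC layout

cluster_tops = tops_per_core * cores_per_cluster
soc_tops = cluster_tops * clusters_per_soc

print(cluster_tops)  # 100.0 TOPS per 8-core cluster
print(soc_tops)      # 600.0 TOPS across six clusters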

Depending on configuration, the IMG Series4 is more than 100 times faster than a GPU for AI acceleration and 1000 times faster than a CPU.

Imagination and Visidon have partnered to power the transition to deep-learning-based super resolution for embedded applications across mobile, DTV, and automotive markets.

“Power efficiency and high performance are two of the main pillars that Visidon builds its solutions on. We were surprised to see our deep-learning networks run so easily on Imagination’s IMG Series4 NNA and we were impressed with the overall power efficiency maintained while processing high compute workloads. We look forward to our partnership evolving as we unlock the future of AI-powered super resolution technology together.”

Matt Niskanen

CTO

Power and bandwidth efficiency

All Imagination designs are focused on power efficiency. Despite its low silicon area, the IMG Series4 delivers 12.5 TOPS per core at less than one watt, an industry-leading figure.

IMG Series4 also provides exceptional bandwidth efficiency. Tensor processing normally requires repeated trips out to external memory and back, which consumes both bandwidth and power.

Imagination Tensor Tiling (ITT), a patent-pending technology new to IMG Series4, solves this problem. Tensors are packaged into blocks that are processed in local on-chip memory, minimising data transfers between network layers and reducing bandwidth by up to 90%.
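
The underlying idea can be illustrated with a simple sketch. The Python below is a conceptual analogy only, not the ITT implementation: a feature map is processed in tiles small enough to stay in a local buffer across several fused layers, so intermediate data never travels to external memory. The buffer size, tile shape, and stand-in layers are all assumptions made for this example.

import numpy as np

# Conceptual sketch of tensor tiling: process a feature map in tiles
# small enough to remain in on-chip memory across several layers, so
# intermediate results are never written out to external DRAM.

ON_CHIP_BYTES = 128 * 1024          # illustrative local buffer size
feature_map = np.random.rand(256, 256, 16).astype(np.float32)

def layer_a(x):                      # stand-in for a fused network layer
    return np.maximum(x, 0.0)        # e.g. ReLU

def layer_b(x):                      # stand-in for a second fused layer
    return x * 0.5 + 0.1             # e.g. scale and bias

tile_rows = 8                        # chosen so one tile fits on chip
assert tile_rows * 256 * 16 * 4 <= ON_CHIP_BYTES

output = np.empty_like(feature_map)
for r in range(0, feature_map.shape[0], tile_rows):
    tile = feature_map[r:r + tile_rows]      # one DRAM read per tile
    tile = layer_b(layer_a(tile))            # intermediates stay "on chip"
    output[r:r + tile_rows] = tile           # one DRAM write per tile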

Download Tensor Tiling white paper

Advanced Signal Processing and FFT

We present novel methods for performing signal processing operations such as frequency-domain transforms, spectrogram generation, and mel-frequency cepstral coefficient (MFCC) extraction, using the massively parallel operations enabled by IMG Series4 neural network accelerators (NNAs).

Neural network accelerators (NNAs) allow audio and signal processing tasks that would otherwise be handled elsewhere in the system to run on the same device as the neural network. This reduces the overall bandwidth, power, and latency of workloads such as image and audio processing.
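
One way to see how such tasks map onto an NNA is to note that a discrete Fourier transform can be written as two real-valued matrix multiplications, exactly the kind of dense, parallel operation an NNA accelerates. The NumPy sketch below illustrates this standard formulation; it is not code from the white paper.

import numpy as np

# Sketch: a DFT expressed as two real-valued matrix multiplications,
# the kind of dense, parallel operation an NNA is built for.

N = 256                                        # frame length
n = np.arange(N)
k = n.reshape(-1, 1)
dft_real = np.cos(-2 * np.pi * k * n / N)      # real part of DFT matrix
dft_imag = np.sin(-2 * np.pi * k * n / N)      # imaginary part

frame = np.random.randn(N).astype(np.float32)  # one windowed audio frame

re = dft_real @ frame                          # two matmuls replace an FFT
im = dft_imag @ frame
power = re**2 + im**2                          # one spectrogram column

# Sanity check against NumPy's reference FFT
assert np.allclose(power, np.abs(np.fft.fft(frame))**2)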

Download Advanced Signal Processing white paper

Imagination’s automotive solution

Automotive requires heterogeneous compute: CPU, GPU, NNA, and networking. The IMG Series4 NNA therefore forms one part of Imagination's wider automotive solution.

GPU

Low-power multi-core GPUs with unprecedented performance and the industry’s first to be built using ISO 26262 certifiable processes.

NNA

Performance-ceiling-busting, low-power, low-area neural network accelerators that bring the automotive industry the performance it needs to enable true self-driving.

IMG Series4 core families

The next-generation neural network accelerator (NNA) IMG Series4 is ideal for advanced driver-assistance systems (ADAS) and autonomous vehicles like robotaxis. The cores include advanced technical features such as Imagination Tensor Tiling, advanced safety mechanisms, and intelligent workload management.

Webinar

Watch this webinar to learn about the latest IMG Series4 NNA range of multi-core IPs for Artificial Intelligence (AI).

What is the IMG Series4 NNA?

IMG Series4 is a ground-breaking neural network accelerator for the automotive industry, enabling ADAS and autonomous driving. Find out why IMG Series4 meets the requirements for large-scale commercial implementation and why it is becoming the industry's platform of choice for deploying advanced driver assistance and self-driving cars.

Download overview
