White Paper

TENSOR TILING

Our white paper, Imagination Tensor Tiling, explains this critical bandwidth-saving technology in depth and provides insight into the real-world benefits it delivers in our IMG Series4 NNA on key neural network models.

What is Tensor Tiling?

In an electric car, every joule counts, and one of the major contributors to power consumption is accessing data from external memory. That’s why the designers of the AI chips that will enable the advanced driver assistance systems (ADAS) and autonomous driving platforms of the future care about how much bandwidth a system consumes.

In these systems, the processing for the various ADAS and autonomous functions we will all come to rely on, such as speech recognition and multi-camera/sensor object detection and tracking, will require high-performance neural network inferencing, which typically places huge pressure on memory bandwidth inside systems-on-chip (SoCs). As bandwidth inside an SoC is precious, every bit saved reduces power consumption and helps extend the range of the car.

This is why Imagination Tensor Tiling technology is critical. Tensors are large, multi-dimensional arrays of data elements that constitute the key structures used in neural networks. Traditionally, these require frequent, repeated movement to and from main memory, which consumes significant amounts of bandwidth and power.

Imagination Tensor Tiling technology intelligently “tiles” tensors into groups, enabling them to be processed much more efficiently. In combination with the on-chip memory in the IMG Series4 neural network accelerator (NNA), this provides a significant reduction in power consumption, as well as silicon area savings that reduce costs.
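
To make the general idea concrete, the following is a minimal, hypothetical Python/NumPy sketch, not Imagination’s implementation and not a Series4 API. It applies two stand-in layers to a feature map and compares whole-tensor processing, where the intermediate tensor is spilled to external memory, with tile-by-tile processing, where each tile’s intermediate result stays in a small “on-chip” buffer. The tile size, layer functions, and traffic counter are all illustrative assumptions.

```python
# Illustrative sketch only (assumed tile size, stand-in layers, simple traffic counter).
import numpy as np

TILE_H, TILE_W = 32, 32          # hypothetical tile size chosen to fit on-chip memory

def layer_a(x):
    return np.maximum(x, 0.0)    # stand-in for a real NN layer (e.g. ReLU)

def layer_b(x):
    return x * 0.5 + 1.0         # stand-in for a second layer (e.g. scale/bias)

def run_untiled(feature_map, dram_traffic):
    # The whole intermediate tensor is written to external memory and read back.
    intermediate = layer_a(feature_map)
    dram_traffic["bytes"] += intermediate.nbytes * 2   # write + read of intermediate
    return layer_b(intermediate)

def run_tiled(feature_map, dram_traffic):
    # Each tile's intermediate stays on chip: only the input is read and only the
    # final output is written, so no intermediate bytes are added to dram_traffic.
    h, w = feature_map.shape
    out = np.empty_like(feature_map)
    for y in range(0, h, TILE_H):
        for x in range(0, w, TILE_W):
            tile = feature_map[y:y+TILE_H, x:x+TILE_W]   # read input tile
            on_chip = layer_a(tile)                      # intermediate kept on chip
            out[y:y+TILE_H, x:x+TILE_W] = layer_b(on_chip)
    return out

if __name__ == "__main__":
    fmap = np.random.rand(256, 256).astype(np.float32)
    traffic = {"bytes": 0}
    ref = run_untiled(fmap, traffic)
    print("untiled intermediate DRAM traffic:", traffic["bytes"], "bytes")
    traffic = {"bytes": 0}
    tiled = run_tiled(fmap, traffic)
    print("tiled intermediate DRAM traffic:  ", traffic["bytes"], "bytes")
    assert np.allclose(ref, tiled)   # both paths compute the same result
```

In this toy example the tiled path eliminates the intermediate-tensor traffic entirely; in a real accelerator the saving depends on how many layers can be fused per tile and how much on-chip memory is available.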

Download White Paper
