AI & COMPUTE

IP THAT POWERS INTELLIGENCE

Intelligence Embedded in Every Device

Our Edge AI & Compute IP delivers scalable on‑device intelligence, from wearables to cloud systems. With GPU-based parallelism and dedicated neural cores, it handles inference, vision, and compute tasks efficiently, all within a compact, low-power platform.

ADDING FLEXIBILITY INTO INTELLIGENT SYSTEMS

AI is transforming everything, from smartphones and smart TVs to industrial automation and robotics. Our GPU IP provides the performance, flexibility and power efficiency needed to accelerate AI workloads on even the most power-constrained edge devices.

Built on a highly parallel architecture, our GPUs can handle AI inference and graphics tasks simultaneously, making them ideal for multifunctional SoCs. With an open, developer-friendly compute software stack, Imagination GPUs offer seamless programmability and future-proof performance for next-generation AI applications.

Explore our GPU IP tailored for AI

OUR LATEST AI INNOVATIONS

This diagram shows how data moves through the shorter pipeline of our ALUs and how it avoids unnecessary reads and writes to the register store.

BURST PROCESSORS

The E-Series’ new Burst Processors technology works with Imagination’s graphics and AI pipelines to lower average GPU power consumption by 35% compared to D-Series. It reduces data movement within the GPU, which in turn lowers power consumption and boosts GPU utilisation by freeing up memory bandwidth to keep the compute units fed.

NEURAL CORES

E-Series introduces dense, deeply integrated acceleration for power-efficient AI operations – up to 4x more than Imagination D-Series GPU IP. This transforms the classic Imagination GPU Core into a highly capable Neural Core that scales up to 200 TOPS (INT8/FP8), redefining edge intelligence.

This diagram shows the range of AI and Compute software tools available and how they can be used together to boost GPU utilisation.

OpenCL COMPUTE LIBRARIES

Imagination’s latest GPUs ship with a set of OpenCL compute libraries (imgBLAS, imgNN, imgFFT) that help software developers achieve up to 80% GPU utilisation for common compute workloads. They work alongside reference toolkits that help developers port their code to Imagination hardware via oneAPI or TVM TensorGraph.
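As a rough illustration of the layer these libraries and toolkits sit on, here is a minimal sketch of a standard OpenCL host program dispatching a compute kernel to a GPU. The kernel, buffer sizes and OpenCL 2.0-style queue creation are illustrative assumptions rather than Imagination-specific code, and error handling is omitted for brevity.

    /* Minimal OpenCL host sketch: dispatch a simple compute kernel to the GPU.
     * Vendor libraries such as imgBLAS/imgNN/imgFFT layer on top of the same
     * OpenCL runtime; the kernel and sizes here are illustrative only. */
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *kSource =
        "__kernel void scale(__global float *x, const float a) {\n"
        "    size_t i = get_global_id(0);\n"
        "    x[i] = a * x[i];\n"
        "}\n";

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueueWithProperties(ctx, device, NULL, NULL);

        /* Build the kernel from source at runtime. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel kernel = clCreateKernel(prog, "scale", NULL);

        /* Copy host data into a device buffer. */
        float data[1024];
        for (int i = 0; i < 1024; ++i) data[i] = (float)i;
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    sizeof(data), data, NULL);

        /* Launch one work-item per element. */
        float a = 2.0f;
        size_t global = 1024;
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(kernel, 1, sizeof(float), &a);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);

        /* Read results back and clean up. */
        clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);
        printf("x[3] = %f\n", data[3]); /* expect 6.0 */

        clReleaseMemObject(buf);
        clReleaseKernel(kernel);
        clReleaseProgram(prog);
        clReleaseCommandQueue(queue);
        clReleaseContext(ctx);
        return 0;
    }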

EXPANDED NUMBER FORMAT SUPPORT

Imagination GPUs support industry-standard number formats (FP32 / FP16 / BF16 / INT8 / FP8 / MXFP8 / FP4 / MXFP4), exposed through Vulkan and OpenCL extensions. Data compression can also be used to move data around the chip efficiently.
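As a small illustration, the kernel below uses the standard cl_khr_fp16 OpenCL extension to compute in half precision. Which formats are available on a given device, and how the lower-precision formats (FP8, MXFP4 and so on) are exposed, depends on the extensions the device reports, so treat this as a generic sketch rather than a statement of Imagination’s API.

    /* OpenCL C kernel sketch: reduced-precision arithmetic via the standard
     * cl_khr_fp16 extension. The host should first check that the device
     * reports cl_khr_fp16 in CL_DEVICE_EXTENSIONS; lower-precision formats
     * (FP8, MXFP4, ...) go through their own extensions and are not shown. */
    #pragma OPENCL EXTENSION cl_khr_fp16 : enable

    __kernel void axpy_fp16(__global const half *x,
                            __global half       *y,
                            const float          alpha)
    {
        size_t i = get_global_id(0);
        /* Computing in FP16 halves the bandwidth and register footprint of
         * the same operation performed in FP32. */
        y[i] = (half)alpha * x[i] + y[i];
    }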

WHY IMAGINATION FOR AI?

FUTURE-PROOF PROGRAMMABILITY

As AI models grow in complexity, programmability becomes essential. Our GPU IP offers a flexible, future-ready platform backed by a robust software ecosystem, enabling developers to deploy next-generation AI algorithms with confidence.

Our comprehensive software stack supports easy code migration, performance tuning, and deployment on Imagination-based hardware. With highly optimised OpenCL compute libraries, advanced compilers, and intuitive performance analysis tools, developers can maximise efficiency and adapt quickly as AI workloads evolve.
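Imagination’s own analysis tools are not shown here; as a portable baseline for the kind of measurement they automate, the helper below times a single kernel launch with standard OpenCL event profiling. The function name and parameters are illustrative assumptions.

    #include <CL/cl.h>

    /* Times one launch of an already-built kernel using OpenCL event
     * profiling; a portable baseline alongside vendor analysis tools. */
    static double time_kernel_us(cl_context ctx, cl_device_id dev,
                                 cl_kernel kernel, size_t global)
    {
        /* Profiling must be requested when the queue is created. */
        cl_queue_properties props[] = { CL_QUEUE_PROPERTIES, CL_QUEUE_PROFILING_ENABLE, 0 };
        cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, props, NULL);

        cl_event ev;
        clEnqueueNDRangeKernel(q, kernel, 1, NULL, &global, NULL, 0, NULL, &ev);
        clWaitForEvents(1, &ev);

        /* Device timestamps are reported in nanoseconds. */
        cl_ulong t0, t1;
        clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START, sizeof(t0), &t0, NULL);
        clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END, sizeof(t1), &t1, NULL);

        clReleaseEvent(ev);
        clReleaseCommandQueue(q);
        return (t1 - t0) / 1000.0;
    }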

FLEXIBLE GRAPHICS & COMPUTE PROCESSING

As the most advanced parallel processing architecture for power-constrained devices, Imagination GPUs can be deployed to accelerate either graphics or AI tasks – or both at the same time! Technologies that help make Imagination GPUs a valuable silicon investment include:

  • Asynchronous compute (see the sketch after this list)
  • Hardware-based virtualisation with QoS & prioritisation
  • Primary-Primary multi-core functionality
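
To make the submission side of asynchronous compute concrete, here is a hedged sketch: two independent OpenCL command queues on one device, so the runtime is free to overlap the workloads. In practice graphics work would arrive through Vulkan or OpenGL ES rather than OpenCL, and whether the kernels truly overlap depends on the driver and hardware; the function and kernel names are placeholders.

    #include <CL/cl.h>

    /* Sketch: submit two independent compute workloads on separate in-order
     * queues so the runtime may schedule them concurrently. ctx, dev and the
     * two kernels are assumed to be created elsewhere. */
    static void submit_concurrently(cl_context ctx, cl_device_id dev,
                                    cl_kernel vision_kernel, cl_kernel ai_kernel,
                                    size_t global)
    {
        cl_command_queue q0 = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);
        cl_command_queue q1 = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

        /* No dependency between the two launches, so the driver is free to
         * overlap them where the hardware allows it. */
        clEnqueueNDRangeKernel(q0, vision_kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueNDRangeKernel(q1, ai_kernel, 1, NULL, &global, NULL, 0, NULL, NULL);

        clFlush(q0);   /* start both streams of work ... */
        clFlush(q1);
        clFinish(q0);  /* ... and wait for both to complete. */
        clFinish(q1);

        clReleaseCommandQueue(q0);
        clReleaseCommandQueue(q1);
    }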

EFFICIENT BY DESIGN

The PowerVR architecture is the foundation of all Imagination GPUs and is the gold standard of power-efficient graphics and compute processing for edge devices, from wearables to laptops.

Its tile-based approach to computing keeps as much data local to the GPU as possible and is just as applicable to compute workloads as to graphics.
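
Tile-based rendering itself is a hardware mechanism, but the compute-side habit it rewards is easy to show: stage a work-group’s tile of data in on-chip local memory so each value is fetched from external memory only once. The kernel below is a generic OpenCL C sketch of that pattern (a simple 3-tap blur), not an Imagination-specific API.

    /* Compute-side analogue of tiling: each work-group stages a tile of its
     * input in on-chip local memory so neighbouring work-items reuse values
     * without extra external-memory reads. */
    __kernel void blur3(__global const float *in,
                        __global float *out,
                        __local float *tile,   /* work-group size + 2 halo */
                        const int n)
    {
        int gid = get_global_id(0);
        int lid = get_local_id(0);
        int lsz = get_local_size(0);

        /* Load one element per work-item, plus a one-element halo each side. */
        tile[lid + 1] = (gid < n) ? in[gid] : 0.0f;
        if (lid == 0)
            tile[0] = (gid > 0) ? in[gid - 1] : 0.0f;
        if (lid == lsz - 1)
            tile[lsz + 1] = (gid + 1 < n) ? in[gid + 1] : 0.0f;
        barrier(CLK_LOCAL_MEM_FENCE);

        /* Each output now needs only local-memory reads. */
        if (gid < n)
            out[gid] = (tile[lid] + tile[lid + 1] + tile[lid + 2]) / 3.0f;
    }

On the host side, the tile is sized by passing (local_size + 2) * sizeof(float) and a NULL pointer to clSetKernelArg for the __local argument.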

IMAGINATION GPUs FOR AI

Imagination’s E-Series GPU IP overcomes the performance, memory, and power limits of edge AI. Designed for next-gen applications across mobile, automotive, desktop, and consumer markets, E-Series combines high performance with the flexibility and programmability of GPU acceleration – enabling more adaptable and future-proof edge system designs.

Explore E-Series

Imagination’s D-Series takes GPU density, efficiency and performance to a new level and adds innovative extras that help our partners succeed in competitive markets – whether that’s enabling automotive chip designers to achieve ASIL-B functional safety without overhead, or halving the area penalty of mobile ray tracing.

Explore D-Series

Imagination’s efficient GPUs improve user experiences across a wide range of consumer devices. IMG CXM is Imagination’s latest, highly efficient GPU, available in a variety of configurations to ensure the best fit for your project.

Explore IMG CXM GPU

B-Series covers all of Imagination’s markets, scaling from low-area configurations for set-top boxes right the way through to high-performance solutions for desktop. B-Series introduces Imagination’s innovative multi-core technology for boosting performance or adding extra multitasking flexibility to the GPU.

Explore B-Series

IMG A-Series contains Imagination’s smallest GPU IP and is perfect for industrial settings, consumer devices and entry-level mobile. Optimised for power efficiency, A-Series delivers consistent, sustainable frame rates without clock throttling.

Explore A-Series

FREQUENTLY ASKED QUESTIONS

What is edge AI?

Edge AI refers to artificial intelligence processing that happens directly on a device or local system, rather than relying on cloud servers. This allows for faster response times, reduced data transfer, and improved privacy – ideal for applications like automotive safety systems, smart cameras, and mobile devices.

Imagination’s Edge AI GPU IP delivers efficient, scalable AI compute capabilities optimised for power-constrained environments.

Why does edge AI matter?

As devices like cars, smartphones, and smart home systems become more intelligent, they need to process AI workloads locally for speed, security, and reliability. Edge AI helps minimise latency, lowers bandwidth use, and ensures critical functions work even without internet connectivity.

Our scalable AI and GPU IP enables high-performance edge computing with industry-leading power efficiency.

How does using a GPU for AI compare with a dedicated AI accelerator?

While dedicated AI accelerators focus purely on AI tasks, using a GPU for AI provides more flexibility. Imagination’s GPU IP supports both graphics rendering and AI compute, allowing a single hardware block to handle multiple workloads. This reduces chip complexity and cost while still delivering excellent AI performance – especially useful for multi-functional devices like smartphones and in-car systems.

Does Imagination’s AI IP work with RISC-V and other CPU architectures?

Yes! Imagination’s GPU and AI compute IP is designed to be compatible with a wide range of processor architectures, including RISC-V. This flexibility allows SoC designers to pair Imagination’s scalable AI IP with modern open-standard CPU architectures for highly customisable edge AI solutions.