WHAT IS HYPERLANE GPU VIRTUALISATION?


As industries, from automotive to cloud gaming, increasingly demand scalable, secure, and high-performance GPU solutions, hardware-based virtualisation has emerged as a key requirement. At the core of Imagination’s GPU virtualisation offering is a critical innovation: HyperLane.

Below, we’ll explore how HyperLane works, what sets it apart, and why it underpins our virtualisation strategy across PowerVR and data centre-class GPUs.

WHAT IS HYPERLANE?

HyperLane is a hardware-software interface built into Imagination GPUs, enabling full hardware-level GPU virtualisation. It allows up to sixteen virtual machines (VMs) to access the GPU directly, each through a dedicated channel, without requiring runtime hypervisor involvement. This approach improves performance, simplifies software stacks, and enhances system reliability.
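
To make the "one dedicated channel per VM" idea concrete, here is a minimal C sketch of how the sixteen lanes and the one-off provisioning step might be modelled. Everything here (hyperlane_t, provision_hyperlane, the OSID and IRQ fields) is a hypothetical illustration, not Imagination's actual driver interface.

```c
/* Hypothetical model of the sixteen HyperLanes; names are illustrative only. */
#include <stdint.h>
#include <stdbool.h>

#define NUM_HYPERLANES 16          /* up to sixteen VMs, one dedicated lane each */

typedef struct {
    bool    in_use;                /* assigned to a VM by the hypervisor at setup */
    uint8_t osid;                  /* identity tagged onto the VM's GPU memory traffic */
    int     irq;                   /* dedicated completion interrupt for this lane */
} hyperlane_t;

static hyperlane_t lanes[NUM_HYPERLANES];

/* One-off provisioning step: after this, the VM talks to its lane directly
 * and the hypervisor stays out of the runtime path. Returns the lane index
 * handed to the VM, or -1 if all sixteen lanes are taken. */
int provision_hyperlane(uint8_t osid, int irq)
{
    for (int i = 0; i < NUM_HYPERLANES; i++) {
        if (!lanes[i].in_use) {
            lanes[i] = (hyperlane_t){ .in_use = true, .osid = osid, .irq = irq };
            return i;
        }
    }
    return -1;
}
```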

WHY VIRTUALISE A GPU?

Legacy virtualisation techniques, such as device emulation or paravirtualisation, rely heavily on the hypervisor to manage GPU tasks. This introduces:

  • Context switching and hypercall overhead
  • Increased CPU demand
  • Added software complexity and latency

HyperLane resolves these challenges by embedding the virtualisation control path directly into the GPU’s silicon.

HYPERLANE: UNDER THE HOOD

Each HyperLane instance acts as a fully isolated channel for a VM to submit tasks and receive results. The architecture includes:

  • Memory-Mapped Register Bank: Acts as the communication portal between VM and GPU, featuring doorbell mechanisms and status registers.
  • Dedicated Interrupt Line (GPU IRQ): Enables low-latency notification of task completions, bypassing the hypervisor.
  • OSID (Operating System ID): Tags every memory transaction with its originating VM, supports IOMMU/SMMU access control, and provides abstraction from physical memory, which is key to security and virtualisation integrity.
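
As a rough sketch of how these pieces look from a guest driver's point of view, the C fragment below assumes a hypothetical register layout (HL_REG_DOORBELL, HL_REG_STATUS, HL_REG_FENCE); the real offsets and semantics are defined by the GPU's reference documentation.

```c
/* Hypothetical register offsets within one HyperLane's memory-mapped bank. */
#include <stdint.h>

#define HL_REG_DOORBELL 0x00u      /* write: tell the GPU new work is queued    */
#define HL_REG_STATUS   0x04u      /* read: lane status flags                   */
#define HL_REG_FENCE    0x08u      /* read: ID of the last completed submission */

static inline void hl_write32(volatile uint8_t *base, uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(base + off) = val;
}

static inline uint32_t hl_read32(volatile uint8_t *base, uint32_t off)
{
    return *(volatile uint32_t *)(base + off);
}

/* Guest driver submits work with no hypercall: it builds a command buffer in
 * its own (OSID-tagged) memory, then rings the doorbell on its lane. */
void hl_submit(volatile uint8_t *lane_base, uint32_t submission_id)
{
    hl_write32(lane_base, HL_REG_DOORBELL, submission_id);
}

/* Handler for the lane's dedicated IRQ: the completion notification reaches
 * the VM directly, bypassing the hypervisor. */
void hl_irq_handler(volatile uint8_t *lane_base)
{
    uint32_t completed = hl_read32(lane_base, HL_REG_FENCE);
    (void)completed;               /* wake whoever was waiting on this fence */
}
```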

HOW HYPERLANE WORKS AT RUNTIME

Once provisioned by the hypervisor, each VM communicates directly with its assigned HyperLane. The GPU firmware processor receives and schedules workloads according to:

  • Quality of Service (QoS) priorities
  • Security rules
  • Isolation guarantees
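
As a toy illustration of the QoS element of that decision, the sketch below assumes a simple "highest priority among ready lanes" policy; the real firmware scheduler is more sophisticated and also enforces the security and isolation rules listed above.

```c
/* Toy QoS selection across lanes; purely illustrative. */
#include <stdbool.h>

#define NUM_HYPERLANES 16

typedef struct {
    bool has_work;                 /* doorbell rung, work pending on this lane */
    int  qos_prio;                 /* higher value = higher QoS priority       */
} lane_state_t;

/* Return the ready lane with the highest QoS priority, or -1 if the GPU is idle. */
int pick_next_lane(const lane_state_t lanes[NUM_HYPERLANES])
{
    int best = -1;
    for (int i = 0; i < NUM_HYPERLANES; i++) {
        if (lanes[i].has_work &&
            (best < 0 || lanes[i].qos_prio > lanes[best].qos_prio)) {
            best = i;
        }
    }
    return best;
}
```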

HyperLane supports both Type 1 (bare-metal) and Type 2 (hosted) hypervisors and is fully agnostic to the CPU virtualisation method and the system bus architecture.

THE BENEFITS OF HYPERLANE

FULL ISOLATION

Each VM operates in its own secure memory space, enforced via OSID and IOMMU. This makes HyperLane ideal for mixed-criticality systems such as automotive ECUs.
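
To illustrate the principle rather than the actual SMMU implementation, the sketch below shows how an OSID check might gate address translation: a mapping is honoured only for the VM that owns it, so traffic from any other lane faults instead of reaching foreign memory. All names and structures are hypothetical.

```c
/* Hypothetical OSID-gated translation; illustrative only. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    uint8_t  osid;                 /* owning VM */
    uint64_t gpu_va;               /* GPU virtual address of the mapping */
    uint64_t phys_addr;            /* backing physical address */
    size_t   size;                 /* length of the mapping in bytes */
} iommu_mapping_t;

/* Translate only if the requesting OSID owns the mapping; anything else is
 * treated as an isolation fault and never reaches physical memory. */
bool iommu_translate(const iommu_mapping_t *maps, size_t n_maps,
                     uint8_t osid, uint64_t gpu_va, uint64_t *phys_out)
{
    for (size_t i = 0; i < n_maps; i++) {
        if (maps[i].osid == osid &&
            gpu_va >= maps[i].gpu_va &&
            gpu_va <  maps[i].gpu_va + maps[i].size) {
            *phys_out = maps[i].phys_addr + (gpu_va - maps[i].gpu_va);
            return true;
        }
    }
    return false;                  /* isolation fault: the transaction is blocked */
}
```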

MAXIMUM PERFORMANCE

With direct GPU access and zero hypervisor runtime interference, HyperLane delivers low-latency, high-throughput performance ideal for real-time graphics and compute workloads.

ENHANCED SECURITY

HyperLane protects against malicious interference through driver isolation, denial-of-service (DoS) mitigation, and controlled context switching.

FLEXIBLE SYSTEM INTEGRATION

The technology is designed to scale across architectures, from SoCs to PCIe GPUs, and supports real-time operating systems (RTOS), trusted execution environments (TEEs), and safety-critical workloads.

REAL-WORLD APPLICATIONS OF HYPERLANE

Automotive: Consolidate ADAS, infotainment, and digital instrument clusters on a single GPU while maintaining fault domain separation.

Smart TVs and Set-Top Boxes: Isolate DRM and AI workloads securely.

Cloud Gaming: Share a single GPU across multiple players with dynamic resource scheduling.

WHY HYPERLANE MATTERS

HyperLane isn’t just a neat feature; it’s the foundation of modern GPU virtualisation. By moving virtualisation into hardware, we’ve created a platform that meets the security, performance, and scalability needs of embedded and cloud systems alike.

Download our white paper on HyperLane Virtualisation to explore the architecture in depth and see how Imagination is powering the next generation of intelligent graphics systems.