Optimizing VR renderers with OVR_multiview


We’ve mentioned in a recent blog post how maintaining presence is key in virtual reality systems. Rendering applications at high framerates (60, 90 or 120 Hz depending on the Head Mounted Display’s maximum refresh rate) with low motion-to-photon latency is an important part of achieving it.

In this article, I’ll explain how the OVR_multiview extension can be used to reduce the CPU and GPU overhead of rendering a VR application.

Rendering without OVR_multiview

(Figure: a single wide FBO rendered from two viewpoints, followed by barrel distortion)

In a standard well-optimized VR application, the scene will be rendered to a Framebuffer Object (FBO) twice – once for the left eye, once for the right. To issue the renders, an application will do the following:

  • Bind the FBO
  • Left eye
    • Set viewport to the left-half of the FBO
    • Draw all objects in the scene using the left eye camera projection matrix
  • Right eye
    • Set viewport to the right-half of the FBO
    • Draw all objects in the scene using the right eye camera projection matrix
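In OpenGL ES terms, the steps above might look like the following sketch (a live GL context is assumed, and `eye_fbo`, `fbo_width`, `fbo_height`, `u_view_proj`, the matrix arrays and `draw_scene()` are hypothetical application-side names):

```c
/* Standard stereo rendering: one FBO, two viewports, two full scene submissions. */
glBindFramebuffer(GL_FRAMEBUFFER, eye_fbo);

/* Left eye: left half of the FBO. */
glViewport(0, 0, fbo_width / 2, fbo_height);
glUniformMatrix4fv(u_view_proj, 1, GL_FALSE, left_eye_view_proj);
draw_scene(); /* submits every draw call in the scene */

/* Right eye: right half of the FBO, with the same draws submitted again. */
glViewport(fbo_width / 2, 0, fbo_width / 2, fbo_height);
glUniformMatrix4fv(u_view_proj, 1, GL_FALSE, right_eye_view_proj);
draw_scene(); /* the entire stream of GL calls is repeated */
```

Note how only the viewport and the matrix uniform change between the two halves; everything else is duplicated work.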

Once the scene is rendered for each eye, the FBO contents are barrel distorted to correct pincushion distortion introduced by the HMD lenses.

(Figure: HMD lens distortion – barrel distortion applied to counter pincushion distortion)

In this solution the application has to submit two almost identical streams of GL calls, even though the only difference between the renders is the set of matrix transformations applied to vertices. This wastes application time submitting calls per-eye. It also wastes GPU driver time validating API calls and generating a GPU command buffer per-eye when a single shared command buffer would do.


With the OVR_multiview extension (and the layered OVR_multiview2 and OVR_multiview_multisampled_render_to_texture extensions), an application can bind a texture array to an FBO and instance draws to each element. This enables graphics drivers to prepare a single GPU command buffer and reuse it for each instanced render. When the extension is active, the gl_ViewID_OVR built-in can be accessed in vertex shaders to identify the element the draw will be rendered to.
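A sketch of the FBO setup, assuming the GL_OVR_multiview extension is available (the texture/FBO handle names and `eye_width`/`eye_height` are hypothetical; a live GL ES 3.0 context is assumed):

```c
/* Create a two-layer colour texture array: layer 0 = left eye, layer 1 = right eye. */
GLuint colour_tex;
glGenTextures(1, &colour_tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, colour_tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, eye_width, eye_height, 2);

/* Attach both layers of the array to the FBO as views 0 and 1. */
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                 colour_tex, 0 /* level */,
                                 0 /* baseViewIndex */, 2 /* numViews */);
```

A depth texture array would be attached in the same way. With this attachment in place, each draw call issued to the FBO is automatically broadcast to both views.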

On a tile-based GPU architecture such as PowerVR, tiling must be performed once per view. Once the tiling process completes, per-element pixel render tasks are kicked.

OVR_multiview: Optimizing draw submission

(Figure: a single draw stream targeting a two-element texture array, followed by barrel distortion)

A simple use case for OVR_multiview is to create a texture array consisting of two elements that represent the left and right eye images. Each frame, an application can render the elements by performing the steps below:

  • Bind the FBO (texture array attached)
  • Pass an array of transformation matrices to shaders as a uniform
    • Array consists of two elements – one transformation for the left eye, one for the right
  • Draw all objects in the scene
  • During vertex shader execution, use gl_ViewID_OVR to determine which matrix should be used for transformations
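The vertex shader side of the steps above might look like this sketch (the attribute and uniform names are hypothetical; GL_OVR_multiview2 is required here, which lifts the base extension's restrictions on how gl_ViewID_OVR may be used):

```glsl
#version 300 es
#extension GL_OVR_multiview2 : require
layout(num_views = 2) in;

// One view-projection matrix per eye: [0] = left, [1] = right.
uniform mat4 u_view_proj[2];
in vec3 a_position;

void main()
{
    // gl_ViewID_OVR is 0 when rendering the left eye view, 1 for the right.
    gl_Position = u_view_proj[gl_ViewID_OVR] * vec4(a_position, 1.0);
}
```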

With this simple change, an application can halve the number of OpenGL calls submitted to the driver!

OVR_multiview: Reducing fragment processing

Lenses that increase a user’s field of view are an essential part of an immersive VR system. To counter the pincushion distortion introduced by the lenses, barrel distortion must be applied before the image is displayed.

Unfortunately, modern GPUs are not designed to natively render barrel distorted images. VR applications must render a non-distorted image in a first pass and then barrel distort it in a second pass. This wastes GPU cycles and bandwidth in the first pass colouring texels that contribute little to the outer regions of the barrel, where the texel-to-pixel density is high in the second pass.

(Figure: per-region texture array renders at different resolutions, combined during barrel distortion)

As shown in the diagram above, OVR_multiview can be used to sub-divide the render into regions that better represent the pixel density of the barrel area they occupy. A simple implementation of this method would (per-eye) render a high-resolution, narrow field-of-view image for the centre of the barrel and a lower-resolution, wide field-of-view image for the outer regions of the barrel. During the barrel distortion pass, a fragment shader can be used to mix the high-resolution and low-resolution images based on the pixel coordinate within the barrel.
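The distortion-pass mixing could be sketched as the fragment shader below. Everything here is an assumption for illustration: the two sampler arrays (one layer per eye each), the inset covering the central 50% of the image, and the hard `step` selection, which a real implementation would likely replace with a blended transition band:

```glsl
#version 300 es
precision mediump float;

uniform mediump sampler2DArray u_centre; // high-res, narrow FOV (one layer per eye)
uniform mediump sampler2DArray u_outer;  // low-res, wide FOV (one layer per eye)
uniform int u_eye;                       // 0 = left, 1 = right
in vec2 v_barrel_uv;                     // barrel-distorted lookup coordinate
out vec4 frag_colour;

void main()
{
    // Assumed inset: the narrow FOV render covers the central 50% of each axis.
    vec2 centred = abs(v_barrel_uv - 0.5);
    float in_centre = step(max(centred.x, centred.y), 0.25);

    // Remap the central region into the narrow FOV texture's [0,1] range.
    vec2 narrow_uv = (v_barrel_uv - 0.25) * 2.0;

    vec4 hi = texture(u_centre, vec3(narrow_uv, float(u_eye)));
    vec4 lo = texture(u_outer, vec3(v_barrel_uv, float(u_eye)));
    frag_colour = mix(lo, hi, in_centre);
}
```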

In a render where the narrow field-of-view, full-resolution image accounts for 25% of the scene and the wide field-of-view render is half-resolution (25% of the full pixel count), the GPU only needs to colour 0.25 + 0.75 × 0.25 ≈ 44% of the pixels of a full-resolution render – roughly half – a huge reduction in fragment shader calculations and associated bandwidth. Of course, the savings made will depend on how small you can make the narrow field-of-view without introducing artefacts.


With the OVR_multiview extensions and a few simple application changes, VR applications can submit work to graphics drivers much more efficiently and reduce GPU overhead by rendering fewer pixels. If you want to know more about the work Imagination is doing to optimize VR rendering, I’d highly recommend reading Christian Pötzsch’s excellent blog post on reducing the latency of asynchronous time warping with strip rendering.

Joe Davis


Joe Davis leads the PowerVR Graphics developer support team. He and his team support a wide variety of graphics developers including those writing games, middleware, UIs, navigation systems, operating systems and web browsers. Joe regularly attends and presents at developer conferences to help graphics developers get the most out of PowerVR GPUs. You can follow him on Twitter @joedavisdev.
