Tags: GPU, Ray Tracing, AR/VR

The role of the GPU in experiencing the Metaverse

Jan 10, 2022  |  4 min read

How we’re harnessing the power of GPUs for both virtual immersion and large-scale data processing.

“As Hiro approaches the street, he sees two young couples, probably using their parents’ computers for a double date in the Metaverse, climbing down out of Port Zero, which is the local port of entry and monorail stop. He is not seeing real people, of course. This is all a part of the moving illustration drawn by his computer according to specifications coming down the fiber-optic cable. The people are pieces of software called avatars. They are the audiovisual bodies that people use to communicate with each other in the Metaverse.”

–From Snow Crash, Neal Stephenson (Bantam, 1992)

A brand-new era

Although the concept, an evolution of cyberspace along more social lines, has been around since the early 1990s, the Metaverse is now having its spotlight moment, with companies such as Epic Games, Microsoft, and most recently Facebook (now Meta) announcing plans to create interconnected worlds spanning both physical and virtual environments.

The names involved imply the scope of the Metaverse: work, games, social. But there will be room for others, from education to therapy to politics. Many of these things were also part of the Second Life experience of the early noughties, but this time, thanks to the use of augmented and virtual reality (AR/VR), the Metaverse promises to be a more immersive experience. It’s probably best to think of it as a virtual version of the world in which many of the things you can do in the real world might also be possible in the virtual.

Bringing this ambitious concept to life does come with its challenges. As interoperability between platforms, programs, and apps will be key for a fully functioning ecosystem, the software will need to evolve to facilitate that. We are likely to see some impressive software jumps over the coming years that will allow for massive amounts of data to be managed in new and exciting ways.

But while software is a key pillar for converging multiple realities, hardware will be critical to enabling people to interact with this new online world. With full-body haptics and full virtual reality rigs still some way off, it is very likely that during the transition to the Metaverse, AR and VR technologies will be the main means of experiencing it. This presents a significant opportunity for the silicon industry, especially from a GPU perspective: to drive the innovation that will immerse users, while leveraging parallelism to tackle data challenges.

The current state of the Metaverse

Technically, we’re already seeing smaller-scale ‘Metaverses’. A popular example is Fortnite from Epic Games: events like the 2019 Marshmello concert gathered over 10 million attendees for a purely social experience, and many other events followed, combining several IPs – such as the Rift Tour, featuring Ariana Grande, or the Marvel-inspired Nexus Wars. Similarly, in the gaming space, Roblox has launched its ‘Developer Marketplace’, which enables developers to monetise games, assets, plug-ins, 3D models and more – effectively creating ways to monetise work within the game itself.

But the Metaverse is not just about gaming. Microsoft is one of the largest cloud vendors in the world and owns a wide range of work software and services spanning systems with hundreds of millions of users via Office 365 and LinkedIn. And with Facebook (Meta) having just announced its ambitious plans to make the Metaverse a reality, we’re starting to get a better picture of what the ecosystem might look like – a continuous stream of interconnected data of all kinds, travelling over platforms, systems, and applications while enabling users to truly exist in an online, virtual space.

The potential of AR and VR

When thinking about how people would interact with the Metaverse, XR hardware is the obvious answer. It’s easy to imagine throwing a VR headset on, connecting to a Ready Player One-type rig, and instantly being teleported to a virtual world of your choice.

However, VR headsets are still at the beginning of their journey and have relatively high barriers to entry due to the expensive technologies that go into them. Simply put, a fully immersive virtual environment requires high-resolution displays with high refresh rates. With clever innovations such as foveated rendering currently in development, headsets will continue to evolve – with each iteration becoming more accessible and efficient.
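To make the idea concrete, foveated rendering can be thought of as a shading-rate map that is dense at the gaze point and progressively coarser in the periphery. The sketch below is purely illustrative – the function name, zone radii, and rate values are assumptions for demonstration, not any headset's real API:

```python
import numpy as np

def shading_rate_map(width, height, gaze_x, gaze_y, inner_radius, outer_radius):
    """Illustrative three-zone foveation map: full shading rate near the
    gaze point, coarser rates toward the periphery. (Hypothetical helper.)"""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_x, ys - gaze_y)   # distance from gaze point
    rate = np.ones((height, width))             # 1.0 = full-rate shading
    rate[dist > inner_radius] = 0.5             # half rate in the mid-periphery
    rate[dist > outer_radius] = 0.25            # quarter rate in the far periphery
    return rate

# Example: 640x480 frame with the user looking at the centre.
rate = shading_rate_map(640, 480, 320, 240, inner_radius=100, outer_radius=300)
```

In a real pipeline such a map would feed the GPU's variable-rate shading hardware; even this crude three-zone split shades only a small central region at full rate, which is where the power savings come from.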

In terms of early Metaverse interfaces, AR has shown significant growth – with games such as Pokémon Go effectively crowdsourcing 3D scans of the real world from their users to deepen the AR immersion. Niantic’s recent acquisitions of 6d.ai and Scaniverse highlight the company’s plans to build AR hardware and software that can fully merge real and digital environments.

Microsoft, Apple, Facebook, and Google are also developing AR products that will likely rely on continuous physical environmental scans for better user immersion. And, with an expected compound annual growth rate of 43.8% from 2021 to 2028 (Grand View Research, February 2021), the AR market is predicted to continue growing at a rapid pace.

AR’s true potential goes beyond games. The technology could have a major cross-industry impact – from medical training to retail implementations, to business logistics and even interior design. A more accessible solution when compared to VR, augmented reality could be the biggest driver in the transition to the Metaverse.

GPUs, visual immersion, and overcoming data challenges

With decades of experience in creating innovative mobile GPU solutions for over 10 billion devices to date, Imagination recognises the vital role that graphics will play in creating truly immersive Metaverse worlds. Our recently announced IMG CXT GPU is the first solution to bring hardware ray tracing to mobile – all while being highly efficient from a hardware point of view.

In 2020 we established the Ray Tracing Levels System (RTLS), which identifies six levels of existing and future solutions – ranging from Level 0 to Level 5. CXT operates at Level 4 on the RTLS, meaning that not only does it accelerate bounding volume hierarchy (BVH) ray traversal in hardware, but ray coherency sorting is also performed on-chip. This allows us to group rays that travel in similar directions and take advantage of the GPU’s parallel compute ability – similar in principle to tile-based rendering – increasing overall ALU utilisation and testing efficiency.
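As a rough illustration of the principle (not Imagination's actual hardware scheme), coherency sorting can be approximated in software by bucketing rays on the sign octant of their direction vectors, so rays likely to visit the same BVH nodes are traced together:

```python
import numpy as np

def sort_rays_by_octant(directions):
    """Reorder rays so those sharing a direction-sign octant are adjacent.
    Illustrative software stand-in for on-chip ray coherency sorting."""
    d = np.asarray(directions, dtype=float)
    # Build a 3-bit octant key from the signs of (x, y, z); rays in the same
    # octant head in broadly the same direction, so batching them keeps the
    # parallel ALUs working on similar BVH traversal paths.
    key = ((d[:, 0] < 0).astype(int) * 4
           + (d[:, 1] < 0).astype(int) * 2
           + (d[:, 2] < 0).astype(int))
    order = np.argsort(key, kind="stable")  # stable sort keeps submission order
    return order, key[order]
```

Grouping by octant is the simplest possible coherence heuristic; a production scheme would sort at much finer granularity, but the payoff is the same – neighbouring rays in a batch do similar work.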

In the short term, solutions such as CXT could allow companies to bring ray tracing to mobile AR environments, meaning more realistically rendered digital objects would be available across sectors and applications.

As XR hardware evolves, we will start to see hybrid processing solutions where mobile GPUs could be embedded in the headsets themselves. The balance between performance and battery consumption will be more important than ever in this case. CXT has a unique advantage here as a scalable solution, enabling SoC manufacturers to reach up to 9TFLOPS of FP32 rasterised performance and over 7.2GRay/s of ray-traced performance, while offering up to 2.5x greater power efficiency than Level 2 and Level 3 RTLS solutions.

Of course, AR and VR technologies are just the tip of the iceberg when it comes to the Metaverse. The ecosystem itself will likely bring a lot of software and data processing challenges, as well as impressive innovations. However, GPUs have been successfully used to solve big data problems in recent years, as they can execute thousands of threads simultaneously, delivering higher throughput and memory bandwidth than traditional CPUs. GPUs can also unlock a new layer of efficiency when it comes to rendering digital worlds through innovations such as Imagination’s coherency sorting hardware. In other words, graphics processing units will play a significant role in how data will be managed in the Metaverse.
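The data-parallel model behind that claim can be sketched in a few lines: one operation applied uniformly across millions of elements, which is exactly the shape of work a GPU maps onto thousands of threads. NumPy stands in for the GPU here, and the function is an illustrative example rather than any real pipeline stage:

```python
import numpy as np

def brighten(pixels, gain):
    """Apply one instruction to every pixel at once (SIMD-style), the same
    pattern a GPU executes with thousands of threads in parallel."""
    return np.clip(pixels * gain, 0.0, 1.0)

# A full HD frame: ~6.2 million floating-point values processed in one call,
# rather than a per-pixel loop.
frame = np.random.default_rng(0).random((1080, 1920, 3))
out = brighten(frame, 1.5)
```

Whether the elements are pixels, rays, or rows of a dataset, the pattern is the same – which is why the hardware built for rendering also turns out to be well suited to large-scale data processing.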

While the actual Metaverse will take a while to materialise, it is an exciting time to be in the industry. Competition breeds innovation and, with more and more players joining the race for this new online era, we’re bound to see some outstanding software, GPU, CPU, and AI developments along the way.


About the Author

Kristof Beets is VP of Technology Insights at Imagination, where he drives the alignment of the technology roadmaps with market trends and customer needs as part of the IMG Labs Research organisation. He has a background in electrical engineering and received a master’s degree in artificial intelligence. Prior to joining the Labs Team he was part of the Business Development and Product Management Teams. Before this he worked on SDKs and tools for both PC and mobile products and ran the Competitive Analysis and Demo Teams as a member of the PowerVR Developer Relations and Ecosystem Teams. Kristof has written numerous articles and whitepapers which have been published in the ShaderX and GPU Pro Series of books and online by the Khronos Group, Beyond3D, 3Dfx Interactive and of course Imagination. Kristof has spoken at GDC, SIGGRAPH, Embedded Technology and MWC among others.
