Embracing Flexibility in Edge AI

Whether you need intelligence in the palm of your hand, behind your TV, or in the robots in your factory, Imagination’s AI and compute solutions can deliver on your vision. We engineer pioneering IP that enables AI at the edge, built on developer-friendly software and highly capable, power-efficient and programmable hardware accelerators.

Discover how Imagination is helping customers succeed in our AI at the Edge statement.

Hardware and software solutions that scale

Imagination IP accelerates compute workloads at the edge at low power and low cost. We focus on delivering exceptional general-purpose solutions across everything from accelerator hardware to AI frameworks, allowing our customers to scale their projects as the amount of computation available at the edge increases.

AI on RISC-V CPUs

Imagination’s Catapult RISC-V CPUs are designed to meet the growing demand for AI compute. They support vector operations, provide compute libraries that boost utilisation, include features for fast data coupling with AI accelerators, and deliver improved AI workload performance when operating alongside an Imagination GPU.
Find out more

AI on GPUs

Imagination GPUs are highly valued edge accelerators. In power- and area-constrained environments, they can handle both compute and graphics workloads simultaneously. They are highly programmable, giving software developers the performance and tools to innovate with compute at the edge.
Find out more

AI Software

Imagination’s compute software solutions empower all stakeholders in the developer journey with a fit-for-purpose “functional to performant to optimal” workflow. We support the creation of open standards and vendor-neutral programming models by working closely with organisations like the UXL Foundation.
Find out more
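The “functional to performant to optimal” workflow can be sketched in miniature. The following is an illustrative toy in plain Python, not Imagination tooling (both function names are hypothetical): start with an obviously correct reference implementation, then introduce an optimised variant and keep the two in agreement.

```python
# Toy illustration of a "functional first, then performant" workflow
# (hypothetical example, not Imagination's actual software stack).

def matvec_reference(matrix, vector):
    """Step 1 (functional): a straightforward, obviously correct
    matrix-vector product used as the reference result."""
    out = []
    for row in matrix:
        acc = 0.0
        for a, b in zip(row, vector):
            acc += a * b
        out.append(acc)
    return out

def matvec_fast(matrix, vector):
    """Step 2 (performant): an optimised variant with a fused inner
    loop; any further tuning must keep matching the reference."""
    return [sum(a * b for a, b in zip(row, vector)) for row in matrix]

if __name__ == "__main__":
    m = [[1.0, 2.0], [3.0, 4.0]]
    v = [5.0, 6.0]
    # The correctness check that carries through each optimisation step.
    assert matvec_reference(m, v) == matvec_fast(m, v) == [17.0, 39.0]
```

The design point is the invariant, not the speed-up: each stage of the workflow is validated against the functional baseline before moving on.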

Experienced in AI

Our engineering teams have extensive experience in designing semiconductor technology for compute and AI. That experience began with the NNA product line, optimised for CNN-style workloads, which is currently shipping in numerous SoCs across the automotive and consumer markets, for example in the DAMO XuanTie TH1520 SoC.

Our engineers are now developing a new generation of general-purpose accelerators to handle the latest models. Book a meeting with the Sales team to see what is on the roadmap.

Speak with our experts

Automotive

The journey from driver assistance to automated driving is centred on evolving AI capabilities, on both the hardware and the software side. From driver monitoring to path management, today’s cars are growing in intelligence, supported by flexible, scalable and programmable accelerators from Imagination.

Learn more

Desktop

Generative AI is transforming business productivity and creative processes. A new era of AI laptops is emerging with new levels of compute power, but energy efficiency is still an essential feature. Imagination’s high-performance GPUs with DirectX support are perfect for delivering AI on the move.

Learn more

Mobile

Whether it’s removing an unwanted person from a selfie, adjusting the resolution of a favourite picture or delivering a high-quality voice-based user interface, AI on mobile is here to stay. It requires exceptionally capable processors that can handle complex AI algorithms within a limited power budget. Imagination’s energy-efficient processors are synonymous with smartphones, and our technology can help OEMs differentiate through advanced AI features.

Learn more

Consumer

On popular consumer devices, from the DTV to the smart home hub, edge-based AI workloads such as gesture recognition and natural language processing are being introduced so devices can stand out from the competition. But this hardware market is cost-sensitive and needs to deliver new features within a constrained silicon area. Imagination’s CPU and GPU IP for consumer devices packs a huge amount of performance and AI capability into a small package.

Learn more

How AI acceleration is transforming technology with the UXL Foundation

Learn more

“The technology that is improving our day-to-day experiences of driving, entertainment, healthcare and more, is increasingly data-intensive and complex. A new open and collaborative approach to computing is required to provide the necessary acceleration in an efficient and performant manner, whether in the cloud or at the edge. As a founding member of the Unified Acceleration Foundation, Imagination Technologies will help unite the technology ecosystem around the oneAPI spec and encourage its widespread adoption for compute and AI acceleration.”

Shreyas Derashri

Vice President of Compute Product Management at Imagination Technologies

Frequently asked questions

What are AI accelerator chips?

AI accelerator chips go by many names, such as Neural Network Accelerators (NNAs), Neural Processing Units (NPUs) and Machine Learning Engines. They are specialised processors designed to handle the complex computations required for artificial intelligence (AI) applications. Some examples of products that use AI accelerator chips include:

  • Smartphones: Many high-end smartphones, such as the iPhone 12 and Samsung Galaxy S21, use AI accelerator chips to power features such as facial recognition, voice recognition, and augmented reality.
  • Smart home devices: Smart home devices such as Amazon Echo and Google Nest Mini 2nd gen use AI accelerator chips to process voice commands and provide intelligent responses.
  • Self-driving cars: Autonomous vehicles use AI accelerator chips to process sensor data and make real-time decisions based on the surrounding environment. Read more about AI in self-driving cars.
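The “complex computations” these chips accelerate are dominated by multiply-accumulate (MAC) operations. A minimal, library-free Python sketch of one fully connected neural-network layer (illustrative only; the function name is hypothetical) shows the pattern an NPU executes in hardware at massive scale:

```python
# Minimal sketch of the multiply-accumulate (MAC) pattern that
# dominates neural-network inference and that NPUs/NNAs accelerate.
# Pure Python for clarity; a real layer runs millions of these MACs.

def dense_layer(inputs, weights, biases):
    """One fully connected layer: y = relu(W @ x + b)."""
    outputs = []
    for row, bias in zip(weights, biases):
        acc = bias
        for w, x in zip(row, inputs):
            acc += w * x               # one multiply-accumulate
        outputs.append(max(acc, 0.0))  # ReLU activation
    return outputs

x = [1.0, 2.0, 3.0]                      # input activations
W = [[1.0, 2.0, 1.0], [2.0, 0.0, 1.0]]  # one weight row per output neuron
b = [1.0, -10.0]                         # per-neuron biases
assert dense_layer(x, W, b) == [9.0, 0.0]
```

Because every output neuron repeats the same independent MAC loop, the work is massively parallel, which is exactly the property dedicated accelerators exploit.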

What are the benefits of AI processors?

AI processors offer several benefits over traditional processors, including:

Faster performance: AI processors are designed to handle the complex computations required for AI workloads, such as deep learning and machine learning, much faster than traditional processors. This allows for more efficient processing of large datasets and faster training of AI models.

Energy efficiency: AI processors are optimised to process large amounts of data in parallel, which they can do more energy-efficiently than traditional processors. This means that AI workloads can be completed more quickly and with less energy consumption, helping companies work towards net zero.

Improved accuracy: AI processors are designed to handle the specific computations required for AI workloads, which can lead to improved accuracy in AI models. This is especially important in applications such as image recognition or natural language processing, where accuracy is critical.

Scalability: AI processors can be scaled more easily than traditional processors, which allows for faster processing of larger datasets and more complex AI models. This makes it possible to train and deploy AI models more quickly and efficiently.

Specialised design: AI processors are built specifically for AI workloads, which means they can perform computations that would be impractically slow or power-hungry on traditional processors. This opens up new possibilities for AI applications, such as real-time object detection or speech recognition.

Overall, the benefits of AI processors make them essential for many AI applications, from self-driving cars to voice assistants to medical diagnosis tools.

Related Content
AI at the Edge – Blog Post

Advanced compute technologies are now commonplace tools for boosting productivity and transforming our day-to-day experiences.

Read more