Companion chips: The intelligent choice for AI?


For many years, the semiconductor industry has strived towards tightly integrating more and more components into a single system-on-chip (SoC). After all, it is an entirely practical solution for high-volume applications. By optimally positioning the various cores, memories and peripherals, chip manufacturers are able to minimise data pathways, improve power efficiency and optimise for high performance, while significantly reducing costs. The industry has very much succeeded with this approach, and the SoC is now a standard component of almost all our consumer electronics.

AI as standard

As companies begin to understand the potential of neural networks for tasks ranging from natural language processing to image classification, the number of products introducing some element of artificial intelligence is steadily increasing. Meanwhile, processing for these tasks is migrating from cloud-based architectures into the device itself, with dedicated hardware-based neural network accelerators now embedded into SoCs.

AI is being integrated into many SoCs

From voice-activated consumer electronics products such as virtual assistants, through to advanced driver-assistance systems (ADAS), the opportunity for integrated neural network-based AI is expanding across several market segments. Undeniably, AI is anticipated to become an essential element in many solutions.

One size doesn’t fit all

However, although the number of applications for AI is increasing, this doesn’t necessarily mean that SoCs with integrated AI acceleration are the way forward for all scenarios. Indeed, if AI is to reach across the majority of market segments, fragmentation will naturally occur because products using the technology have vastly different processing requirements. Fragmented markets are challenging to serve with dedicated SoCs, so a ‘one size fits all’ approach isn’t always applicable. While some markets, such as mobile phones or ADAS, promise high-volume opportunities for SoC vendors, many markets targeting the use of AI will naturally present as low-volume prospects. For example, some products may require AI for voice processing or image recognition, but not both; likewise, a smart home vendor is unlikely to use an SoC originally designed for smartphones just to embed AI capabilities into their control panel, as this would not be cost-effective.

Meet the AI companion chip

Multi-core chips are now commonplace in desktop CPUs and mobile SoCs, as their scalable architecture enables them to deliver performance on demand. An AI ‘companion chip’ would take a similar approach. It would be designed with not just one but several GPU-compute and neural network accelerator (NNA) cores, providing sufficient performance for specific applications while ensuring silicon area is optimised, keeping chip costs to a minimum. This processor would sit alongside the main application processor (SoC) as a companion chip, offloading the AI inference tasks that would normally be handled by an NNA core on the main application processor.
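To make the offload idea concrete, here is a minimal Python sketch of routing inference work between an on-SoC NNA and a companion chip. All names here (OnChipNNA, CompanionChip, route, the task labels) are hypothetical illustrations, not a real driver API:

```python
class OnChipNNA:
    # The main application processor's single embedded NNA core.
    capacity = 1

class CompanionChip:
    def __init__(self, num_nnas):
        # Number of NNA cores scales with the target application.
        self.num_nnas = num_nnas

def route(tasks, soc_nna, companion):
    """Run up to the SoC's capacity locally; offload the rest to the companion."""
    local = tasks[:soc_nna.capacity]
    offloaded = tasks[soc_nna.capacity:soc_nna.capacity + companion.num_nnas]
    return local, offloaded

local, offloaded = route(["asr", "face_id", "gesture"],
                         OnChipNNA(), CompanionChip(num_nnas=4))
print(local)      # ['asr']
print(offloaded)  # ['face_id', 'gesture']
```

The point of the sketch is the division of labour: the generic SoC keeps a small baseline of AI capability, while anything beyond it is directed to however many companion NNA cores the product was specified with.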

The SoC vendor is now afforded the opportunity to create a conventional, generic application processor capable of cost-effectively servicing multiple markets, while turning to an AI companion chip to expand AI capabilities for targeted or niche applications.

OEMs, meanwhile, now have options to scale product solutions appropriately, depending on the AI processing overheads they expect to handle across their application.

An example AI companion chip block layout: the number of NNAs would scale depending on the use case.

A typical companion AI SoC might include a generic control CPU for housekeeping tasks; a GPU core specifically designed for high-performance compute, as opposed to one devoted to handling graphics and 3D transform operations; plus several NNAs that can be combined as necessary to handle different neural networks and inference engines simultaneously, each using a different level of precision depending on the task at hand. For example, in a dual-NNA system, one NNA could be executing an image recognition task, identifying faces in a scene, before conveying the results to another NNA that simultaneously decomposes the faces into individual features to identify expressions.
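The dual-NNA example can be modelled as a two-stage pipeline, with the two accelerators running concurrently and handing results from one to the other. The sketch below is purely illustrative: the task names, precision labels and queue-based hand-off stand in for whatever mechanism a real companion chip would use.

```python
from dataclasses import dataclass
from queue import Queue
from threading import Thread

@dataclass
class InferenceResult:
    data: str        # stands in for a tensor of detected faces
    precision: str   # the precision the producing NNA ran at

def face_detector(frames, out_q):
    # NNA 0: identify faces in each frame (lower precision often suffices).
    for frame in frames:
        out_q.put(InferenceResult(data=f"faces({frame})", precision="int8"))
    out_q.put(None)  # sentinel: no more frames

def expression_classifier(in_q, results):
    # NNA 1: decompose each set of faces into features, at higher precision.
    while (item := in_q.get()) is not None:
        results.append(f"expressions[{item.data}] @ int16")

q, results = Queue(), []
producer = Thread(target=face_detector, args=(["frame0", "frame1"], q))
consumer = Thread(target=expression_classifier, args=(q, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)
```

Because the two stages overlap, the second NNA can be analysing the faces from one frame while the first is already detecting faces in the next, which is the concurrency the dual-NNA arrangement is meant to exploit.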

Another example might be in automotive. A six-core AI companion chip could be partitioned to identify road signs using three NNAs (each performing a different aspect of that task), while the other three are dedicated to pedestrian detection. The number of NNAs and the distribution of tasks would depend on the requirements of the application. This concept could then be expanded into a family of dedicated AI processors, each with a number of NNAs, to address different performance points.
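A static partition like the six-core automotive example could be expressed along these lines; the core names, workload names and sub-tasks below are invented for illustration:

```python
# Assign six hypothetical NNA cores to two concurrent automotive workloads.
NUM_NNAS = 6
workloads = {
    "road_sign_recognition": ["detect", "classify", "read_text"],   # 3 sub-tasks
    "pedestrian_detection":  ["detect", "track", "predict_path"],   # 3 sub-tasks
}

def partition(workloads, num_cores):
    """Greedy static partition: dedicate one NNA core to each sub-task."""
    assignment, core = {}, 0
    for name, subtasks in workloads.items():
        for sub in subtasks:
            if core >= num_cores:
                raise RuntimeError("not enough NNA cores for all sub-tasks")
            assignment[f"nna{core}"] = (name, sub)
            core += 1
    return assignment

mapping = partition(workloads, NUM_NNAS)
for core, (workload, sub) in mapping.items():
    print(core, "->", workload, ":", sub)
```

Changing `NUM_NNAS` (or the sub-task lists) is how a family of parts addressing different performance points would differ: a smaller device simply gets a shallower partition.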

Cloud to ground

We’re already seeing dedicated AI chips in the cloud, such as Google’s TPU and Microsoft’s Project Brainwave, which uses Intel Stratix FPGAs. Today, these are mainly used for training machine-learning algorithms.

A typical cloud-based AI solution – it’s massive!

However, not all devices are connected to cloud-based servers, and across a plethora of different markets the industry acknowledges that at least some AI processing must be done on the device itself. Those markets are complex to serve and, as we’ve discussed, one SoC doesn’t fit all. Vendors across the industry are already utilising neural networks for their particular requirements, and the move to companion AI chips promises to be an exciting new step in the evolution of AI processing at the edge.

The end result is that companion AI chips might just become more ubiquitous than anyone anticipated. Imagination has more than 25 years’ experience building innovative cores for the semiconductor industry, making it a reliable partner for this type of task. To learn how PowerVR’s advanced GPUs and Neural Network Accelerator technologies can help you create your next AI SoC, refer to our website or contact Imagination for further details.

Simon Forrest

A graduate in Computer Science from the University of York, Simon possesses over 20 years’ experience in broadcast television, radio and broadband technologies and is author of several patents in this field. Prior to joining Imagination, Simon held the position of Chief Technologist within Pace plc.

