Companion chips: The intelligent choice for AI?


For many years, the semiconductor industry has striven to integrate more and more components into a single system-on-chip (SoC). After all, it is an entirely practical solution for high-volume applications. By optimally positioning the various cores, memories and peripherals, chip manufacturers can minimise data pathways, improve power efficiency and optimise for high performance, while significantly reducing costs. The industry has very much succeeded with this approach, and the SoC is now a standard component of almost all our consumer electronics.

AI as standard

As companies begin to understand the potential of using neural networks for tasks ranging from natural language processing to image classification, the number of products incorporating some element of artificial intelligence is steadily increasing. Meanwhile, processing for these tasks is migrating from cloud-based architectures into the device itself, with dedicated hardware-based neural network accelerators now embedded into the SoCs themselves.

AI is being integrated into many SoCs

From voice-activated consumer electronics products such as virtual assistants, through to advanced driver-assistance systems (ADAS), the opportunity for integrated neural network-based AI is expanding across several market segments. Undeniably, AI is anticipated to become an essential element in many solutions.

One size doesn’t fit all

However, although the number of applications for AI is increasing, this doesn’t necessarily mean that SoCs with integrated AI acceleration are the way forward for all scenarios. Indeed, if AI is to reach across the majority of market segments, fragmentation will naturally occur because products using the technology have vastly different processing requirements. Fragmented markets are challenging to serve with dedicated SoCs, so a ‘one size fits all’ approach isn’t always applicable. While some markets, such as mobile phones or ADAS, promise high-volume opportunities for SoC vendors, many markets targeting the use of AI will naturally present as low-volume prospects. For example, some products may require AI for voice processing or image recognition, but not both; likewise, a smart home vendor is unlikely to use an SoC originally designed for smartphones just to embed AI capabilities into their control panel, as this would not be cost-effective.

Meet the AI companion chip

Multi-core chips are commonly found in desktop CPUs and mobile SoCs, as their scalable architecture enables them to deliver performance on demand. An AI ‘companion chip’ would take a similar approach. It would be designed with not just one but several GPU-compute and neural network accelerator (NNA) cores, providing sufficient performance for specific applications while keeping silicon area, and therefore chip cost, to a minimum. This processor would sit alongside the main application processor (SoC) as a companion chip, offloading the AI inference tasks that would normally be handled by an NNA core on the main application processor.
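
To make the offload decision concrete, here is a minimal Python sketch of how a host might route an inference job either to its own small on-chip NNA or to a companion chip, depending on the performance the workload demands. The `Accelerator` class, `pick_accelerator` helper and the TOPS figures are illustrative inventions for this sketch, not a real driver API:

```python
# Hypothetical sketch: routing inference to the on-chip NNA or a
# companion chip based on the performance the workload demands.
# "Accelerator" and "pick_accelerator" are illustrative, not a real API.

class Accelerator:
    def __init__(self, name, tops):
        self.name = name
        self.tops = tops  # rough peak throughput, in TOPS

def pick_accelerator(devices, required_tops):
    """Choose the least-capable device that still meets the requirement,
    so light workloads stay on the host NNA and heavy ones are offloaded."""
    capable = [d for d in devices if d.tops >= required_tops]
    return min(capable, key=lambda d: d.tops) if capable else None

devices = [
    Accelerator("on-chip NNA", 2),      # small accelerator on the main SoC
    Accelerator("companion chip", 20),  # multi-NNA companion processor
]

print(pick_accelerator(devices, 1).name)   # light task stays on-chip
print(pick_accelerator(devices, 10).name)  # heavy task is offloaded
```

Picking the least-capable device that still meets the requirement keeps the companion chip free for the workloads that actually need it.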

The SoC vendor is now afforded the opportunity to create a conventional, generic application processor capable of cost-effectively serving multiple markets, while turning to an AI companion chip to expand AI capabilities for targeted or niche applications.

OEMs, for their part, now have options to scale product solutions appropriately, depending on the AI processing overheads they expect to handle throughout their application.

An example AI processor: the number of NNAs would scale depending on the use case.

A typical companion AI SoC might include a generic control CPU for housekeeping tasks; a GPU core specifically designed for high-performance compute, as opposed to one devoted to graphics and 3D transform operations; plus several NNAs that may be combined as necessary to handle different neural networks and inference engines simultaneously, each using a different level of precision depending on the task at hand. For example, in a dual-NNA system, one NNA could execute an image-recognition task identifying faces in a scene, passing its results to a second NNA that simultaneously decomposes the faces into individual features to identify expressions.
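
The dual-NNA example is essentially a two-stage pipeline. The Python sketch below shows the structure: the two stages run concurrently and hand work across a queue, just as two NNA cores would hand results across shared memory. The `nna0_detect_faces` and `nna1_classify_expression` functions are plain-Python stand-ins for networks dispatched to separate cores, not a real accelerator API:

```python
# Sketch of pipelined inference across two accelerator cores. The two
# "nna" functions are stand-ins for neural networks running on separate
# NNA cores; here they run as threads so the pipeline structure is clear.
import queue
import threading

def nna0_detect_faces(frame):
    # Stand-in for a face-detection network running on NNA 0.
    return [f"face-in-{frame}"]

def nna1_classify_expression(face):
    # Stand-in for an expression-recognition network running on NNA 1.
    return f"expression({face})"

def run_pipeline(frames):
    faces = queue.Queue()  # hand-off buffer between the two cores
    results = []

    def stage1():
        for frame in frames:
            for face in nna0_detect_faces(frame):
                faces.put(face)
        faces.put(None)  # sentinel: no more faces coming

    def stage2():
        while (face := faces.get()) is not None:
            results.append(nna1_classify_expression(face))

    t1 = threading.Thread(target=stage1)
    t2 = threading.Thread(target=stage2)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

print(run_pipeline(["frame0", "frame1"]))
```

Because the stages overlap, the second core can classify faces from one frame while the first core is already detecting faces in the next.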

Another example might be automotive. A six-core AI companion chip could be partitioned so that three NNAs identify road signs (each performing a different aspect of that task), while the other three are dedicated to pedestrian detection. The number of NNAs and the distribution of tasks would depend on the requirements of the application. This concept could then be expanded into a family of dedicated AI processors, each with a different number of NNAs to address different performance points.
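
That partitioning can be sketched as dividing identical NNA cores between workloads in proportion to their relative demand. The core IDs, task names and weights below are made up for the example; a real driver or firmware would make this decision:

```python
# Illustrative sketch: splitting identical NNA cores between workloads
# in proportion to relative demand. Core IDs and weights are invented
# for the example; real hardware would partition this in firmware.

def partition_cores(core_ids, weights):
    """weights: mapping of task name -> relative share of the cores."""
    total = sum(weights.values())
    allocation, start = {}, 0
    for task, weight in weights.items():
        count = round(len(core_ids) * weight / total)
        allocation[task] = core_ids[start:start + count]
        start += count
    return allocation

# Six-core chip, split evenly between the two automotive tasks:
layout = partition_cores([0, 1, 2, 3, 4, 5],
                         {"road_signs": 1, "pedestrians": 1})
print(layout)  # {'road_signs': [0, 1, 2], 'pedestrians': [3, 4, 5]}
```

Note that the naive rounding here can leave cores unassigned for uneven weights; a production scheduler would balance any remainder rather than waste a core.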

Cloud to ground

We’re already seeing dedicated AI chips in the cloud, such as Google’s TPU and Microsoft’s Project Brainwave, which uses Intel Stratix FPGAs. Today, these are mainly used for training machine-learning algorithms for AI.

A typical cloud-based AI solution – it’s massive!

However, not all devices are connected to cloud-based servers, and across a plethora of different markets the industry acknowledges that at least some of the AI processing must be done on the device itself. Those markets are complex to serve and, as we’ve discussed, one SoC doesn’t fit all. Vendors across the industry are already applying neural networks to their particular requirements, and the move to companion AI chips promises to be an exciting new step in the evolution of AI processing solutions at the edge.

The end result is that companion AI chips might just become more ubiquitous than anyone anticipated. Imagination has more than 25 years’ experience building innovative cores for the semiconductor industry, making it a reliable partner for this type of task. To learn how PowerVR’s advanced GPUs and Neural Network Accelerator technologies can help you create your next AI SoC, refer to our website or contact Imagination for further details.

Simon Forrest

A graduate in Computer Science from the University of York, Simon possesses over 20 years’ experience in broadcast television, radio and broadband technologies and is author of several patents in this field. Prior to joining Imagination, Simon held the position of Chief Technologist within Pace plc.

