Introducing the PowerVR CLDNN SDK

Recently we released our first AI-oriented SDK: the PowerVR CLDNN SDK. In this post we'd like to explain what it is, what it's for, and how to use it, so read on for more.

The CLDNN API

The PowerVR CLDNN API enables fast, efficient development and deployment of convolutional neural networks (CNNs) on PowerVR devices. The API generates highly optimised graphs and OpenCL™ kernels based on your network architecture, so developers can focus on fine-tuning the network without needing in-depth OpenCL knowledge. The API also performs low-level, hardware-specific optimisations, producing more efficient graphs than a custom user OpenCL implementation and therefore higher performance (more inferences per second).
CLDNN sits on top of OpenCL, but does not obscure it. It makes use of OpenCL constructs, so it can be used alongside other custom OpenCL code, and it uses standard OpenCL memory, so it can be used alongside standard OpenGL ES™ contexts.
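
As an illustration of the standard OpenCL objects involved, here is a minimal host-side setup sketch. This is generic OpenCL rather than CLDNN-specific code; the cl_mem buffer it creates is the kind of standard OpenCL memory that could be shared between CLDNN-generated kernels and an application's own kernels.

```cpp
// Minimal OpenCL host setup: generic OpenCL, not CLDNN-specific code.
// The cl_mem buffer below is standard OpenCL memory of the kind that can
// be shared between generated and custom kernels.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    // Pick the first platform and the first GPU device on it.
    err = clGetPlatformIDs(1, &platform, nullptr);
    err |= clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    if (err != CL_SUCCESS) { std::fprintf(stderr, "No OpenCL GPU found\n"); return 1; }

    // Create a context and an in-order command queue.
    cl_context context = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(context, device, 0, &err);

    // Allocate a buffer to hold, say, a 224x224 RGB input image as floats.
    std::vector<float> input(224 * 224 * 3, 0.0f);
    cl_mem inputBuf = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                     input.size() * sizeof(float), input.data(), &err);

    // ... hand inputBuf to generated kernels or to custom OpenCL code ...

    clReleaseMemObject(inputBuf);
    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    return 0;
}
```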

The CLDNN SDK

Our PowerVR CLDNN SDK demonstrates how a neural network can be deployed to PowerVR hardware through the PowerVR CLDNN API. It includes various helper functions, such as file loading, dynamic library initialisation and OpenCL context management. There is also documentation in the form of a PowerVR CLDNN reference manual, which explains all of the CLDNN API's functions.
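
As an aside, dynamic library initialisation on Linux typically follows the POSIX dlopen/dlsym pattern sketched below. The library and symbol names here ("libCLDNN.so", "cldnnCreateContext") are invented placeholders for illustration only; the SDK's helper code and reference manual give the real entry points.

```cpp
// Illustration of dynamic library initialisation with POSIX dlopen/dlsym.
// NOTE: "libCLDNN.so" and "cldnnCreateContext" are hypothetical placeholder
// names for this sketch; consult the PowerVR CLDNN reference manual for the
// SDK's actual library name and entry points.
#include <dlfcn.h>
#include <cstdio>

int main() {
    void* handle = dlopen("libCLDNN.so", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // Resolve a function pointer by name (placeholder signature).
    using CreateContextFn = int (*)(void**);
    auto createContext =
        reinterpret_cast<CreateContextFn>(dlsym(handle, "cldnnCreateContext"));
    if (!createContext) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    // ... call into the library through the resolved pointer ...

    dlclose(handle);
    return 0;
}
```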
We have included the source code for sample applications that show how to use the PowerVR CLDNN API: a simple introduction to the API, a more complex number classification example and, finally, an image classification example. Between them, the examples show how to deploy the well-known "LeNet" and "AlexNet" neural network architectures using the PowerVR CLDNN API.
We have created an image that developers can flash to an Acer Chromebook R-13, which has a PowerVR GX6250 GPU. This is the only way to make full use of the SDK at this time.
You'll need an Acer Chromebook R-13, flashed with the image we provide, to make use of the PowerVR CLDNN SDK.

Demo

We also have a demo available to run on the Acer Chromebook R-13 image. It takes a live camera feed and identifies the object the camera is pointing at. A camera frame is passed to the CNN, and a label is output on the screen along with a confidence percentage, indicating how sure the network is of its response to the input image.
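
The final classification step can be sketched independently of any particular API. The snippet below (illustrative only, not the demo's actual code) shows the usual way raw network outputs are converted into a top-1 label and a confidence percentage via softmax:

```cpp
// Sketch of turning raw network outputs ("logits") into a top-1 label and a
// confidence percentage via softmax. Illustrative only, not the demo's code.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical raw scores for 5 classes, as a CNN's final layer might emit.
    std::vector<float> logits = {1.2f, 0.3f, 4.7f, 2.1f, -0.5f};

    // Softmax, with the max subtracted for numerical stability.
    float maxLogit = *std::max_element(logits.begin(), logits.end());
    std::vector<float> probs(logits.size());
    float sum = 0.0f;
    for (size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp(logits[i] - maxLogit);
        sum += probs[i];
    }
    for (float& p : probs) p /= sum;

    // Top-1: the most probable class and its confidence.
    size_t best = std::distance(probs.begin(),
                                std::max_element(probs.begin(), probs.end()));
    std::printf("class %zu, confidence %.1f%%\n", best, probs[best] * 100.0f);
    return 0;
}
```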
The demo implements a selection of well-known network models.

Each network has different characteristics, so different networks may perform better in different scenarios. The key high-level characteristics are the number of operations and the memory usage, which directly influence the speed and accuracy of the network. All of the network implementations used in the demo are Caffe models trained on the ImageNet dataset, and the demo includes a benchmark function for comparing them.
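
For reference, an inferences-per-second measurement of the kind a benchmark function performs can be sketched as follows, with a hypothetical runInference() standing in for one full forward pass through the network:

```cpp
// Sketch of an inferences-per-second measurement. runInference() is a
// hypothetical stand-in for one full forward pass through the network.
#include <chrono>
#include <cstdio>

void runInference() { /* one forward pass through the network */ }

int main() {
    const int iterations = 100;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        runInference();
    }
    auto end = std::chrono::steady_clock::now();
    double seconds = std::chrono::duration<double>(end - start).count();
    std::printf("%.1f inferences/second\n", iterations / seconds);
    return 0;
}
```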

What’s Next?

Developers who get to grips with the PowerVR CLDNN API now will be well placed for the future release of our PowerVR Series2NX hardware and its associated APIs, which are likely to be very similar to CLDNN.

Further Information

If you have any questions, visit our Contact page for details on how to get in touch with us.

To keep up with the latest from Imagination, follow us on social media: on Twitter @ImaginationTech, and on LinkedIn, Facebook and Google+.
