Keeping an eye on AI: smart cameras and intelligence at the edge


Recently, we ran a webinar entitled “Enabling efficient implementation of neural networks in smart cameras”. If you missed it, it’s worth checking out, as we take a close look at the smart camera market and the need to embed neural network accelerators in edge devices.

For those with an eye on the industry, this move to placing intelligence in edge devices is notable, as a few years ago it seemed ‘the cloud’ was everything. It was the buzzword that never seemed to go away, and with good reason: off-loading computation to an external device offsets the cost of many activities that need to be performed efficiently, and with a large number of established cloud service providers in the market, there’s no shortage of choice. So, surely that’s all wrapped up? Why are we even talking about placing processing back into the edge device? Isn’t that a backward step? Can’t we rely on the cloud for everything?

Bandwidth, privacy, latency

Well, it’s not as simple as that. Many applications can’t function unless they’re connected to the cloud, which means generating a lot of data and consuming a lot of bandwidth. That places a strain on networks, especially when it might turn out that we don’t need all of that data (for example, why send pictures of empty roads?). It’s far more efficient to deal with it at source.
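The “deal with it at source” idea can be sketched simply: run detection on-device and upload only the frames that contain something of interest. Here is a minimal Python sketch with the on-device detector stubbed out; the function names and frame format are illustrative, not a real camera API.

```python
# Minimal sketch of filter-at-source: only frames that contain something of
# interest are sent upstream. detect_objects is a stand-in for a real
# on-device neural network.

def detect_objects(frame):
    """Stand-in for an on-device NN: returns labels found in the frame."""
    return frame.get("labels", [])

def frames_worth_uploading(frames, interesting=frozenset({"car", "person"})):
    """Yield only frames whose detections overlap the interesting set."""
    for frame in frames:
        if interesting.intersection(detect_objects(frame)):
            yield frame

# An hour of footage of a mostly empty road...
footage = [{"id": 1, "labels": []},
           {"id": 2, "labels": ["car"]},
           {"id": 3, "labels": ["bird"]},
           {"id": 4, "labels": ["person", "car"]}]

uploaded = list(frames_worth_uploading(footage))
print([f["id"] for f in uploaded])  # only frames 2 and 4 use bandwidth
```

In this toy run, half the footage never leaves the device, which is exactly the bandwidth saving the paragraph above describes.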

Then there are the privacy concerns. The term ‘hacker’ may be a bit, well, hackneyed, but our smart speakers and smart cameras are capturing a huge amount of data, much of it inherently private and sensitive. By analysing that data locally and only sending what is needed to the cloud, you remove a huge attack surface for those looking to snoop or steal.

Even with the low latencies promised by 5G, for devices that rely on near-instant decision making, the cloud will introduce an unacceptable delay. If your drone is travelling through a crowded environment, it needs to be able to process and negotiate obstacles almost immediately; if not, it won’t be able to travel at speed. Then there’s the classic current use case: autonomous cars. When someone has just stepped out in front of the vehicle, it can’t wait for a communication from a cloud that might or might not be there; it needs to make a decision instantaneously.

Pattern spotters

In the webinar, we touch upon some of the use cases for AI in smart cameras, and it got me thinking about some of the other interesting possibilities. There certainly seems to be no stopping AI from infiltrating every area of our lives. It’s even possible that AI will one day be able to write this post (no doubt some would argue that this would be an improvement!).
Already, companies such as Reuters are investing heavily in AI-supported journalism through a tool called Lynx Insight, though the key word here is supported. The AI doesn’t write the stories; it churns through large data sets, spots something unusual or interesting, and flags that information to a journalist, such as noting that a stock price has moved sharply, or other changes in a given market. After all, neural networks are highly adept at spotting these sorts of patterns faster than a human, but only a human can explain why they matter.
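The kind of pattern-spotting described here, surfacing a sharp price move for a journalist to investigate, can be sketched in a few lines. The threshold and data below are illustrative assumptions, not anything from Lynx Insight itself.

```python
# Sketch of the "flag it to a human" pattern: scan a series of daily prices
# and surface any day-on-day move larger than a threshold.

def sharp_moves(prices, threshold=0.05):
    """Return (day index, fractional change) for moves beyond the threshold."""
    flags = []
    for i in range(1, len(prices)):
        change = (prices[i] - prices[i - 1]) / prices[i - 1]
        if abs(change) >= threshold:
            flags.append((i, round(change, 4)))
    return flags

prices = [100.0, 101.0, 100.5, 92.0, 93.0]
print(sharp_moves(prices))  # [(3, -0.0846)]: day 3 dropped ~8.5%
```

The algorithm does no explaining at all; it simply narrows thousands of data points down to the handful worth a journalist’s attention.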

Moving specifically to smart cameras, there are interesting use cases in a number of categories, both commercial and consumer.

Starting with the former, you have cameras that can do things you would expect, such as identifying license plates. The next step would be for the camera to recognise the whole car, or even the occupants, automatically, which is ideal for security at an airport. Indeed, it’s already possible for a person to be picked out from a huge crowd by smart camera analysis, as this individual from China discovered.

There’s also the ability to recognise an abandoned package, noting when something has been put down and then suspiciously left behind, again with obvious benefits for security in a busy public place such as an airport.
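One plausible way to frame that abandoned-package logic: an object that has sat stationary, with no person nearby, for longer than some grace period gets flagged. The sketch below is purely hypothetical; the object tracker feeding it observations, the timestamps, and the grace period are all assumptions.

```python
# Hypothetical abandoned-object check. For one tracked object, we receive
# (timestamp, person_nearby) observations from an upstream tracker and flag
# the object once it has been unattended for longer than a grace period.

GRACE_PERIOD = 60  # seconds an unattended object may sit before alerting

def abandoned(observations, now, grace=GRACE_PERIOD):
    """observations: list of (timestamp, person_nearby) for one object."""
    last_attended = max((t for t, near in observations if near), default=None)
    first_seen = observations[0][0]
    unattended_since = last_attended if last_attended is not None else first_seen
    return now - unattended_since > grace

# A bag is placed at t=0; its owner lingers until t=20, then walks away.
sightings = [(0, True), (10, True), (20, True), (30, False), (90, False)]
print(abandoned(sightings, now=120))  # True: unattended for 100 seconds
print(abandoned(sightings, now=50))   # False: only 30s since last attended
```

The hard part in practice is the tracker itself (keeping the same identity on the bag across frames); the alerting rule on top is almost trivial.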

Shopping enhancers

Retail analytics is another important area. Take Amazon’s checkout-free stores (as in there are no checkouts; the products are not free), where shoppers can ‘grab and go’: they pick up what they want and simply walk out with it. Cameras identify each person, who is then automatically charged for what they take.
Then there are fast-food restaurants in China that use cameras to make menu suggestions based on your age and gender, and systems that enable planners to optimise a store layout by tracking customer movements. You could even recognise a VIP or high-spending customer and offer them personal service and attention. Suddenly, the personalised advertising scene in Minority Report doesn’t seem so far-fetched.

Keeping a lookout

You won’t be able to escape the cameras even in the car. In fact, cockpit cameras are on the rise, with a key role to play in Advanced Driver Assistance Systems (ADAS), specifically driver monitoring. Gaze tracking is used to ensure the driver is awake and paying attention to the road, and, if the driver is intoxicated, to disable the vehicle or engage the autonomous driving mode, if safe to do so. It will also help with the smooth handover between autonomous and driver-controlled states by assessing the attentiveness of the driver.
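As a rough illustration of how attentiveness might gate that handover, one could require a sustained run of eyes-on-road readings before returning control. The window length and threshold below are illustrative assumptions, not a description of any real ADAS implementation.

```python
# Illustrative attentiveness gate: hand control back to the driver only after
# a high proportion of recent gaze readings show eyes on the road.

from collections import deque

class HandoverMonitor:
    """Track recent gaze-on-road readings and decide if handover is safe."""

    def __init__(self, window=30, required_ratio=0.9):
        self.readings = deque(maxlen=window)
        self.required_ratio = required_ratio

    def update(self, eyes_on_road: bool):
        self.readings.append(eyes_on_road)

    def safe_to_hand_over(self) -> bool:
        if len(self.readings) < self.readings.maxlen:
            return False  # not enough evidence yet
        ratio = sum(self.readings) / len(self.readings)
        return ratio >= self.required_ratio

monitor = HandoverMonitor(window=5, required_ratio=0.8)
for reading in [True, True, False, True, True]:
    monitor.update(reading)
print(monitor.safe_to_hand_over())  # True: 4 of the last 5 frames attentive
```

Refusing to decide until the window is full is deliberate: a safety-critical handover should fail closed when there isn’t yet enough evidence of attentiveness.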

Big brother

And then there’s the home. For many people the home is the place where adding intelligence at the edge will have the most tangible impact.

As the technology becomes very affordable, more of us will become accustomed to having cameras in the home for security and peace of mind. However, the way many of these work today is relatively primitive. They can alert us when they detect motion, but the camera itself, or rather the supporting software, has no understanding of what it sees. The new generation of cameras will be able to recognise family members and do ‘smart stuff’, such as sending a notification when someone has returned home or departed, or warning when the kids are running amok! We will also see that intelligence applied to the footage, so you can ask, using your voice, about a specific incident, and the software will show you footage of those moments. For example, you could say, “let me know if the kids aren’t home by four.”
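A rule like “let me know if the kids aren’t home by four” could, in principle, be built on nothing more than local face recognition plus a simple check. The names and the notify hook below are purely illustrative; the set of people “seen today” is assumed to come from the camera’s on-device recognition.

```python
# Illustrative curfew rule on top of local face recognition. The camera is
# assumed to maintain the set of family members it has recognised today.

def check_curfew(seen_today, expected, deadline_passed, notify):
    """Alert for each expected person the camera has not recognised in time."""
    if not deadline_passed:
        return []
    missing = sorted(expected - seen_today)
    for name in missing:
        notify(f"{name} is not home yet")
    return missing

alerts = []
missing = check_curfew(seen_today={"Alice"},
                       expected={"Alice", "Bob"},
                       deadline_passed=True,
                       notify=alerts.append)
print(missing)  # ['Bob']
print(alerts)   # ['Bob is not home yet']
```

The point is that none of this logic needs a cloud round-trip: once recognition runs locally, the rule itself is a handful of set operations.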

However, as this review proves, it’s still early days for this sort of intelligence: it can take a couple of weeks to learn someone’s face and provide useful information. You also have to pay significant fees for a cloud service to enable those ‘smarts’. Wouldn’t it be better if that work could be done locally? It might cut off a potential revenue stream, but if much or most of the work could be achieved locally it would be far more efficient, saving time and money in terms of both bandwidth and power.

Entirely AI-powered cameras, such as Google Clip, have received mixed reviews, but it’s early days for the technology.

We also have devices such as the Amazon Echo Look, which uses a camera to analyse your clothing and make recommendations based on machine learning. Then there are clip-on cameras with the ‘intelligence’ entirely built in, designed to recognise when you or your family are doing something interesting and take pictures automatically: literally, an AI-powered camera. Again, the review notes that it doesn’t work very well, but this is just the start: with better, speedier, more power-efficient designs, it’s surely only a matter of time before the algorithms improve and the technology achieves better results.

Enhance, enhance

There is a plethora of creative uses to which neural networks can be applied. They now identify people and objects in photos as a matter of course: taken any pictures of a cat or a dog? Type ‘cat’ or ‘dog’ into your phone’s photo app search bar and see what you get. And while the cameras in our phones are getting inherently better, with more sensitivity to light and better processing, apps such as Phancer will take your regular photos and elevate them to DSLR-level results. There are also more photography ‘cheats’, such as the fake bokeh effects that many high-end phones now offer, all thanks to the power of neural networks.


It’s very apparent, then, that the use cases for neural networks in edge devices, and specifically cameras, are wide-ranging, but that it’s very much early days for the technology. Imagination certainly stands ready for this new age, and its PowerVR Series2NX hardware is ideal for delivering these solutions, offering very high performance and very low power consumption. To find out more, check out the webinar, and be sure to take a look at our blog posts on the Series2NX architecture and the two recently released cores, the PowerVR AX2185 and the PowerVR AX2145.

Benny Har-Even

With a background in technology journalism stretching back to the late 90s, Benny Har-Even has written for many of the top UK technology publications, across both consumer and B2B titles, and has appeared as an expert on BBC World Business News and BBC Radio Five Live. He is now Content Manager at Imagination Technologies.
