What Software Is Needed For Autonomous Vehicles?


To say that software is mission-critical in an autonomous vehicle is almost an understatement. It’s the differentiating factor between one vehicle and the next in terms of capability, performance and self-driving experience. It’s also the quality of the software, and how well its components work together, that gets you to your destination safely and swiftly.

The latest autonomous vehicles are fully featured artificial intelligence (AI) on wheels; they can just as fairly be described as data centres on wheels. They use AI to understand their environment, so they know what’s going on around them: to recognise objects and classify them as a person, car or truck, moving or stationary, and so on.

The AI then has to predict what will happen next and pass this information to the decision model, which decides what course of action to take. This calculates the safe driving “box” – that is, the space into which it is safe for the vehicle to move.
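
As a rough illustration of that last step, the sketch below shows one way predicted object positions could be used to shrink the region a vehicle is allowed to move into. It is plain Python with made-up numbers and hypothetical names (TrackedObject, SafeBox, safe_driving_box), not any vendor’s actual planner.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float      # longitudinal distance ahead of the ego vehicle (m)
    y: float      # lateral offset from the lane centre (m)
    vx: float     # longitudinal velocity relative to the ego vehicle (m/s)
    vy: float     # lateral velocity (m/s)

@dataclass
class SafeBox:
    max_forward: float   # furthest point the vehicle may move to (m)
    left: float          # lateral limits of the safe region (m)
    right: float

def predict(obj: TrackedObject, horizon_s: float) -> TrackedObject:
    """Constant-velocity prediction of where the object will be after horizon_s seconds."""
    return TrackedObject(obj.x + obj.vx * horizon_s,
                         obj.y + obj.vy * horizon_s,
                         obj.vx, obj.vy)

def safe_driving_box(objects, horizon_s=1.0, lane_half_width=1.75, margin=2.0):
    """Shrink the default box so it stops short of any predicted in-lane obstacle."""
    box = SafeBox(max_forward=100.0, left=-lane_half_width, right=lane_half_width)
    for obj in objects:
        future = predict(obj, horizon_s)
        in_lane = box.left <= future.y <= box.right
        if in_lane and 0.0 < future.x < box.max_forward:
            box.max_forward = max(0.0, future.x - margin)
    return box

# A pedestrian drifting towards the lane 20 m ahead caps the safe box well short of them.
objects = [TrackedObject(x=20.0, y=2.5, vx=-1.0, vy=-1.5)]
print(safe_driving_box(objects))
```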

See – think – do

As humans, we go through a “see-think-do” approach almost without conscious thought. We see or sense something (perception), think (evaluate the options available to us and weigh their outcomes) and then we “do” – that is, we take an action.

The compute engines in vehicles have to go through a similar process: they use their sensors (cameras, lidar, radar) to perceive their surroundings, predict movement paths and evaluate the options (decide), and then they issue an instruction for course correction (action).
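
A minimal sketch of such a loop is shown below, under the assumption of a fixed 50 ms cycle and with placeholder functions (read_sensors, perceive, decide and apply_controls are hypothetical stand-ins, not a real driving stack):

```python
import time

CYCLE_S = 0.05  # assumed 50 ms control cycle, purely for illustration

def read_sensors():
    """Placeholder: would return fused camera/lidar/radar data."""
    return {"camera": None, "lidar": None, "radar": None}

def perceive(sensor_data):
    """Placeholder: detect and classify objects (the 'see' step)."""
    return []

def decide(objects):
    """Placeholder: predict paths and evaluate options (the 'think' step)."""
    return {"steering": 0.0, "throttle": 0.1, "brake": 0.0}

def apply_controls(command):
    """Placeholder: send the chosen action to the actuators (the 'do' step)."""
    pass

def control_loop(cycles=3):
    for _ in range(cycles):
        start = time.monotonic()
        objects = perceive(read_sensors())   # see
        command = decide(objects)            # think
        apply_controls(command)              # do
        # Sleep off the remainder of the cycle so the loop runs at a fixed rate.
        time.sleep(max(0.0, CYCLE_S - (time.monotonic() - start)))

control_loop()
```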

The advantage a vehicle has is that it can do this in milliseconds, whereas a human takes far longer to react. The vehicle also has 360° vision and, unlike a human, no blind spots; again unlike a human, it is always paying attention – it won’t get distracted by something at the side of the road or a text coming in.

The role of neural networks

This AI ability is made possible by the advent and widespread adoption of neural networks, which are excellent at recognising and classifying objects. Through computer vision algorithms, cars can also monitor lane markings (white lines) effectively. These abilities are then built into a ruleset of what to do under specific circumstances – the self-driving model.
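
To make the lane-marking side concrete, here is a simplified classical computer-vision sketch using OpenCV’s Canny edge detector and probabilistic Hough transform on a synthetic road image. Real lane-keeping pipelines are far more sophisticated (and increasingly neural-network based), so treat this purely as an illustration.

```python
import cv2
import numpy as np

# Synthetic 200x200 "road" image with two white lane markings drawn on it.
frame = np.zeros((200, 200, 3), dtype=np.uint8)
cv2.line(frame, (60, 199), (90, 100), (255, 255, 255), 3)    # left marking
cv2.line(frame, (140, 199), (110, 100), (255, 255, 255), 3)  # right marking

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                        minLineLength=40, maxLineGap=10)

for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    print(f"lane segment: ({x1}, {y1}) -> ({x2}, {y2})")
```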

This model is trained using thousands of driving hours and millions of miles of real and simulated roads and is the “special sauce” of each autonomous vehicle software company. Their ability to train these models (in the data centre) using real road camera footage and simulated environments is essential.

These simulations are akin to realistic video games (and are created by the same video game companies in many cases), meaning that the cars can encounter both everyday events and unusual occurrences, preparing them for the real thing on the roads.
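
A hypothetical sketch of how real and simulated footage might be blended into one training set (PyTorch, with random tensors standing in for camera frames; this is not any company’s actual pipeline):

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Stand-ins for real road footage and simulator renders (tiny random "frames" + labels).
real_frames = TensorDataset(torch.rand(64, 3, 64, 64), torch.randint(0, 5, (64,)))
sim_frames  = TensorDataset(torch.rand(256, 3, 64, 64), torch.randint(0, 5, (256,)))

# Rare events can be over-represented in simulation, then blended with real data.
training_data = ConcatDataset([real_frames, sim_frames])
loader = DataLoader(training_data, batch_size=32, shuffle=True)

frames, labels = next(iter(loader))
print(frames.shape, labels.shape)  # torch.Size([32, 3, 64, 64]) torch.Size([32])
```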

In terms of the software models, the workflow involves training them (in “training mode”) and tuning them for accuracy over a period of months, constantly evolving and improving the software, and then running them on the vehicle for inferencing. The neural networks are run multiple times a second to infer, or “compute”, what the vehicle is seeing.
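
In PyTorch terms, the split between training mode and on-vehicle inferencing might look something like the toy sketch below. The model and data here are placeholders, and a production stack would export an optimised model rather than run the training framework directly.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))  # toy stand-in
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# "Training mode": iterate over data and adjust the weights to improve accuracy.
model.train()
for _ in range(100):
    x, y = torch.rand(32, 8), torch.randint(0, 3, (32,))  # fake features and labels
    optimiser.zero_grad()
    loss_fn(model(x), y).backward()
    optimiser.step()

# Inference on the vehicle: weights frozen, the network just computes what it "sees".
model.eval()
with torch.no_grad():
    prediction = model(torch.rand(1, 8)).argmax(dim=1)
print(prediction)
```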

These neural networks are known as convolutional neural networks (CNNs) and they can detect, classify and segment (the latter meaning separating the pavement from the road, for instance). In addition, other types of network, such as recurrent neural networks (RNNs), can be used; this family includes many architectures that involve loops and can work on temporally ordered data.
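
The toy network below sketches that idea: a shared convolutional backbone feeding a whole-image classification head and a per-pixel segmentation head. The layer sizes and label sets are invented purely for illustration.

```python
import torch
from torch import nn

class TinyDrivingCNN(nn.Module):
    """Toy CNN with a shared backbone, a classification head and a segmentation head."""
    def __init__(self, num_classes=4, num_seg_labels=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Classification: one label for the whole image (e.g. person / car / truck / none).
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )
        # Segmentation: one label per pixel (e.g. road / pavement / other).
        self.segmenter = nn.Conv2d(32, num_seg_labels, kernel_size=1)

    def forward(self, x):
        features = self.backbone(x)
        return self.classifier(features), self.segmenter(features)

model = TinyDrivingCNN()
frame = torch.rand(1, 3, 64, 64)                # stand-in camera frame
class_logits, seg_logits = model(frame)
print(class_logits.shape)                       # torch.Size([1, 4])
print(seg_logits.argmax(dim=1).shape)           # per-pixel labels: torch.Size([1, 64, 64])
```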

Imagination’s IP includes both GPUs for training and inference and the award-winning neural network accelerator (NNA). The NNA can schedule and run multiple different neural networks across its cores, or run one large network across all the cores in a multi-core configuration, resulting in low latency and ultra-high performance.

The future of autonomous software

Excitingly, the software we are talking about is constantly evolving, either by being manually tuned or by running in shadow mode, where it runs in the background, comparing its own decisions with those a human driver makes in a real vehicle. This increases the accuracy of the software models when they are exported for real-world use on the vehicle itself.
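
A heavily simplified sketch of that shadow-mode comparison, with hypothetical decision logs and an invented shadow_report helper (real systems compare continuous control signals and far richer state):

```python
# Hypothetical shadow-mode comparison: the model "drives" in the background and its
# decisions are logged alongside the human driver's, never sent to the actuators.
def shadow_report(model_decisions, human_decisions):
    """Summarise how often the model agreed with the human, and where it differed."""
    assert len(model_decisions) == len(human_decisions)
    disagreements = [
        (i, m, h)
        for i, (m, h) in enumerate(zip(model_decisions, human_decisions))
        if m != h
    ]
    agreement = 1.0 - len(disagreements) / len(human_decisions)
    return agreement, disagreements

model_log = ["keep_lane", "keep_lane", "brake", "keep_lane", "change_lane_left"]
human_log = ["keep_lane", "brake",     "brake", "keep_lane", "change_lane_left"]

agreement, diffs = shadow_report(model_log, human_log)
print(f"agreement: {agreement:.0%}")  # 80%
for frame, model_said, human_did in diffs:
    print(f"frame {frame}: model={model_said}, human={human_did}")
```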

For autonomous driving software, the story and the journey are just beginning!