What Hardware Is Needed For Autonomous Vehicles?
The ability for cars to drive themselves has long been part of the popular imagination – from Herbie the lovable racing car in the 1970s Disney movies, to the sleek lines of KITT in Knight Rider. But thanks to advancements in modern technology, it is starting to become a reality.
However, the path to achieving this will not be easy.
Indeed, autonomous vehicles are incredibly demanding in terms of compute requirements. They need to be able to run neural networks for object identification, classification, and segmentation (where specific areas of a scene are identified) and then, additionally, run a series of computer vision algorithms to detect things such as white lines, road signs and other markers.
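To make the classical computer-vision part of this concrete, here is a hypothetical sketch (not any vendor's actual code) that fits a straight lane line to edge points taken from a synthetic camera frame using an ordinary least-squares fit. Real pipelines use far more robust techniques such as Hough transforms or RANSAC; the function name and data are illustrative only.

```python
# Hypothetical sketch: fit a lane marking as a straight line y = m*x + c
# to edge points. A real pipeline would first extract these points from
# a camera frame with an edge detector; here they are synthetic.

def fit_lane_line(points):
    """Ordinary least-squares fit of a line to (x, y) edge points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

# Synthetic edge points lying on y = 0.5*x + 2 in image coordinates.
edges = [(0, 2.0), (2, 3.0), (4, 4.0), (6, 5.0)]
slope, intercept = fit_lane_line(edges)
print(slope, intercept)  # prints 0.5 2.0
```

In practice this line-fitting step would run alongside the neural networks handling object identification, classification and segmentation, rather than instead of them.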
The car as an “edge-device”
In autonomous driving, the vehicle is an “edge device”, which needs to be able to process inputs from multiple cameras, including forward-facing cameras of up to 4K resolution that cover the main field of vision a traditional driver would have through the windscreen.
In addition, it needs to be capable of processing the data coming from multiple sensors around the vehicle, arriving at different times. These would typically be radar, lidar and additional cameras; many of the prototype vehicles on the road today carry up to 16 such sensors.
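As a rough illustration of that timing problem, the hypothetical sketch below buffers sensor readings in a priority queue keyed on capture timestamp, so a downstream fusion stage consumes them in capture order even when they arrive out of order over the in-vehicle network. The class and field names are assumptions for illustration.

```python
import heapq

# Hypothetical sketch: radar, lidar and camera readings arrive at
# different times; a min-heap keyed on capture timestamp hands them
# to the fusion stage in capture order.

class SensorBuffer:
    def __init__(self):
        self._heap = []

    def push(self, timestamp_ms, sensor, reading):
        heapq.heappush(self._heap, (timestamp_ms, sensor, reading))

    def pop_earliest(self):
        """Return the buffered reading with the oldest capture time."""
        return heapq.heappop(self._heap)

    def __len__(self):
        return len(self._heap)

buf = SensorBuffer()
buf.push(105, "radar", {"range_m": 42.0})   # arrives first...
buf.push(101, "lidar", {"points": 4096})    # ...but was captured earlier
buf.push(103, "camera", {"frame_id": 7})

while len(buf):
    print(buf.pop_earliest()[1])  # prints lidar, camera, radar
```

A production system would also have to bound the buffer's latency, since a self-driving stack cannot wait indefinitely for a late sensor reading.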
All these inputs need to be routed safely and securely around the vehicle so that they can be passed to the electronic control units (ECUs) where the decision-making is going to take place. At present, there are multiple ECUs inside the vehicle performing many different tasks. However, in the future, these will likely fuse into a dedicated computer brain.
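A toy illustration of that routing idea (stream names and topology entirely hypothetical): a static table maps each sensor stream to the ECU that processes it, much as an in-vehicle network forwards frames by stream identifier.

```python
# Hypothetical sketch: map each sensor stream to the ECU responsible
# for it. In a real vehicle this forwarding is handled by the
# in-vehicle network hardware, not by application code like this.

ROUTES = {
    "camera_front": "perception_ecu",
    "radar_front": "perception_ecu",
    "lidar_roof": "perception_ecu",
    "wheel_speed": "chassis_ecu",
}

def route(stream_id, payload):
    """Return the (ecu, payload) pair a frame should be delivered to."""
    ecu = ROUTES.get(stream_id)
    if ecu is None:
        raise KeyError(f"no route configured for stream {stream_id!r}")
    return ecu, payload

print(route("camera_front", b"frame-bytes"))
```

If the separate ECUs do fuse into a single computer brain, a table like this collapses too: most streams would simply point at the central compute unit.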
This process will take time to come to fruition, and in the short term, hybrid ECUs will be the norm. The compute tasks most associated with electric and autonomous vehicles are well suited to neural networks. These can be visual networks, such as convolutional neural networks (CNNs), as well as other types, such as recurrent neural networks (RNNs).
Importantly, all these neural networks require considerable compute capability, which is why vehicle manufacturers and the supply chain have turned to specialised chips to run them. Although historically some of these networks were run on CPUs, the neural network accelerator (NNA) has become the engine of choice for designers and developers.
This is because a neural network accelerator is an application-specific integrated circuit (ASIC), designed to run and schedule multiple neural networks very efficiently, often hundreds of times faster than an embedded CPU, providing benefits in both performance and power consumption.
Imagination’s automotive IP family
Imagination’s IP for accelerating ADAS and autonomous driving includes a range of neural network accelerators (NNAs). These are designed for high performance with low latency while consuming minimal bandwidth, making for highly power-efficient solutions that are ideal for the vehicles of today and tomorrow.
They are also designed to meet functional safety needs and can run in conjunction with Imagination’s graphics processing units (GPUs), which, as parallel processors, can flexibly run multiple tasks using conventional compute algorithms.
The car of today can have up to four terabytes of data circulating within it, and ensuring that this data reaches the right destination inside the vehicle is essential. Imagination’s Ethernet Packet Processor (EPP) distributes this data around the vehicle and, by meeting a clear need in the industry, has seen exceptional demand in recent years.
Indeed, it may be truer to go even further: given the amount of compute power now contained in a vehicle, the car has become a data centre on wheels, able to run the most demanding neural networks and compute algorithms.
And if today’s car transports several terabytes of data around the vehicle, what will tomorrow bring? It certainly won’t be less data – it will be much more, requiring ever greater capability and control.
In summary, multiple NNAs, together with GPUs and other Imagination IP such as the EPP, will be critical to the future of automotive and the development of successful self-driving vehicles.
Hardware, software and artificial intelligence all form the foundation of delivering achievable autonomy on the road.