Imagination Technologies launches autonomous driving processor

Imagination Technologies has launched its IMG Series4, a neural network accelerator (NNA) for advanced driver-assistance systems (ADAS) and autonomous driving.

The automotive industry is on the cusp of a revolution, with new use cases such as self-driving cars and robotaxis demanding new levels of artificial intelligence (AI) performance.

Series4 has already been licensed and will be available on the market in December 2020. It features a new multi-core architecture delivering ultra-high performance of 600 tera operations per second (TOPS) and beyond, with low bandwidth requirements and exceptionally low latency for large neural network workloads.

Imagination’s low-power NNA architecture is designed to run full network inferencing while meeting functional safety requirements. Series4 comes with flexible software that provides fine-grained control and increases flexibility through batching, splitting and scheduling of multiple workloads, which can now be exploited across any number of cores.
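The trade-off between batching and splitting described above can be sketched in code. The names and logic below are illustrative assumptions for exposition, not Imagination's actual API: splitting divides one large network across all cores to minimise latency, while batching assigns independent workloads to different cores to maximise throughput.

```python
# Hypothetical sketch of batching vs. splitting workloads across NNA cores.
# All names here are illustrative assumptions, not Imagination's software.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    ops: float  # tera-operations required by this inference job


def split_across_cores(work: Workload, cores: int) -> list[float]:
    """Split one large workload evenly across all cores (lower latency)."""
    return [work.ops / cores] * cores


def batch_per_core(works: list[Workload], cores: int) -> dict[int, list[str]]:
    """Round-robin independent workloads onto cores (higher throughput)."""
    assignment: dict[int, list[str]] = {c: [] for c in range(cores)}
    for i, work in enumerate(works):
        assignment[i % cores].append(work.name)
    return assignment


# One 100-TOP network split over 4 cores: each core handles 25 tera-ops.
print(split_across_cores(Workload("lidar_net", 100.0), 4))

# Five independent camera streams batched onto 2 cores.
print(batch_per_core([Workload(f"cam{i}", 1.0) for i in range(5)], 2))
```

A real scheduler would also weigh memory bandwidth and synchronisation cost, but the two strategies above capture why fine-grained control over core allocation matters.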

The architecture also delivers ultra-high performance: 12.5 TOPS per core at less than one watt, making AI inference over 20x faster than an embedded GPU and 1,000x faster than an embedded CPU.

Cores can be combined into 2-, 4-, 6- or 8-core clusters, all of which can be dedicated to executing a single task, reducing latency and therefore response time.
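The figures above imply a simple scaling arithmetic, sketched below under the assumption (not stated in the article) that aggregate compute scales linearly with core count at the quoted 12.5 TOPS per core:

```python
# Illustrative arithmetic only: assumes linear scaling of the quoted
# 12.5 TOPS per core across the cluster sizes mentioned in the article.
TOPS_PER_CORE = 12.5


def cluster_tops(cores: int) -> float:
    """Aggregate compute for a cluster of `cores` cores, assuming linear scaling."""
    return TOPS_PER_CORE * cores


for cores in (2, 4, 6, 8):
    print(f"{cores}-core cluster: {cluster_tops(cores)} TOPS")

# Under this assumption, the headline 600 TOPS figure corresponds to
# multiple 8-core clusters working together:
print(600 / cluster_tops(8))  # 6.0
```

On these assumptions, an 8-core cluster reaches 100 TOPS, and the 600 TOPS headline figure corresponds to six such clusters.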

Lastly, Series4 includes IP-level safety features and a design process that conforms to ISO 26262, which is the industry safety standard that addresses risk in automotive electronics, to help customers to achieve certification.

According to Andrew Grant, Senior Director of Artificial Intelligence at Imagination Technologies, the IMG Series4 could become the industry-standard platform thanks to its superior technology.

“Wider adoption of neural networks will be an essential factor in the evolution from Level 2 and 3 ADAS, to full self-driving at Level 4 and Level 5,” he said. “These systems will have to cope with hundreds of complex scenarios, absorbing data from numerous sensors, such as multiple cameras and LiDAR, for solutions such as automated valet parking, intersection management and safely navigating complex urban environments.”
