Embedded Vision Summit Virtual Conference 2021

Join Imagination and hundreds of computer vision professionals at the Embedded Vision Summit

We’re delighted to offer you a 15% discount on registration – use promo code SUMMIT21PARTNER.

Technical Insight Presentation 

Don’t miss the opportunity to hear one of our experts present a new approach to low-precision neural network inference that is particularly relevant to embedded systems and enables richer feature sets on platforms with tight performance, bandwidth, power, and area budgets. In this talk, James Imber, Senior Research Engineer at Imagination Technologies, will discuss a distillation-based method that produces high-fidelity, low-precision networks for a wide range of network types, using the original trained network in place of a labeled dataset. The approach is directly applicable across multiple domains (e.g. classification, segmentation and style transfer) and can be adapted to numerous network compression techniques.

Title: High-Fidelity Conversion of Floating Point Networks for Low-Precision Inference using Distillation with Limited Data

Speaker: Dr James Imber, Senior Research Engineer, Imagination Technologies

Time: Wednesday, May 26, 11:00 am PT

Abstract

When converting floating-point networks to low-precision equivalents for high-performance inference, the primary objective is to maximally compress the network whilst maintaining fidelity to the original floating-point network. This is made particularly challenging when only a reduced or unlabeled dataset is available. Data may be limited for reasons of a commercial or legal nature: for example, companies may be unwilling to share valuable data and labels that represent a substantial investment of resources, or the collector of the original dataset may not be permitted to share it for data privacy reasons. We present a method based on distillation that allows high-fidelity, low-precision networks to be produced for a wide range of different network types, using the original trained network in place of a labeled dataset. Our proposed approach is directly applicable across multiple domains (e.g. classification, segmentation and style transfer) and can be adapted to numerous network compression techniques.
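For context, the general pattern of label-free distillation can be sketched in a few lines: a frozen floating-point teacher provides the training targets for a low-precision student, so no ground-truth labels are required. The sketch below is illustrative only and makes its own assumptions (placeholder `teacher`, `student` and `unlabeled_loader` objects, a simple output-matching loss); it is not the specific method presented in the talk.

```python
# Illustrative sketch only (not the method from the talk): distilling a frozen
# floating-point "teacher" network into a low-precision "student" using
# unlabeled inputs. `teacher`, `student` and `unlabeled_loader` are
# hypothetical placeholders.
import torch
import torch.nn.functional as F

def distill_low_precision(teacher, student, unlabeled_loader,
                          epochs=5, lr=1e-4, device="cpu"):
    """Train the low-precision student to match the frozen teacher's outputs,
    so no ground-truth labels are needed."""
    teacher.eval().to(device)
    student.train().to(device)
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)

    for _ in range(epochs):
        for inputs in unlabeled_loader:       # batches of inputs only, no labels
            inputs = inputs.to(device)
            with torch.no_grad():
                target = teacher(inputs)      # teacher output acts as the label
            loss = F.mse_loss(student(inputs), target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```

For classification networks the output-matching term would typically be a KL divergence over softened logits rather than MSE; the point is simply that the original floating-point network supplies the training signal in place of labeled data.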

Expert Bar – Live Q&A

Discussion Topic: Deep Neural Networks in Automotive

Speakers:

Gilberto Rodriguez, Director of AI Product Management at Imagination Technologies

Andrew Grant, Senior Director of Artificial Intelligence at Imagination Technologies

Time: Friday, May 28 from 11:30 am – 12:00 pm PT

Abstract: Come and get your questions answered about deep neural networks by experts from Imagination Technologies.

How are DNNs changing automotive human-machine interfaces? How are DNNs being used in automotive safety applications? What are the options and trade-offs for accelerating DNNs in automotive applications? How can automotive architects maximize flexibility, scalability and efficiency when deploying DNNs for ADAS and autonomy for mass production?