Join Imagination and hundreds of computer vision professionals at the Embedded Vision Summit
This year’s Summit will be 100% online, where attendees will be able to watch presentations, ask questions of speakers, visit the virtual exhibit hall, see live demos of products and interact with vendors – all on a seamless virtual platform. This annual event brings together a global audience of companies developing leading-edge, vision-enabled products, including embedded systems, cloud solutions, and mobile applications. It’s the perfect way to keep connected to the products, technologies, people, and ideas that are building real products based on computer vision and visual AI.
Don’t miss the opportunity to hear one of our experts present a new approach to neural network accelerator design that is particularly relevant to embedded systems and enables richer feature sets on platforms with tight performance, bandwidth, power, and area budgets. In this talk, James Imber, Senior Research Engineer at Imagination Technologies, will discuss a method based on distillation that produces high-fidelity, low-precision networks for a wide range of network types, using the original trained network in place of a labelled dataset. The approach is directly applicable across multiple domains (e.g. classification, segmentation and style transfer) and can be adapted to numerous network compression techniques.
We look forward to seeing you there!
High-Fidelity Conversion of Floating Point Networks for Low-Precision Inference using Distillation with Limited Data
When converting floating point networks to low-precision equivalents for high-performance inference, the primary objective is to maximally compress the network whilst maintaining fidelity to the original, floating point network. This is made particularly challenging when only a reduced or unlabelled dataset is available. Data may be limited for reasons of a commercial or legal nature: for example, companies may be unwilling to share valuable data and labels that represent a substantial investment of resources; or the collector of the original dataset may not be permitted to share it for data privacy reasons. We present a method based on distillation that allows high-fidelity, low-precision networks to be produced for a wide range of different network types, using the original trained network in place of a labelled dataset. Our proposed approach is directly applicable across multiple domains (e.g. classification, segmentation and style transfer) and can be adapted to numerous network compression techniques.
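To give a flavour of the core idea, here is a minimal, hypothetical sketch (not the method presented in the talk) of distillation-based low-precision conversion: the float "teacher" network's own outputs stand in for labels, and a quantisation parameter is chosen to minimise the discrepancy between the teacher and its low-precision "student" on unlabelled calibration data. The toy single-layer network, the uniform symmetric quantiser, and the grid search over scales are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "teacher": a single float32 linear layer.
W = rng.normal(size=(16, 8)).astype(np.float32)

def teacher(x):
    return x @ W

def quantize(w, scale, bits=8):
    # Uniform symmetric quantisation to signed `bits`-bit integers,
    # returned in dequantised (float) form for easy comparison.
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return (q * scale).astype(np.float32)

# Unlabelled calibration inputs: no ground-truth labels are needed,
# because the teacher's outputs serve as the distillation targets.
x = rng.normal(size=(256, 16)).astype(np.float32)
targets = teacher(x)

# Pick the quantisation scale minimising the distillation loss
# (here, MSE between teacher and low-precision student outputs).
best_scale, best_loss = None, np.inf
for scale in np.linspace(0.005, 0.1, 50):
    loss = float(np.mean((x @ quantize(W, scale) - targets) ** 2))
    if loss < best_loss:
        best_scale, best_loss = scale, loss

print(f"best scale {best_scale:.4f}, distillation MSE {best_loss:.6f}")
```

In a full system the same teacher-as-target principle extends to fine-tuning the student's weights, not just calibrating scales, and the loss can be taken at intermediate layers as well as the output.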
Dr James Imber, Senior Research Engineer, Imagination Technologies
James is a member of Imagination Technologies’ AI Research team, where he works primarily on neural network accelerators, compilers and low-precision inference targeting embedded systems. He has 9 years’ experience as a researcher in the semiconductor IP industry, during which time he has accumulated 24 granted patents and has contributed to publications in international computer vision conferences including ECCV and ICPR. Last year he received Electronics Weekly’s BrightSparks award for young engineers in recognition of his research and his work promoting STEM subjects to secondary school pupils in the UK. His research interests include image processing, ray tracing, machine learning and computer vision. He undertook his PhD studies at the University of Surrey’s Centre for Vision, Speech and Signal Processing (CVSSP) on shape-assisted intrinsic image decomposition and holds a BEng from the University of Southampton in Electronic Engineering.