This article presents a flexible and general optimisation scheme for converting floating-point networks using Variable Bit Depth (VBD) compression as a step towards efficient, low-power inference.
Designers of neural network accelerator (NNA) IP have a Herculean task on their hands: making sure that their product is general enough to apply to a very wide range of current and future applications, whilst guaranteeing high performance. In the mobile, automotive, data centre and embedded spaces targeted by Imagination’s cutting-edge IMG Series4 NNAs, there are even more stringent constraints on bandwidth, area and power consumption. The engineers at Imagination have found innovative ways to address these daunting challenges and deliver ultra-high-performance, future-proof IP.
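To make the idea of variable bit depth concrete, here is a minimal sketch of the kind of per-layer quantisation such a scheme builds on: each layer's floating-point weights are mapped to a signed integer grid whose width (the bit depth) can differ from layer to layer. This is an illustrative example, not Imagination's actual VBD algorithm; the layer names and bit-depth assignments are hypothetical.

```python
import numpy as np

def quantize(weights, bits):
    """Symmetric uniform quantisation of a float tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q.astype(np.int32), scale

def dequantize(q, scale):
    """Map the integer grid back to approximate floating-point values."""
    return q * scale

# Variable bit depth: each layer gets its own precision, e.g. chosen to
# minimise accuracy loss under a bandwidth budget (assignments hypothetical).
layer_weights = {"conv1": np.random.randn(8, 8), "fc": np.random.randn(4, 4)}
bit_depths = {"conv1": 8, "fc": 5}
for name, w in layer_weights.items():
    q, s = quantize(w, bit_depths[name])
    err = np.max(np.abs(dequantize(q, s) - w))
    print(f"{name}: {bit_depths[name]} bits, max abs error {err:.4f}")
```

Lowering a layer's bit depth shrinks its storage and bandwidth cost roughly in proportion, at the price of coarser reconstruction; the optimisation problem is deciding where that trade-off is worth making.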