MR BILL LU, CEO – ZBIT SEMICONDUCTOR INC. | Topic: A NOR Flash Based Convolution Computation for On-device AI Inference Applications
More and more artificial intelligence (AI) applications demand massive on-device parallel computing power and local inference capability. Computing in Memory (CIM) eliminates the need for the CPU/GPU to fetch data from memory, greatly increasing computing throughput while dramatically reducing power consumption. This presentation will introduce a new approach to convolution operations using the NOR Flash technology from Zbit Semiconductor, Inc. A test-chip simulation shows that power consumption could be reduced by 500X-1000X and the overall chip cost by more than 30X-50X. This on-device inference approach can be widely applied to offline human-machine language interaction, drones and robots, battery-powered surveillance cameras, and cell-phone pattern-recognition applications.
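To make the idea concrete, here is a minimal numerical sketch (not Zbit's actual circuit or chip design) of how a NOR Flash array can perform a convolution's multiply-accumulate step in place: kernel weights are stored as cell conductances, input activations are applied as word-line voltages, and by Ohm's law and Kirchhoff's current law the bit-line current is the dot product, so no weights are fetched from separate memory. The function names and encoding below are illustrative assumptions.

```python
# Hedged sketch of analog in-memory multiply-accumulate (MAC) for convolution.
# Assumption: kernel weights map to cell conductances G, inputs to word-line
# voltages V; the sensed bit-line current I_j = sum_i V_i * G_ij is the MAC result.
import numpy as np

def flash_array_mac(conductances, voltages):
    """Bit-line currents for one analog MAC step: I_j = sum_i V_i * G_ij."""
    return voltages @ conductances

def conv2d_via_cim(image, kernel):
    """2-D 'valid' convolution where each window's MAC is done by the array."""
    kh, kw = kernel.shape
    g = kernel.reshape(-1, 1)  # kernel weights programmed into one conductance column
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            # The sliding window is applied as word-line voltages;
            # the array returns the summed bit-line current.
            window = image[r:r + kh, c:c + kw].reshape(-1)
            out[r, c] = flash_array_mac(g, window)[0]
    return out
```

In a real analog implementation the weights would be quantized to the cells' programmable conductance levels and the currents digitized by ADCs; this sketch ignores those effects and only illustrates why the data movement disappears.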