By supporting the access of multiple memory words at the same time, Bit-line Computing (BC) architectures allow the parallel execution of bit-wise operations in-memory. At the array periphery, arithmetic operations are then derived with little additional overhead. Such a paradigm opens novel opportunities for Artificial Intelligence (AI) at the edge, thanks to the massive parallelism inherent in memory arrays and the extreme energy efficiency of computing in-situ, hence avoiding data transfers. Previous works have shown that BC brings disruptive efficiency gains when targeting AI workloads, a key metric in the context of emerging edge AI scenarios. This manuscript builds on these findings by proposing an end-to-end framework that leverages BC-specific optimizations to enable high parallelism and aggressive compression of AI models. Our approach is supported by a novel hardware module performing real-time decoding, as well as by new algorithms enabling BC-friendly model compression. Our hardware/software approach achieves 91% energy savings (under a 1% accuracy-degradation constraint) relative to state-of-the-art BC computing approaches.
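The derivation of arithmetic from in-memory bit-wise operations mentioned above can be illustrated with a minimal functional sketch. Activating two word-lines simultaneously makes each bit-line settle to the AND of the stored bits, while the complementary bit-line yields NOR (hence OR after inversion); periphery logic then composes these into addition. This is a simplified behavioral model for illustration only, not the manuscript's hardware design; the function names `bitline_and_or` and `bc_add` are hypothetical.

```python
def bitline_and_or(a, b):
    # Functional model of simultaneous word-line activation:
    # the bit-line gives AND of the two words, the complementary
    # bit-line gives NOR, whose inverse is OR.
    return a & b, a | b

def bc_add(a, b, width=8):
    # Periphery derives addition from repeated bit-wise results:
    # a + b == (a ^ b) + ((a & b) << 1), iterated until no carry
    # remains (carry-propagation done outside the array).
    mask = (1 << width) - 1
    while b:
        and_ab, or_ab = bitline_and_or(a, b)
        xor_ab = or_ab & ~and_ab        # XOR composed at the periphery
        a = xor_ab & mask               # partial sum
        b = (and_ab << 1) & mask        # carry, shifted into position
    return a
```

For example, `bc_add(23, 42)` iterates the AND/OR read-outs until the carry word is zero and returns the 8-bit sum, wrapping modulo 256 as a fixed-width datapath would.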