Bias-scalable analog computing is attractive for implementing machine learning (ML) processors with distinct power-performance specifications. For instance, ML implementations for server workloads focus on higher computational throughput for faster training, whereas ML implementations for edge devices focus on energy-efficient inference. In this paper, we demonstrate the implementation of bias-scalable approximate analog computing circuits using a generalization of the margin-propagation (MP) principle called shape-based analog computing (S-AC). The resulting S-AC core integrates several near-memory compute elements, including: (a) non-linear activation functions; (b) inner-product compute circuits; and (c) a mixed-signal compressive memory, all of which can be scaled for performance or power while preserving their functionality. Using measured results from prototypes fabricated in a 180-nm CMOS process, we demonstrate that the performance of the computing modules remains robust to transistor biasing and variations in temperature. We also demonstrate the effect of bias scaling on computational accuracy using a simple ML regression task.
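For readers unfamiliar with margin propagation, the following is a minimal software sketch of the MP constraint as it is commonly stated in the literature: given inputs x_i and a hyper-parameter gamma, MP solves for the scalar z satisfying sum_i max(x_i - z, 0) = gamma, which yields a piecewise-linear approximation to the log-sum-exp function. The function name and the bisection solver here are illustrative assumptions for software only; they are not the paper's circuit-level implementation, where the constraint is enforced directly by current-mode analog circuits.

```python
import numpy as np

def margin_propagation(x, gamma, iters=60):
    """Solve sum_i max(x_i - z, 0) = gamma for z by bisection.

    z is the MP approximation to log-sum-exp over the inputs x.
    Illustrative software sketch; analog hardware enforces the
    same constraint directly rather than solving it iteratively.
    """
    x = np.asarray(x, dtype=float)
    # The root is bracketed: at z = min(x) - gamma the residual is >= gamma,
    # and at z = max(x) the residual is 0 (requires gamma > 0).
    lo, hi = x.min() - gamma, x.max()
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if np.maximum(x - z, 0.0).sum() > gamma:
            lo = z  # residual too large -> raise z
        else:
            hi = z  # residual too small -> lower z
    return 0.5 * (lo + hi)

# Example: z tracks log(sum(exp(x))) up to an offset set by gamma.
x = np.log(np.array([1.0, 2.0, 4.0]))
print(margin_propagation(x, gamma=1.0), np.log(np.exp(x).sum()))
```

Because the constraint involves only thresholding and summation, it maps naturally onto analog primitives whose behavior is preserved across bias regimes, which is the property S-AC generalizes.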