Analog computing is attractive compared to digital computing due to its potential for achieving higher computational density and higher energy efficiency. However, unlike digital circuits, conventional analog computing circuits cannot be easily mapped across different process nodes due to differences in transistor biasing regimes, temperature variations, and limited dynamic range. In this work, we generalize the previously reported margin-propagation-based analog computing framework for designing novel \textit{shape-based analog computing} (S-AC) circuits that can be easily cross-mapped across different process nodes. Similar to digital designs, S-AC designs can also be scaled for precision, speed, and power. As a proof of concept, we show several examples of S-AC circuits implementing mathematical functions commonly used in machine learning (ML) architectures. Using circuit simulations, we demonstrate that the circuit input/output characteristics remain robust when mapped from a planar CMOS 180nm process to a FinFET 7nm process. Also, using benchmark datasets, we demonstrate that the classification accuracy of an S-AC-based neural network remains robust when mapped across the two processes and under changes in temperature.
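To make the margin-propagation framework referenced above concrete, the sketch below numerically evaluates the standard margin-propagation (MP) function: given inputs $x_1,\dots,x_n$ and a hyperparameter $\gamma > 0$, the MP output $z$ is the solution of $\sum_i \max(0, x_i - z) = \gamma$, which serves as a piecewise-linear approximation to log-sum-exp. This is a software illustration of the underlying math only, not the S-AC circuit implementation from the paper; the function name and bisection solver are our own choices.

```python
def margin_propagation(x, gamma, iters=60):
    """Solve sum_i max(0, x_i - z) = gamma for z via bisection.

    The left-hand side is continuous and non-increasing in z, so
    bisection on a bracketing interval converges to the unique root.
    """
    # At z = min(x) - gamma the sum is >= gamma; at z = max(x) it is 0.
    lo, hi = min(x) - gamma, max(x)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        s = sum(max(0.0, xi - mid) for xi in x)
        if s > gamma:
            lo = mid  # root lies to the right
        else:
            hi = mid  # root lies to the left
    return 0.5 * (lo + hi)


# Example: MP approximation over three inputs with gamma = 0.5
z = margin_propagation([1.0, 2.0, 3.0], gamma=0.5)
residual = sum(max(0.0, xi - z) for xi in [1.0, 2.0, 3.0])
print(z, residual)  # residual should be close to gamma
```

Because the MP function is built from max operations (rectification) and summation, it maps naturally onto current-mode analog primitives, which is what makes it attractive for bias-regime-agnostic circuit design.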