Conventional in-memory computing (IMC) architectures consist of analog memristive crossbars to accelerate matrix-vector multiplication (MVM) and digital functional units to realize nonlinear vector (NLV) operations in deep neural networks (DNNs). These designs, however, require energy-hungry signal-conversion units, which can dissipate more than 95% of the system's total power. In-Memory Analog Computing (IMAC) circuits, on the other hand, remove the need for signal converters by realizing both MVM and NLV operations in the analog domain, leading to significant energy savings. However, they are more susceptible to reliability challenges such as interconnect parasitics and noise. Here, we introduce a practical approach for deploying the large matrices in DNNs onto multiple smaller IMAC subarrays, which alleviates the impact of noise and parasitics while keeping the computation in the analog domain.
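To illustrate the partitioning idea, below is a minimal numerical sketch of how a large weight matrix can be decomposed into smaller tiles, each notionally mapped to one IMAC subarray, with partial results accumulated to recover the full MVM. The function name, tile sizes, and the digital accumulation shown here are illustrative assumptions; in the actual hardware the per-tile products and partial-sum combination would occur in the analog domain.

```python
import numpy as np

def tiled_mvm(W, x, tile_rows=64, tile_cols=64):
    """Compute y = W @ x by partitioning W into tiles, mimicking the
    mapping of a large DNN layer onto multiple smaller IMAC subarrays.
    Each tile stands in for one subarray's conductance matrix; partial
    products along the input dimension are accumulated per output block.
    (Illustrative sketch only; names and tile sizes are assumptions.)
    """
    m, n = W.shape
    y = np.zeros(m)
    for i in range(0, m, tile_rows):        # output (row) blocks
        for j in range(0, n, tile_cols):    # input (column) blocks
            tile = W[i:i + tile_rows, j:j + tile_cols]   # one subarray
            y[i:i + tile_rows] += tile @ x[j:j + tile_cols]  # partial MVM
    return y

# Usage: a 512x512 layer mapped onto 64x64 subarrays
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
x = rng.standard_normal(512)
assert np.allclose(tiled_mvm(W, x), W @ x)  # tiling preserves the result
```

Because the tiled result is mathematically identical to the full MVM, the tile size can be chosen to bound the interconnect length and accumulated noise within each subarray without changing the computation itself.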