Stochastic Computing (SC) is a computing paradigm that allows for the low-cost and low-power computation of various arithmetic operations using stochastic bit streams and digital logic. In contrast to conventional representation schemes used within the binary domain, the ordering of bits within a stream in the stochastic domain is inconsequential, and computation is usually non-deterministic. In this brief, we exploit the stochasticity during switching of probabilistic Conductive Bridging RAM (CBRAM) devices to efficiently generate stochastic bit streams in order to perform Deep Learning (DL) parameter optimization, reducing the size of Multiply and Accumulate (MAC) units by five orders of magnitude. We demonstrate that, using a 40-nm Complementary Metal Oxide Semiconductor (CMOS) process, our scalable architecture occupies 1.55mm$^2$ and consumes approximately 167$\mu$W when optimizing the parameters of a Convolutional Neural Network (CNN) while it is being trained for a character recognition task, observing no notable reduction in accuracy post-training.
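To illustrate the SC principle the abstract relies on, the following is a minimal sketch of unipolar stochastic multiplication: two values in $[0, 1]$ are encoded as independent bit streams whose bit probability equals the value, and a single AND gate then multiplies them. This is a generic SC example using a software pseudo-random generator, not the paper's CBRAM-based stream generator; the function names and stream length are illustrative assumptions.

```python
import random

def to_stream(p, n, rng):
    # Unipolar encoding: each bit is 1 with probability p.
    # Bit ordering is irrelevant; only the fraction of 1s carries the value.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b, n=100_000, seed=0):
    # Multiply two unipolar-encoded values with a single AND per bit pair.
    rng = random.Random(seed)
    stream_a = to_stream(a, n, rng)
    stream_b = to_stream(b, n, rng)
    product_stream = [x & y for x, y in zip(stream_a, stream_b)]
    # Decode: the mean of the output stream estimates a * b.
    return sum(product_stream) / n
```

Because the multiplier is one AND gate rather than a binary array multiplier, the area of a MAC built this way is dominated by the stream generators, which is the cost the CBRAM devices are used to eliminate. The estimate converges as $O(1/\sqrt{n})$, so precision is traded for stream length.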