The impact of device- and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifests as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies with algorithmic features, including network architecture, capacity, weight distribution, and the type of inter-layer connections. Techniques continue to emerge for efficiently training sparse neural networks, which may additionally be subject to activation sparsity, quantization, and memristive noise. In this paper, we present an extended Design Space Exploration (DSE) methodology to quantify the benefits and limitations of dense and sparse mapping schemes for a variety of network architectures. While sparse connectivity reduces power consumption and is often well suited to extracting localized features, its performance on tiled RRAM arrays may be more susceptible to noise than that of dense mapping schemes, owing to under-parameterization. Moreover, we present a case study quantifying and formalizing the trade-offs between typical non-idealities introduced into 1-Transistor-1-Resistor (1T1R) tiled memristive architectures and the size of modular crossbar tiles, using the CIFAR-10 dataset.
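The intuition behind the noise-susceptibility claim can be illustrated with a minimal sketch (not the paper's methodology): when additive device noise of fixed magnitude is applied to a crossbar-mapped weight matrix, a pruned (sparse) matrix of smaller norm incurs a larger *relative* perturbation than an equally sized dense one. The noise model, sparsity level, and matrix shapes below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_device_noise(weights, sigma, rng):
    # Illustrative non-ideality model: i.i.d. Gaussian perturbation of
    # the programmed conductance values (an assumption, not the paper's model).
    return weights + rng.normal(0.0, sigma, weights.shape)

def relative_error(weights, sigma, rng):
    # Relative distortion of the mapped weights under device noise.
    noisy = add_device_noise(weights, sigma, rng)
    return np.linalg.norm(noisy - weights) / np.linalg.norm(weights)

# A dense 64x64 weight tile, and a sparse variant with ~75% of
# connections pruned (under-parameterized, smaller overall norm).
dense = rng.normal(0.0, 0.1, (64, 64))
sparse = dense * (rng.random((64, 64)) < 0.25)

# The same absolute noise level perturbs the sparse mapping
# proportionally more than the dense one.
print(relative_error(dense, 0.05, rng))
print(relative_error(sparse, 0.05, rng))
```

Because the noise magnitude is set by the device, not by the network, the sparse tile's smaller weight norm yields a larger relative error, consistent with the abstract's observation.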