This paper introduces a novel simulation tool for analyzing and training neural network models tailored to compute-in-memory hardware. The tool leverages physics-based device models so that neural network models and their parameters can be designed with greater hardware accuracy. The initial study focuses on modeling a CMOS-based floating-gate transistor and a memristor using measurement data from a fabricated device. The tool also incorporates hardware constraints, such as the dynamic range of data converters, and allows users to specify circuit-level constraints. A case study using the MNIST dataset and the LeNet-5 architecture demonstrates the tool's ability to estimate area, power, and accuracy. The results highlight the potential of the proposed tool to optimize neural network models for compute-in-memory hardware.
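As a rough illustration of what "incorporating the dynamic range of data converters" can mean in such a simulation flow, the minimal NumPy sketch below clips and quantizes an ideal matrix-vector product to an ADC's resolution and full-scale range. The paper does not specify the tool's API, so the function name, bit width, and full-scale value here are hypothetical, not the tool's actual interface.

```python
import numpy as np

def quantize_to_converter(x, n_bits=8, full_scale=1.0):
    """Clip a signal to a data converter's dynamic range and round it to the
    converter's resolution (uniform quantizer). This loosely mimics how an ADC
    at a compute-in-memory column output limits the precision of the
    accumulated analog result. Parameters are illustrative assumptions."""
    levels = 2 ** n_bits
    step = 2 * full_scale / levels
    x_clipped = np.clip(x, -full_scale, full_scale - step)
    return np.round(x_clipped / step) * step

# Example: compare an "ideal" matrix-vector product with the value the
# digital side would see after an 8-bit ADC with a +/-1.0 full-scale range.
rng = np.random.default_rng(0)
weights = rng.uniform(-0.05, 0.05, size=(16, 64))  # hypothetical conductance-mapped weights
inputs = rng.uniform(0.0, 1.0, size=64)            # hypothetical DAC-driven input levels

ideal = weights @ inputs
observed = quantize_to_converter(ideal, n_bits=8, full_scale=1.0)
print("max quantization error:", np.max(np.abs(ideal - observed)))
```

Applying such a constraint inside the training loop (rather than only at inference) is one way a hardware-aware tool could steer parameters toward values that survive the converters' limited precision; the paper's actual mechanism may differ.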