This paper reports a comprehensive study on the applicability of ultra-scaled ferroelectric FinFETs with a 6 nm thick hafnium zirconium oxide layer for neuromorphic computing in the presence of process variation, flicker noise, and device aging. A detailed study has been conducted on the impact of these variations on the inference accuracy of pre-trained neural networks consisting of analog, quaternary (2-bit/cell), and binary synapses. A pre-trained neural network with 97.5% inference accuracy on the MNIST dataset has been adopted as the baseline. Process variation, flicker noise, and device aging have been characterized, and a statistical model has been developed to capture all of these effects during neural network simulation. Extrapolated retention above 10 years has been achieved with the binary read-out procedure. We demonstrate that the impact of (1) retention degradation due to oxide thickness scaling, (2) process variation, and (3) flicker noise can be abated in ferroelectric FinFET based binary neural networks, which exhibit superior performance over quaternary and analog neural networks amidst all variations. The performance of a neural network is the result of the coalesced performance of device, architecture, and algorithm. This research corroborates the applicability of deeply scaled ferroelectric FinFETs for non-von Neumann computing with a proper combination of architecture and algorithm.
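As a rough illustration of the kind of evaluation described above (not the authors' actual simulation framework), the following Python sketch injects device-level variation into quantized synaptic weights of a toy two-layer network and reports inference accuracy for analog, quaternary, and binary synapses. All parameters, layer sizes, and noise magnitudes are illustrative assumptions, and the random weights/data merely stand in for the MNIST-trained baseline.

```python
# Hedged sketch: emulate inference-accuracy evaluation of quantized synapses
# under device variation. Sigma values and network sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits):
    """Uniformly quantize weights to 2**bits levels in [-1, 1] (sign-only if bits == 1)."""
    if bits == 1:
        return np.sign(w)
    levels = 2 ** bits - 1
    return np.round((w + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

def perturb(w, sigma_pv, sigma_noise):
    """Apply multiplicative device-to-device variation and additive read noise."""
    pv = rng.normal(1.0, sigma_pv, size=w.shape)        # process-variation-like term
    noise = rng.normal(0.0, sigma_noise, size=w.shape)  # flicker-noise-like read term
    return w * pv + noise

def accuracy(w1, w2, x, y):
    """Two-layer MLP forward pass; returns classification accuracy."""
    h = np.maximum(x @ w1, 0.0)  # ReLU hidden layer
    logits = h @ w2
    return np.mean(np.argmax(logits, axis=1) == y)

# Toy stand-ins for the pre-trained weights and MNIST test data.
w1 = rng.normal(0, 0.5, size=(784, 64))
w2 = rng.normal(0, 0.5, size=(64, 10))
x = rng.normal(0, 1, size=(1000, 784))
y = rng.integers(0, 10, size=1000)

for bits, label in [(None, "analog"), (2, "quaternary"), (1, "binary")]:
    q1 = w1 if bits is None else quantize(w1, bits)
    q2 = w2 if bits is None else quantize(w2, bits)
    acc = accuracy(perturb(q1, 0.05, 0.02), perturb(q2, 0.05, 0.02), x, y)
    print(f"{label:10s} synapse, accuracy under variation: {acc:.3f}")
```

In such a setup, repeating the evaluation over many Monte Carlo draws of the perturbation terms would give a distribution of inference accuracies for each synapse precision, which is the general style of comparison the abstract refers to.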