Neural networks have revolutionized the field of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price; the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is by developing low-energy neuromorphic hardware that directly supports the algorithmic requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic hardware. Here we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm that is well suited to computation with spintronic devices, since they can provide the required non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues that must be addressed before such devices become widely used.
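To illustrate why reservoir computing has such a simple training algorithm, the following is a minimal echo state network sketch in Python. All dimensions, scaling factors, and the toy task are illustrative assumptions of ours and are not taken from the review; the key point is that only the linear readout is trained, while the recurrent "reservoir" (the part a spintronic device could physically implement) stays fixed and merely supplies non-linearity and memory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper)
n_inputs, n_reservoir = 1, 200

# Fixed random input and recurrent weights: these are never trained,
# which is what makes reservoir computing cheap to train.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.normal(0, 1, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for fading memory

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence; tanh supplies the
    non-linearity, the recurrent state supplies the (fading) memory."""
    x = np.zeros(n_reservoir)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task (assumed for illustration): one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
X = run_reservoir(u[:-1])  # collected reservoir states
y = u[1:]                  # targets

# Training reduces to a single ridge regression on the readout weights.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

In a spintronic realization, the fixed random reservoir above would be replaced by the physical dynamics of the device, and only the inexpensive linear readout would be computed conventionally.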