Deep learning, a multi-layered neural network approach inspired by the brain, has revolutionized machine learning. One of its key enablers has been backpropagation, an algorithm that computes the gradient of a loss function with respect to the weights of the neural network, used in combination with gradient descent. However, the implementation of deep learning in digital computers is intrinsically wasteful, with energy consumption becoming prohibitively high for many applications. This has stimulated the development of specialized hardware, ranging from neuromorphic CMOS integrated circuits and integrated photonic tensor cores to unconventional, material-based computing systems. The learning process in these material systems, carried out, e.g., by artificial evolution or surrogate neural network modelling, is still complicated and time-consuming. Here, we demonstrate an efficient and accurate homodyne gradient extraction method for performing gradient descent on the loss function directly in the material system. We demonstrate the method in our recently developed dopant network processing units, where we readily realize all Boolean gates. This shows that gradient descent can in principle be fully implemented in materio using simple electronics, opening the way to autonomously learning material systems.
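The core idea of homodyne gradient extraction can be illustrated numerically: each tunable parameter is modulated with a small sinusoid at its own frequency, and lock-in detection of the output at each modulation frequency recovers the corresponding partial derivative of the loss. The sketch below is an illustration under simplifying assumptions, not the paper's hardware protocol; `device_output` is a hypothetical analytic stand-in for the measured response of a dopant network processing unit, and the amplitudes and frequencies are chosen for convenience.

```python
import numpy as np

# Hypothetical stand-in for the material's input-output map. In the real
# experiment this would be a voltage measurement on the physical device.
def device_output(v):
    return np.tanh(v[0]) * v[1] + 0.5 * v[1] ** 2

def homodyne_gradient(f, v, freqs, amp=1e-3, T=1.0, n=10000):
    """Estimate df/dv_i by perturbing each parameter v_i with a small
    sinusoid at its own frequency f_i and lock-in detecting the output.

    freqs must contain distinct frequencies with an integer number of
    cycles over T, so the sinusoidal references are orthogonal and the
    contributions of the different parameters separate cleanly.
    """
    t = np.linspace(0, T, n, endpoint=False)
    # Modulate all parameters simultaneously, each at its own frequency.
    v_t = v[:, None] + amp * np.sin(2 * np.pi * freqs[:, None] * t)
    y = np.array([f(v_t[:, k]) for k in range(n)])
    # Lock-in detection: multiply by each reference and time-average.
    # mean(sin^2) = 1/2, so the factor 2/amp converts the detected
    # modulation depth into the partial derivative.
    return np.array([2.0 / amp * np.mean(y * np.sin(2 * np.pi * fi * t))
                     for fi in freqs])

v = np.array([0.3, -0.7])        # operating point of the "device"
freqs = np.array([5.0, 7.0])     # integer cycles over T, no two equal
g = homodyne_gradient(device_output, v, freqs)
```

Because the perturbation amplitude is small, the device responds approximately linearly to each modulation, and the lock-in estimate matches the analytic gradient closely; the estimated gradient can then drive an ordinary gradient-descent update of the control voltages.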