In a network of biological neurons, synapses update individually using local information, allowing for entirely decentralized learning. In contrast, elements of an artificial neural network (ANN) are typically updated simultaneously using a central processor. Here we investigate the feasibility and effect of desynchronous learning in a recently introduced decentralized, physics-driven learning network. We show that desynchronizing the learning process does not degrade performance on a variety of tasks in an idealized simulation. In experiments, desynchronization actually improves performance by allowing the system to better explore the discretized state space of solutions. We draw an analogy between desynchronization and mini-batching in stochastic gradient descent, and show that they have similar effects on the learning process. Desynchronizing the learning process establishes physics-driven learning networks as truly fully distributed learning machines, promoting better performance and scalability in deployment.
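The analogy between desynchronization and mini-batching can be illustrated with a minimal sketch. This is not the paper's physics-driven (coupled-learning) update rule; it is a toy gradient descent on an assumed quadratic loss, in which the desynchronized variant lets only a random subset of parameters apply their local update at each step, much as a desynchronized network updates only some of its edges at any given time. All names and parameter values below are illustrative assumptions.

```python
# Hedged illustration, not the paper's learning rule: synchronous updates
# (all elements at once) versus desynchronized updates (a random subset
# per step), analogous to mini-batching in stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n_params = 50
target = rng.normal(size=n_params)      # minimum of an assumed quadratic loss
w_sync = np.zeros(n_params)             # synchronously updated parameters
w_desync = np.zeros(n_params)           # desynchronously updated parameters
lr = 0.1                                # learning rate (illustrative)
update_fraction = 0.2                   # fraction of elements updated per step

for step in range(2000):
    # Synchronous rule: every element applies its local gradient together.
    grad_sync = w_sync - target         # gradient of 0.5 * ||w - target||^2
    w_sync -= lr * grad_sync

    # Desynchronized rule: only a random subset applies its local gradient now.
    grad_desync = w_desync - target
    mask = rng.random(n_params) < update_fraction
    w_desync[mask] -= lr * grad_desync[mask]

loss = lambda w: 0.5 * np.sum((w - target) ** 2)
print(f"synchronous loss:    {loss(w_sync):.3e}")
print(f"desynchronized loss: {loss(w_desync):.3e}")
```

On this toy loss both schedules reach the same solution, consistent with the abstract's claim that desynchronizing the updates need not degrade performance in an idealized setting.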