In a network of neurons, synapses update individually using local information, allowing for entirely decentralized learning. In contrast, elements in an artificial neural network (ANN) are typically updated simultaneously using a central processor. Here we investigate the feasibility and effect of desynchronous learning in a recently introduced decentralized, physics-driven learning network. We show that desynchronizing the learning process does not degrade performance for a variety of tasks in an idealized simulation. In experiment, desynchronization actually improves performance by allowing the system to better explore the discretized state space of solutions. We draw an analogy between desynchronization and mini-batching in stochastic gradient descent, and show that they have similar effects on the learning process. Desynchronizing the learning process establishes physics-driven learning networks as truly fully distributed learning machines, promoting better performance and scalability in deployment.
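To make the stated analogy concrete, the following is a minimal illustrative sketch (not the authors' code): each learning element updates independently with some probability per step, which resembles sampling a mini-batch of updates in stochastic gradient descent, whereas synchronous learning updates every element at once. The names (`n_params`, `p_update`, the quadratic `loss`) are assumptions chosen only for illustration.

```python
# Toy comparison of synchronous vs. desynchronous parameter updates.
# This is an illustrative sketch, not the physics-driven learning rule itself.
import numpy as np

rng = np.random.default_rng(0)

n_params = 50                       # stand-in for the network's learning elements
target = rng.normal(size=n_params)  # arbitrary target defining the task
w_sync = np.zeros(n_params)         # synchronously updated parameters
w_desync = np.zeros(n_params)       # desynchronously updated parameters
lr, p_update, steps = 0.1, 0.2, 2000

def loss(w):
    """Simple quadratic cost standing in for the task error."""
    return 0.5 * np.sum((w - target) ** 2)

for _ in range(steps):
    # Synchronous: every element updates at every step.
    w_sync -= lr * (w_sync - target)

    # Desynchronous: a random subset of elements updates each step,
    # analogous to drawing a mini-batch in stochastic gradient descent.
    mask = rng.random(n_params) < p_update
    w_desync -= lr * mask * (w_desync - target)

print(f"synchronous   loss: {loss(w_sync):.3e}")
print(f"desynchronous loss: {loss(w_desync):.3e}")
```

Under these assumptions both schemes converge to the same solution; the desynchronous variant simply takes a noisier, slower path, mirroring the abstract's claim that desynchronization does not degrade learning.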