Many learning algorithms used as normative models in neuroscience, or as candidate approaches for learning on neuromorphic chips, learn by contrasting one set of network states with another. These Contrastive Learning (CL) algorithms are traditionally implemented with rigid, temporally non-local, and periodic learning dynamics that could limit the range of physical systems capable of harnessing CL. In this study, we build on recent work exploring how CL might be implemented by biological or neuromorphic systems, and show that this form of learning can be made temporally local and can still function even when many of the dynamical requirements of standard training procedures are relaxed. Through a set of general theorems corroborated by numerical experiments across several CL models, our results provide theoretical foundations for the study and development of CL methods for biological and neuromorphic neural networks.
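To make the "contrasting one set of network states with another" concrete, the following is a minimal sketch of a contrastive-Hebbian weight update in the style the abstract describes: the network relaxes once with outputs free and once with outputs clamped to a target, and the weight change is the difference of the two state correlations. All details here (network size, `tanh` fixed-point dynamics, learning rate, which units are clamped) are illustrative assumptions, not the paper's specific model.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                           # assumed number of units
W = rng.normal(0, 0.1, (n, n))
W = 0.5 * (W + W.T)             # symmetric recurrent weights
np.fill_diagonal(W, 0.0)        # no self-connections
eta = 0.05                      # assumed learning rate

def relax(W, s, clamp_mask, steps=50):
    """Iterate simple fixed-point dynamics; clamped units stay fixed."""
    s = s.copy()
    for _ in range(steps):
        new = np.tanh(W @ s)
        s = np.where(clamp_mask, s, new)
    return s

x = rng.normal(size=n)

# Free phase: no units clamped.
s_free = relax(W, x, clamp_mask=np.zeros(n, dtype=bool))

# Clamped phase: pin the last two units to hypothetical target values.
s_init = x.copy()
s_init[-2:] = np.array([1.0, -1.0])
mask = np.zeros(n, dtype=bool)
mask[-2:] = True
s_clamped = relax(W, s_init, clamp_mask=mask)

# Contrastive update: Hebbian in the clamped phase, anti-Hebbian in the free phase.
dW = eta * (np.outer(s_clamped, s_clamped) - np.outer(s_free, s_free))
np.fill_diagonal(dW, 0.0)
W += dW
```

Note the temporal non-locality the abstract refers to: this update needs the correlations from both phases simultaneously, even though the two relaxations happen at different times, which is one of the rigidities the paper's relaxed dynamics address.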