Understanding how the brain learns can be advanced by investigating biologically plausible learning rules -- those that obey known biological constraints, such as locality, so that they can serve as valid models of learning in the brain. Yet many studies overlook the role of architecture and initial synaptic connectivity in such models. Building on insights from deep learning, where initialization profoundly affects learning dynamics, we ask a key but underexplored neuroscience question: how does initial synaptic connectivity shape learning in neural circuits? To investigate this, we train recurrent neural networks (RNNs), which are widely used for brain modeling, with biologically plausible learning rules. Our findings reveal that initial weight magnitude strongly influences the learning performance of such rules, mirroring effects previously observed when training with backpropagation through time (BPTT). By examining the maximum Lyapunov exponent before and after training, we uncover the greater demands that certain initialization schemes place on training to achieve the desired information propagation properties. Consequently, we extend the recently proposed gradient flossing method, which regularizes the Lyapunov exponents, to biologically plausible learning and observe an improvement in learning performance. To our knowledge, we are the first to examine the impact of initialization on biologically plausible learning rules for RNNs and to subsequently propose a biologically plausible remedy. Such an investigation can yield neuroscientific predictions about the influence of initial connectivity on learning dynamics and performance, and can guide neuromorphic design.
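For readers unfamiliar with the diagnostic used above, the following is a minimal illustrative sketch (not the authors' code) of how the maximum Lyapunov exponent of a vanilla RNN can be estimated by renormalizing a tangent vector along a trajectory (Benettin-style method). The network form `h_{t+1} = tanh(W h_t + b)`, the gain `g`, and all parameter values are assumptions chosen only for illustration.

```python
# Hypothetical sketch: estimate the maximum Lyapunov exponent of a vanilla RNN
# h_{t+1} = tanh(W h_t + b). The one-step Jacobian along the trajectory is
# J_t = diag(1 - h_{t+1}^2) @ W; the exponent is the average log growth rate
# of a tangent vector repeatedly pushed through J_t and renormalized.
import numpy as np

def max_lyapunov_exponent(W, b, h0, n_steps=5000, n_discard=500, seed=0):
    rng = np.random.default_rng(seed)
    h = h0.copy()
    v = rng.standard_normal(h.shape)
    v /= np.linalg.norm(v)                 # random unit tangent vector
    log_growth, counted = 0.0, 0
    for t in range(n_steps):
        h_next = np.tanh(W @ h + b)
        J = (1.0 - h_next ** 2)[:, None] * W   # Jacobian of the tanh update
        v = J @ v
        norm = np.linalg.norm(v)
        v /= norm                          # renormalize to avoid over/underflow
        if t >= n_discard:                 # skip the initial transient
            log_growth += np.log(norm)
            counted += 1
        h = h_next
    return log_growth / counted            # average log expansion per step

if __name__ == "__main__":
    n, g = 200, 1.5                         # network size and init gain (assumed)
    rng = np.random.default_rng(1)
    W = g * rng.standard_normal((n, n)) / np.sqrt(n)
    b = np.zeros(n)
    h0 = 0.1 * rng.standard_normal(n)
    print("estimated max Lyapunov exponent:", max_lyapunov_exponent(W, b, h0))
```

A negative exponent indicates contracting dynamics and vanishing error signals, while a large positive exponent indicates chaotic expansion; sweeping the initialization gain `g` in a sketch like this shows how initial weight magnitude moves the network between these regimes before any training takes place.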