State-of-the-art backpropagation-free learning methods employ local error feedback to direct iterative optimisation via gradient descent. Here, we examine the more restrictive setting where retrograde communication from neuronal outputs is unavailable for pre-synaptic weight optimisation. We propose Forward Projection (FP), a randomised closed-form training method requiring only a single forward pass over the dataset without retrograde communication. FP generates target values for pre-activation membrane potentials through randomised nonlinear projections of pre-synaptic inputs and labels. Local loss functions are optimised using closed-form regression without feedback from downstream layers. A key advantage is interpretability: membrane potentials in FP-trained networks encode information interpretable layer-wise as label predictions. Across several biomedical datasets, FP achieves generalisation comparable to gradient descent-based local learning methods while requiring only a single forward propagation step, yielding significant training speedup. In few-shot learning tasks, FP produces more generalisable models than backpropagation-optimised alternatives, with local interpretation functions successfully identifying clinically salient diagnostic features.
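The abstract's core recipe — form targets for a layer's pre-activation potentials by a fixed randomised nonlinear projection of the pre-synaptic inputs and labels, then fit the layer's weights by closed-form local regression with no downstream feedback — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the choice of `tanh` as the nonlinearity, the Gaussian projection matrix, one-hot label encoding, and the ridge penalty `lam` are all placeholders for details the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d input features, c classes, h hidden units.
n, d, c, h = 200, 10, 3, 32
X = rng.normal(size=(n, d))                   # pre-synaptic inputs
Y = np.eye(c)[rng.integers(0, c, size=n)]     # one-hot labels (assumed encoding)

# Step 1 (assumed form): targets for pre-activation membrane potentials
# via a fixed random nonlinear projection of inputs concatenated with labels.
P = rng.normal(size=(d + c, h))               # random projection matrix
T = np.tanh(np.hstack([X, Y]) @ P)            # target pre-activations

# Step 2: closed-form local ridge regression of inputs onto those targets.
# No gradient descent, no feedback from downstream layers.
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

# Forward pass through the trained layer; its output would feed the next
# layer, which is fitted the same way in a single sweep over the data.
H = np.tanh(X @ W)
```

Because each layer's regression uses only its own inputs and the labels, the whole network trains in one forward pass, which is the source of the speedup the abstract reports.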

