Back-propagation (BP) is costly to implement in hardware and implausible as a learning rule for the brain. However, BP is surprisingly successful at explaining neuronal activity patterns found along the cortical processing stream. We propose CLAPP, a locally implementable, unsupervised learning algorithm that minimizes a simple, layer-specific loss function and therefore does not need to back-propagate error signals. The weight updates depend only on state variables of the pre- and post-synaptic neurons and a layer-wide third factor. Networks trained with CLAPP build deep hierarchical representations of images and speech.
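To make the structure of such an update concrete, the NumPy sketch below shows one way a layer-local, three-factor rule could look: a layer predicts its own future activity through within-layer prediction weights, a hinge loss compares the predictive score against a layer-wide label y (+1 for a temporally adjacent "positive" pair, -1 for a shuffled "negative" pair), and every weight change uses only pre-/post-synaptic activity plus the broadcast y. All names, dimensions, and the ReLU nonlinearity are illustrative assumptions, not the exact rule from the paper, which truncates gradient terms differently to keep the update strictly local.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and weights for a single layer (assumed, not from the paper).
n_in, n_out = 64, 32
W = rng.normal(scale=0.1, size=(n_out, n_in))        # feed-forward weights
W_pred = rng.normal(scale=0.1, size=(n_out, n_out))  # within-layer prediction weights
eta = 1e-3                                           # learning rate


def clapp_like_update(x_t, x_next, positive):
    """One hedged sketch of a layer-local, three-factor update.

    The layer-specific hinge loss is L = max(0, 1 - y * z_next^T W_pred z_t),
    where y = +1 for a temporally adjacent pair and y = -1 for a negative
    (e.g. shuffled) pair. If the hinge is active, y acts as a layer-wide
    third factor gating Hebbian-style updates built from local activities.
    """
    global W, W_pred
    z_t = np.maximum(W @ x_t, 0.0)        # post-synaptic activity at time t
    z_next = np.maximum(W @ x_next, 0.0)  # activity at the next time step
    y = 1.0 if positive else -1.0
    score = z_next @ (W_pred @ z_t)       # predictive score between the two states
    if 1.0 - y * score > 0.0:             # hinge active: broadcast third factor y
        # Both updates use only pre-/post-synaptic variables and the scalar y.
        W_pred += eta * y * np.outer(z_next, z_t)
        W += eta * y * (np.outer((W_pred @ z_t) * (z_next > 0), x_next)
                        + np.outer((W_pred.T @ z_next) * (z_t > 0), x_t))
    return score


# Toy usage: adjacent frames are pulled together, a shuffled pair is pushed apart.
x0, x1 = rng.normal(size=n_in), rng.normal(size=n_in)
clapp_like_update(x0, x1, positive=True)
clapp_like_update(x0, rng.normal(size=n_in), positive=False)
```

Because the loss is defined per layer, no error signal crosses layer boundaries: stacking such layers trains each one on its own contrastive prediction objective.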