There is currently a debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP). To better mimic the brain, training a network $\textit{one layer at a time}$ with only a single forward pass has been proposed as an alternative that bypasses BP; we refer to these networks as "layer-wise" networks. We continue the work on layer-wise networks by answering two outstanding questions. First, $\textit{do they have a closed-form solution?}$ Second, $\textit{how do we know when to stop adding more layers?}$ This work proves that the Kernel Mean Embedding is the closed-form weight that achieves the network's global optimum while driving these networks to converge towards a highly desirable kernel for classification; we call it the $\textit{Neural Indicator Kernel}$.
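For reference, the empirical Kernel Mean Embedding the abstract refers to is a standard quantity: given samples $\{x_i\}_{i=1}^{n}$, a feature map $\phi$, and its kernel $k(x, x') = \langle \phi(x), \phi(x') \rangle$, it is the sample mean in feature space. The sketch below states this standard definition; the symbols $\phi$, $k$, and $\hat{\mu}$ are notational assumptions, not fixed by the abstract itself.

$$
\hat{\mu} \;=\; \frac{1}{n}\sum_{i=1}^{n}\phi(x_i),
\qquad
\langle \hat{\mu},\, \phi(x) \rangle \;=\; \frac{1}{n}\sum_{i=1}^{n} k(x_i, x).
$$

Under the paper's claim, using this closed-form quantity as a layer's weight attains the layer-wise objective's global optimum without any gradient-based training.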