There is an ongoing debate within the neuroscience community over whether the brain is likely to perform backpropagation (BP). To better mimic the brain, training a network \textit{one layer at a time} with only a ``single forward pass'' has been proposed as an alternative that bypasses BP; we refer to these as ``layer-wise'' networks. We continue the work on layer-wise networks by answering two outstanding questions. First, \textit{do they have a closed-form solution?} Second, \textit{how do we know when to stop adding more layers?} This work proves that the Kernel Mean Embedding is the closed-form weight that achieves the network's global optimum while driving these networks to converge towards a highly desirable kernel for classification, which we call the \textit{Neural Indicator Kernel}.
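For intuition, a minimal sketch of the closed-form claim (the notation here is our own and is not fixed by the abstract): given a feature map $\phi$ induced by a kernel and $n_c$ training samples $x_i$ with label $y_i = c$, the kernel mean embedding of class $c$ is
\[
  \hat{\mu}_c \;=\; \frac{1}{n_c} \sum_{i \,:\, y_i = c} \phi(x_i) ,
\]
i.e., the average of the class's samples in feature space; the result above states that layer weights of this form attain the network's global optimum.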