Deep neural networks are among the most widely applied technologies of this decade. Despite their fruitful applications, the mechanism behind them remains to be elucidated. We study the learning process with a very simple supervised encoding problem. As a result, we find a simple law in the training response, which describes the neural tangent kernel. The response consists of a power-law-like decay multiplied by a simple response kernel. With this law, we can construct a simple mean-field dynamical model that explains how the network learns. During learning, the input space is split into subspaces through competition between the kernels. Through iterated splits and aging, the network gains complexity but finally loses its plasticity.
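The claimed factorized form of the training response can be sketched numerically; a minimal illustration, assuming a power-law exponent and a Gaussian response kernel purely for demonstration (neither value is taken from the paper):

```python
# Hypothetical sketch of the stated training-response law:
#   response(t, dx) ~ t^(-alpha) * K(dx)
# i.e. a power-law-like decay in training time t multiplied by a
# fixed response kernel K over input distance dx.
# alpha and the Gaussian kernel width are illustrative assumptions.
import numpy as np

def response(t, dx, alpha=0.5, width=1.0):
    """Power-law decay in time t times a Gaussian response kernel in dx."""
    decay = t ** (-alpha)
    kernel = np.exp(-(dx ** 2) / (2.0 * width ** 2))
    return decay * kernel

# Because the response factorizes, the profile over dx keeps the same
# shape at every training time; only its overall amplitude decays.
dx = np.linspace(0.0, 3.0, 5)
r_early = response(10.0, dx)
r_late = response(100.0, dx)
print(np.allclose(r_early / r_early[0], r_late / r_late[0]))  # True
```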