As its width tends to infinity, a deep neural network's behavior under gradient descent can become simplified and predictable (e.g. given by the Neural Tangent Kernel (NTK)), if it is parametrized appropriately (e.g. the NTK parametrization). However, we show that the standard and NTK parametrizations of a neural network do not admit infinite-width limits that can learn features, which is crucial for pretraining and transfer learning such as with BERT. We propose simple modifications to the standard parametrization to allow for feature learning in the limit. Using the *Tensor Programs* technique, we derive explicit formulas for such limits. On Word2Vec and few-shot learning on Omniglot via MAML, two canonical tasks that rely crucially on feature learning, we compute these limits exactly. We find that they outperform both NTK baselines and finite-width networks, with the latter approaching the infinite-width feature learning performance as width increases. More generally, we classify a natural space of neural network parametrizations that generalizes standard, NTK, and Mean Field parametrizations. We show 1) any parametrization in this space either admits feature learning or has infinite-width training dynamics given by kernel gradient descent, but not both; 2) any such infinite-width limit can be computed using the Tensor Programs technique. Code for our experiments can be found at github.com/edwardjhu/TP4.
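As a minimal sketch of the "natural space of parametrizations" referenced above, the family can be written by scaling each weight matrix, its initialization, and the learning rate by powers of the width $n$. The symbols $a_l$, $b_l$, $c$, $\eta$, and $w^l$ follow the paper's abc-notation; the block below is an illustrative summary of that form, not the paper's full definition or its exact exponent tables.

```latex
% Sketch of the abc-parametrization family (notation assumed from the paper body).
% For a network of width n, the l-th weight matrix W^l is carried by trainable
% parameters w^l scaled by a power of n, with a width-scaled SGD step size:
\begin{align}
  W^l &= n^{-a_l}\, w^l, \qquad
  w^l_{\alpha\beta} \sim \mathcal{N}\!\left(0,\; n^{-2 b_l}\right) \text{ at initialization}, \\
  \text{learning rate} &= \eta\, n^{-c}.
\end{align}
% Different choices of the exponents (a_l, b_l, c) recover the standard, NTK, and
% Mean Field parametrizations; the dichotomy stated in the abstract classifies
% which choices yield a feature-learning limit and which yield a kernel
% (kernel-gradient-descent) limit.
```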