Recent research shows that the dynamics of an infinitely wide neural network (NN) trained by gradient descent can be characterized by the Neural Tangent Kernel (NTK) \citep{jacot2018neural}. Under the squared loss, the infinite-width NN trained by gradient descent with an infinitesimally small learning rate is equivalent to kernel regression with the NTK \citep{arora2019exact}. However, this equivalence is currently known only for ridge regression \citep{arora2019harnessing}, while the equivalence between NNs and other kernel machines (KMs), e.g., the support vector machine (SVM), remains unknown. In this work, we establish the equivalence between NNs and SVMs; specifically, between the infinitely wide NN trained by the soft margin loss and the standard soft margin SVM with the NTK trained by subgradient descent. Our main theoretical results establish equivalences, with finite-width bounds, between NNs and a broad family of $\ell_2$ regularized KMs, which prior work cannot handle, and show that every finite-width NN trained by such a regularized loss function is approximately a KM. Furthermore, we demonstrate that our theory enables three practical applications: (i) a \textit{non-vacuous} generalization bound for the NN via the corresponding KM; (ii) a \textit{non-trivial} robustness certificate for the infinite-width NN (where existing robustness verification methods provide vacuous bounds); and (iii) infinite-width NNs that are intrinsically more robust than those obtained from previous kernel regression. Our code for the experiments is available at \url{https://github.com/leslie-CH/equiv-nn-svm}.
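For reference, and in notation of our own choosing rather than one fixed by the abstract, the kernel-machine side of the claimed equivalence is the standard soft margin SVM posed in the reproducing kernel Hilbert space $\mathcal{H}_{\Theta}$ of the NTK $\Theta$:
\begin{equation*}
\min_{f \in \mathcal{H}_{\Theta}} \; \frac{\lambda}{2}\,\|f\|_{\mathcal{H}_{\Theta}}^{2} \;+\; \frac{1}{n}\sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i f(x_i)\bigr),
\qquad
f(x) \;=\; \sum_{j=1}^{n} \alpha_j\, \Theta(x, x_j),
\end{equation*}
where the right-hand expression is the representer-theorem form over which subgradient descent operates. This is only the generic soft margin SVM objective; the precise parameterization (e.g., the role of the regularization weight $\lambda$) studied in the paper may differ.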