Group equivariant convolutional neural networks (G-CNNs) are generalizations of convolutional neural networks (CNNs) that excel in a wide range of technical applications by explicitly encoding symmetries, such as rotations and permutations, in their architectures. Although the success of G-CNNs is driven by their \emph{explicit} symmetry bias, a recent line of work has proposed that the \emph{implicit} bias of training algorithms on particular architectures is key to understanding generalization for overparameterized neural networks. In this context, we show that $L$-layer full-width linear G-CNNs trained via gradient descent for binary classification converge to solutions with low-rank Fourier matrix coefficients, regularized by the $2/L$-Schatten matrix norm. Our work strictly generalizes previous analysis of the implicit bias of linear CNNs to linear G-CNNs over all finite groups, including the challenging setting of non-commutative groups (such as permutations), as well as band-limited G-CNNs over infinite groups. We validate our theorems via experiments on a variety of groups and empirically explore more realistic nonlinear networks, which locally capture similar regularization patterns. Finally, we provide intuitive interpretations of our Fourier-space implicit regularization results in real space via uncertainty principles.
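For concreteness, the following is a minimal sketch of the regularizer named above, assuming the standard definition of the Schatten quasi-norm; the notation $\hat{w}(\rho)$ for the Fourier coefficient of the learned linear predictor at irreducible representation $\rho$ is illustrative rather than fixed by this abstract. For a matrix $A$ with singular values $\sigma_1(A) \geq \sigma_2(A) \geq \cdots$, the Schatten-$p$ quasi-norm at $p = 2/L$ is
\begin{equation*}
  \|A\|_{S_{2/L}} = \Bigl(\sum_{i} \sigma_i(A)^{2/L}\Bigr)^{L/2},
\end{equation*}
so penalizing $\sum_{\rho} \|\hat{w}(\rho)\|_{S_{2/L}}^{2/L}$ drives most singular values of each $\hat{w}(\rho)$ toward zero, which is the low-rank structure referred to above; at $L = 2$ the penalty reduces to the nuclear norm of each Fourier coefficient, and larger depths $L$ impose an increasingly sparsity-inducing (non-convex) penalty on the singular values.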