Despite being highly over-parametrized and having the ability to fully interpolate the training data, deep networks are known to generalize well to unseen data. It is now understood that part of the reason for this is that the training algorithms used have certain implicit regularization properties that ensure interpolating solutions with "good" properties are found. This is best understood in linear over-parametrized models, where it has been shown that the celebrated stochastic gradient descent (SGD) algorithm finds the interpolating solution that is closest in Euclidean distance to the initial weight vector. Different regularizers, in which Euclidean distance is replaced by a Bregman divergence, can be obtained if SGD is replaced with stochastic mirror descent (SMD). Empirical observations have shown that in the deep network setting, SMD achieves a generalization performance that is different from that of SGD (and which depends on the choice of SMD's potential function). In an attempt to begin to understand this behavior, we obtain the generalization error of SMD for over-parametrized linear models for a binary classification problem in which the two classes are drawn from a Gaussian mixture model. We present simulation results that validate the theory and, in particular, introduce two data models, one for which SMD with an $\ell_2$ regularizer (i.e., SGD) outperforms SMD with an $\ell_1$ regularizer, and one for which the reverse happens.
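For context, a minimal sketch of the standard SMD update and its known implicit bias in the over-parametrized linear setting; the notation here ($\psi$ for the potential, $D_\psi$ for the induced Bregman divergence, $w_0$ for the initialization, $\eta$ for the step size) is ours and not fixed by the abstract. With instantaneous loss $L_{i_t}$ on the sample drawn at step $t$, the SMD iteration is
\[
\nabla\psi(w_{t+1}) \;=\; \nabla\psi(w_t) \;-\; \eta\,\nabla L_{i_t}(w_t),
\]
and, under suitable step-size conditions in the over-parametrized linear model, the iterates are known to converge to the interpolating solution closest to the initialization in Bregman divergence,
\[
w_\infty \;=\; \arg\min_{w \,:\, x_i^\top w = y_i \ \forall i} D_\psi(w, w_0),
\qquad
D_\psi(w, w') \;=\; \psi(w) - \psi(w') - \nabla\psi(w')^\top (w - w').
\]
Taking $\psi(w) = \tfrac{1}{2}\|w\|_2^2$ recovers SGD and the closest-in-Euclidean-distance interpolant, while other potentials (e.g., $q$-norm potentials with $q$ close to $1$, often used to approximate an $\ell_1$ regularizer) yield the different implicit regularizers referred to above.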