Factorized layers--operations parameterized by products of two or more matrices--occur in a variety of deep learning contexts, including compressed model training, certain types of knowledge distillation, and multi-head self-attention architectures. We study how to initialize and regularize deep nets containing such layers, examining two simple, understudied schemes, spectral initialization and Frobenius decay, for improving their performance. The guiding insight is to design optimization routines for these networks that are as close as possible to those of their well-tuned, non-decomposed counterparts; we back this intuition with an analysis of how the initialization and regularization schemes impact training with gradient descent, drawing on modern attempts to understand the interplay of weight decay and batch normalization. Empirically, we highlight the benefits of spectral initialization and Frobenius decay across a variety of settings. In model compression, we show that they enable low-rank methods to significantly outperform both unstructured sparsity and tensor methods on the task of training low-memory residual networks; analogs of the schemes also improve the performance of tensor decomposition techniques. For knowledge distillation, Frobenius decay enables a simple, overcomplete baseline that yields a compact model from over-parameterized training without requiring retraining with or pruning a teacher network. Finally, we show how both schemes applied to multi-head attention lead to improved performance on both translation and unsupervised pre-training.
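To make the two schemes concrete, the following is a minimal sketch (not the authors' code) of a single factorized linear layer W ≈ UV in PyTorch. The class and method names (`FactorizedLinear`, `frobenius_decay`) and the hyperparameter values are illustrative assumptions: spectral initialization sets the factors from the truncated SVD of a conventionally initialized full matrix, and Frobenius decay penalizes the Frobenius norm of the product UV rather than applying standard weight decay to each factor.

```python
# Hedged sketch of spectral initialization and Frobenius decay for a
# factorized linear layer W ≈ U @ V. Names and constants are illustrative.
import torch
import torch.nn as nn


class FactorizedLinear(nn.Module):
    """Linear layer parameterized as a product of two matrices, W ≈ U @ V."""

    def __init__(self, in_features, out_features, rank):
        super().__init__()
        # Spectral initialization: draw a full matrix with a standard scheme,
        # then set the factors from its rank-r truncated SVD so the factorized
        # layer starts close to its non-decomposed counterpart.
        full = torch.empty(out_features, in_features)
        nn.init.kaiming_normal_(full)
        U_, S, Vt = torch.linalg.svd(full, full_matrices=False)
        sqrt_s = S[:rank].sqrt()
        self.U = nn.Parameter(U_[:, :rank] * sqrt_s)             # (out, rank)
        self.V = nn.Parameter(sqrt_s.unsqueeze(1) * Vt[:rank])   # (rank, in)

    def forward(self, x):
        return x @ (self.U @ self.V).t()

    def frobenius_decay(self):
        # Frobenius decay: penalize ||U V||_F^2 (the product), instead of the
        # usual weight decay ||U||_F^2 + ||V||_F^2 applied to each factor.
        return (self.U @ self.V).pow(2).sum()


# Usage sketch: add the penalty to the task loss and disable the optimizer's
# built-in weight decay on the factors.
layer = FactorizedLinear(128, 64, rank=16)
opt = torch.optim.SGD(layer.parameters(), lr=0.1, weight_decay=0.0)
x, y = torch.randn(32, 128), torch.randn(32, 64)
loss = nn.functional.mse_loss(layer(x), y) + 1e-4 * layer.frobenius_decay()
loss.backward()
opt.step()
```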