A neural network regularizer (e.g., weight decay) boosts performance by explicitly penalizing the complexity of a network. In this paper, we penalize inferior network activations -- feature embeddings -- which in turn regularize the network's weights implicitly. We propose singular value maximization (SVMax) to learn a more uniform feature embedding. The SVMax regularizer supports both supervised and unsupervised learning. Our formulation mitigates model collapse and enables larger learning rates. We evaluate the SVMax regularizer using both retrieval and generative adversarial networks. We leverage a synthetic mixture-of-Gaussians dataset to evaluate SVMax in an unsupervised setting. For retrieval networks, SVMax achieves significant improvement margins across various ranking losses. Code is available at https://bit.ly/3jNkgDt
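A minimal sketch of the core idea, assuming a PyTorch setup: the penalty is the negative mean singular value of the mini-batch embedding matrix, so minimizing it spreads the singular value spectrum and encourages a more uniform embedding. The function name `svmax_penalty`, the weight `lam`, and the placeholder `ranking_loss` are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def svmax_penalty(embeddings: torch.Tensor) -> torch.Tensor:
    """Negative mean singular value of a mini-batch embedding matrix.

    embeddings: (batch_size, dim) feature embeddings. Rows are L2-normalized
    so the singular values stay bounded and the penalty is well scaled.
    """
    z = F.normalize(embeddings, dim=1)   # unit-norm feature embeddings
    s = torch.linalg.svdvals(z)          # singular values of the batch matrix
    return -s.mean()                     # maximizing the mean singular value

# Hypothetical usage: add the penalty to any ranking loss with weight `lam`.
# loss = ranking_loss(z, labels) + lam * svmax_penalty(z)
```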