The vicinal risk minimization (VRM) principle is a variant of empirical risk minimization (ERM) that replaces the Dirac masses centered at training examples with vicinal functions. There is strong numerical and theoretical evidence that VRM outperforms ERM in terms of generalization when appropriate vicinal functions are chosen. Mixup training (MT), a popular choice of vicinal distribution, improves the generalization of models by encouraging globally linear behavior between training examples. Beyond generalization, recent works have shown that mixup-trained models are relatively robust to input perturbations/corruptions and, at the same time, are better calibrated than their non-mixup counterparts. In this work, we investigate the benefits of defining such vicinal distributions in the latent space of a generative model rather than in the input space itself. We propose a new approach - \textit{VarMixup (Variational Mixup)} - that samples better mixup images by exploiting the latent manifold underlying the data. Our empirical studies on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that models trained by performing mixup in the latent manifold learned by a VAE are inherently more robust to various input corruptions/perturbations, are significantly better calibrated, and exhibit more locally linear loss landscapes.
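For concreteness, standard mixup draws $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$ and interpolates pairs of examples in input space,
\[
\tilde{x} = \lambda x_i + (1-\lambda)\, x_j, \qquad \tilde{y} = \lambda y_i + (1-\lambda)\, y_j .
\]
A minimal sketch of the latent-space variant described above, assuming the interpolation is applied to VAE encodings and decoded back to image space ($\mathrm{Enc}$ and $\mathrm{Dec}$ denote the VAE encoder and decoder; this notation is ours for illustration), is
\[
\tilde{z} = \lambda\, \mathrm{Enc}(x_i) + (1-\lambda)\, \mathrm{Enc}(x_j), \qquad \tilde{x} = \mathrm{Dec}(\tilde{z}),
\]
with labels mixed as before.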