Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data. Recently, dynamic mixup methods have effectively improved on earlier static policies (e.g., linear interpolation) by maximizing salient regions or preserving the target in mixed samples. The key difference is that mixed samples generated by dynamic policies are more instance-discriminative than those from static ones, e.g., the foreground objects are decoupled from the background. However, optimizing mixup policies with dynamic methods in the input space is computationally expensive compared to static ones. Hence, we transfer the decoupling mechanism of dynamic methods from the data level to the objective-function level and propose the general decoupled mixup (DM) loss. The primary effect is that DM can adaptively focus on discriminative features without losing the original smoothness of mixup, while avoiding heavy computational overhead. As a result, DM enables static mixup methods to match or even exceed the performance of dynamic methods. This also raises an interesting objective-design problem for mixup training: we need to focus on both smoothing the decision boundaries and identifying discriminative features. Extensive experiments on supervised and semi-supervised learning benchmarks across seven classification datasets validate the effectiveness of DM when it is equipped with various mixup methods.
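To make the idea concrete, the following is a minimal PyTorch sketch of a static mixup step paired with a decoupled objective. It is illustrative only: the function names `mixup_data` and `decoupled_mixup_loss`, the weight `eta`, and the specific form of the decoupled term (rewarding probability mass on both source classes independently of the mixing ratio) are assumptions for exposition, not necessarily the exact DM formulation.

```python
import torch
import torch.nn.functional as F

def mixup_data(x, y, alpha=1.0):
    """Static mixup: linearly interpolate a batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[index]
    return x_mix, y, y[index], lam

def decoupled_mixup_loss(logits, y_a, y_b, lam, eta=0.1):
    """Mixed cross-entropy (smoothness) plus a hypothetical decoupled term
    (discrimination); `eta` trades off the two objectives."""
    # Smoothing part: the standard lambda-weighted mixup cross-entropy.
    ce = lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
    # Decoupled part: encourage high total probability on BOTH mixed
    # classes, decoupled from the mixing ratio lam, so the model must
    # identify the discriminative features of each source class.
    prob = logits.softmax(dim=1)
    p_ab = prob.gather(1, y_a.unsqueeze(1)) + prob.gather(1, y_b.unsqueeze(1))
    dm = -torch.log(p_ab.clamp_min(1e-8)).mean()
    return ce + eta * dm
```

In use, one would replace the plain cross-entropy in an ordinary mixup training loop with `decoupled_mixup_loss(model(x_mix), y_a, y_b, lam)`; the data-generation side stays a cheap static policy, which is the point of moving the decoupling into the objective.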