Adversarial examples are inputs crafted by attackers to cause machine learning models to make mistakes. In this paper, we demonstrate that adversarial examples can also be used for good: to improve the performance of imbalanced learning. We offer a new perspective on handling imbalanced data: adjust the biased decision boundary by training with Guiding Adversarial Examples (GAEs). Our method effectively increases the accuracy of minority classes while sacrificing little accuracy on majority classes. We show empirically, on several benchmark datasets, that our proposed method is comparable to state-of-the-art methods. To the best of our knowledge, we are the first to tackle imbalanced learning with adversarial examples.
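To make the idea concrete, the sketch below shows one plausible way such guiding adversarial examples could be generated and mixed into training: a PGD-style inner loop perturbs minority-class samples while keeping their original labels, so that training on the perturbed points pushes the decision boundary outward around the minority region. The function name `generate_gaes`, the hyperparameters, and the choice of a PGD-style attack are our assumptions for illustration; they are not the paper's exact GAE construction.

```python
import torch
import torch.nn.functional as F

def generate_gaes(model, x_min, y_min, epsilon=0.03, alpha=0.01, steps=5):
    """Hedged sketch of GAE generation (assumed PGD-style, not the
    paper's exact algorithm): perturb minority-class inputs toward the
    model's mistakes while keeping their original minority labels."""
    x_adv = x_min.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_min)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss (standard PGD step), then project back into
        # the epsilon-ball around the clean input and the valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x_min + torch.clamp(x_adv - x_min, -epsilon, epsilon)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

# Assumed usage: at each training step, append GAEs (labeled with their
# original minority classes) to the batch before the usual update.
# x_maj/y_maj and x_min/y_min denote the majority- and minority-class
# portions of the batch (hypothetical names).
def training_step(model, optimizer, x_maj, y_maj, x_min, y_min):
    x_gae = generate_gaes(model, x_min, y_min)
    x_batch = torch.cat([x_maj, x_min, x_gae])
    y_batch = torch.cat([y_maj, y_min, y_min])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_batch), y_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the perturbed samples retain their minority labels, this variant acts like a targeted form of data augmentation near the boundary; whether the paper's GAEs perturb minority samples, majority samples, or both is not specified in the abstract.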