The long-standing challenge of building effective classification models for small, imbalanced datasets has seen little improvement since the introduction of the Synthetic Minority Over-sampling Technique (SMOTE) over 20 years ago. Although GAN-based models appear promising, purpose-built architectures for this problem have been lacking, as most previous studies focus on applying existing models. This paper proposes a unique, performance-oriented, data-generating strategy that uses a new architecture, coined draGAN, to generate both minority and majority samples. The samples are generated with the objective of optimizing the classification model's performance rather than maximizing similarity to the real data. We benchmark our approach against state-of-the-art methods from the SMOTE family and competitive GAN-based approaches on 94 tabular datasets with varying degrees of imbalance and linearity. Empirically, we demonstrate the superiority of draGAN, but also highlight some of its shortcomings. All code is available at: https://github.com/LeonGuertler/draGAN.