This paper evaluates XGBoost's performance across different dataset sizes and class distributions, from perfectly balanced to highly imbalanced. XGBoost was selected for evaluation because it stands out in several benchmarks for its detection performance and speed. After introducing the problem of fraud detection, the paper reviews evaluation metrics for detection systems, i.e., binary classifiers, and illustrates with examples how different metrics behave on balanced and imbalanced datasets. It then examines the principles of XGBoost, proposes a pipeline for data preparation, and compares a Vanilla XGBoost against a random-search-tuned XGBoost. Random-search tuning provides consistent improvement for large datasets of 100 thousand samples, but not for medium and small datasets of 10 thousand and 1 thousand samples, respectively. Furthermore, as expected, XGBoost's detection performance improves as more data becomes available and deteriorates as the datasets become more imbalanced. Tests on distributions with 50, 45, 25, and 5 percent positive samples show that the largest drop in detection performance occurs for the distribution with only 5 percent positive samples. Sampling to balance the training set does not provide consistent improvement. Therefore, future work will include a systematic study of techniques for handling data imbalance, as well as an evaluation of other approaches, including graph-based methods, autoencoders, and generative adversarial methods, to deal with the lack of labels.