Many classification algorithms have been applied across industries to solve problems met in real-life scenarios. In many binary classification tasks, however, the minority class accounts for only a small fraction of all instances, so the available datasets often suffer from a high imbalance ratio. When faced with such skewed data, existing models sometimes treat the minority class as noise or discard its samples as outliers. To address this problem, we propose $ASE$ (Anomaly Scoring Based Ensemble Learning), a bagging ensemble learning framework. $ASE$ uses a scoring system built on anomaly detection algorithms to guide the resampling strategy: samples in the majority class are divided into subspaces according to their anomaly scores, a specific number of instances is under-sampled from each subspace, and each sampled subset is combined with the minority class to construct a training set. The weight of each base classifier trained on such a subset is then computed from the classification result of the anomaly detection model and the statistics of its subspace. Experiments show that our ensemble learning model dramatically improves the performance of base classifiers and is more efficient than other existing methods over a wide range of imbalance ratios, data scales, and data dimensions. $ASE$ can be combined with various classifiers, and every part of the framework is shown to be reasonable and necessary.
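The pipeline described above (score the majority class, partition it into subspaces, under-sample each subspace, train and weight base classifiers) can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact method: the choice of IsolationForest as the anomaly detector, the quantile-based partitioning, and the size-share weighting rule are all assumptions made for the sketch.

```python
# Illustrative ASE-style pipeline. The detector (IsolationForest), the number
# of subspaces, and the weighting rule are assumptions, not the paper's choices.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Imbalanced toy data: 500 majority samples (label 0), 25 minority (label 1).
X_maj = rng.normal(0.0, 1.0, size=(500, 5))
X_min = rng.normal(2.0, 1.0, size=(25, 5))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 500 + [1] * 25)

# 1) Score the majority class with an anomaly detection model.
scores = IsolationForest(random_state=0).fit(X_maj).score_samples(X_maj)

# 2) Partition the majority class into subspaces by anomaly-score quantiles.
n_sub = 5
edges = np.quantile(scores, np.linspace(0.0, 1.0, n_sub + 1))
bins = np.digitize(scores, edges[1:-1])          # bin index in 0..n_sub-1
subspaces = [np.where(bins == i)[0] for i in range(n_sub)]

# 3) Under-sample each subspace, combine with the minority class, and train
#    one base classifier per subset; weight it by its subspace's size share.
models, weights = [], []
for idx in subspaces:
    take = rng.choice(idx, size=min(len(idx), len(X_min)), replace=False)
    Xs = np.vstack([X_maj[take], X_min])
    ys = np.array([0] * len(take) + [1] * len(X_min))
    models.append(DecisionTreeClassifier(random_state=0).fit(Xs, ys))
    weights.append(len(idx) / len(X_maj))        # assumed weighting rule

# 4) Weighted soft vote over the base classifiers (bagging-style ensemble).
def predict(Xq):
    probs = sum(w * m.predict_proba(Xq) for w, m in zip(weights, models))
    return probs.argmax(axis=1)

print(predict(X[:3]), predict(X[-3:]))
```

In the actual framework the weights also incorporate the anomaly detector's classification result; here they are reduced to subspace size shares purely to keep the sketch short.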