The laborious process of labeling data often bottlenecks projects that aim to leverage the power of supervised machine learning. Active Learning (AL) has been established as a technique to ameliorate this condition through an iterative framework that queries a human annotator for labels of the instances with the most uncertain class assignment. Via this mechanism, AL produces a binary classifier trained on less labeled data with little, if any, loss in predictive performance. Despite its advantages, AL can struggle with class-imbalanced datasets and can result in an inefficient labeling process. To address these drawbacks, we investigate our unsupervised instance selection (UNISEL) technique followed by a Random Forest (RF) classifier on 10 outlier detection datasets under low-label conditions. These results are compared to AL performed on the same datasets. Further, we investigate the combination of UNISEL and AL. Results indicate that UNISEL followed by an RF performs comparably to AL with an RF, and that the combination of UNISEL and AL demonstrates superior performance. The practical implications of these findings in terms of time savings and generalizability afforded by UNISEL are discussed.
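The uncertainty-based querying loop that AL relies on can be sketched as follows. This is a minimal pool-based illustration using scikit-learn's `RandomForestClassifier` on synthetic two-cluster data; the dataset, seed, query budget, and margin-based uncertainty measure are illustrative assumptions, not the paper's actual experimental setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic two-class pool (an illustrative stand-in for a real dataset).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),   # class 0 cluster
               rng.normal(3.0, 1.0, (100, 2))])  # class 1 cluster
y = np.array([0] * 100 + [1] * 100)

# Seed the labeled set with two instances per class; the rest form the pool.
labeled = [0, 1, 100, 101]
pool = [i for i in range(len(y)) if i not in labeled]

# Pool-based active learning: at each step, query the instance whose
# predicted class probabilities are closest to 0.5 (smallest margin),
# i.e., the most uncertain class assignment.
for _ in range(10):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    margin = np.abs(proba[:, 0] - proba[:, 1])
    query = pool[int(np.argmin(margin))]
    labeled.append(query)   # the human "oracle" supplies y[query]
    pool.remove(query)

print(len(labeled))  # 14 labels after 10 queries
```

Each iteration retrains the classifier on the growing labeled set and spends the annotation budget on the instance the model is least sure about, which is the mechanism by which AL reaches comparable accuracy with fewer labels.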