Data selection methods, such as active learning and core-set selection, are useful tools for machine learning on large datasets. However, they can be prohibitively expensive to apply in deep learning because they depend on feature representations that need to be learned. In this work, we show that we can greatly improve the computational efficiency by using a small proxy model to perform data selection (e.g., selecting data points to label for active learning). By removing hidden layers from the target model, using smaller architectures, and training for fewer epochs, we create proxies that are an order of magnitude faster to train. Although these small proxy models have higher error rates, we find that they empirically provide useful signals for data selection. We evaluate this "selection via proxy" (SVP) approach on several data selection tasks across five datasets: CIFAR10, CIFAR100, ImageNet, Amazon Review Polarity, and Amazon Review Full. For active learning, applying SVP can give an order of magnitude improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points) without significantly increasing the final error (often within 0.1%). For core-set selection on CIFAR10, proxies that are over 10x faster to train than their larger, more accurate targets can remove up to 50% of the data without harming the final accuracy of the target, leading to a 1.6x end-to-end training time improvement.
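As a concrete illustration of the active-learning variant, the sketch below shows one selection round in which a small proxy model scores an unlabeled pool by predictive uncertainty and returns the points to label next. This is a minimal sketch under stated assumptions: the `ProxyNet` architecture, the entropy acquisition function, and all hyperparameters are illustrative choices, not the paper's exact setup.

```python
# Minimal sketch of one "selection via proxy" (SVP) active-learning round.
# Assumptions: the proxy is a deliberately small CNN, uncertainty is
# measured by softmax entropy, and the unlabeled loader iterates the
# pool in a fixed order (shuffle=False) so score positions map back to
# dataset indices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyNet(nn.Module):
    """A small stand-in for the large target model: fewer layers,
    trained for fewer epochs, so repeated selection rounds are cheap."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def select_via_proxy(proxy, unlabeled_loader, budget, device="cpu"):
    """Rank unlabeled points by the proxy's predictive entropy and
    return the indices of the `budget` most uncertain examples."""
    proxy.eval()
    scores = []
    with torch.no_grad():
        for x, _ in unlabeled_loader:        # labels are ignored
            probs = F.softmax(proxy(x.to(device)), dim=1)
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
            scores.append(entropy.cpu())
    scores = torch.cat(scores)
    return torch.topk(scores, budget).indices   # send these for labeling
```

In SVP, the points selected this way are labeled and used to train the larger target model; because only the cheap proxy is retrained between rounds, the repeated train-and-select loop is what becomes an order of magnitude faster.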
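For the core-set side, one standard formulation is greedy k-center over learned feature representations, which under SVP can run on the proxy's much cheaper features instead of the target's. The sketch below is an illustrative assumption, not the paper's definitive procedure: `features` is presumed to be a tensor of proxy penultimate-layer activations, and k-center is only one of several selection methods one could plug in.

```python
# Minimal sketch of core-set selection via proxy features using greedy
# k-center. Assumption: `features` is an (n, d) tensor of activations
# extracted from the trained proxy (e.g., its penultimate layer).
import torch

def kcenter_greedy(features, budget):
    """Greedily pick `budget` points so every remaining point lies
    close to some selected point in the proxy's feature space."""
    n = features.size(0)
    selected = [torch.randint(n, (1,)).item()]     # arbitrary first center
    min_dist = torch.cdist(features, features[selected]).squeeze(1)
    for _ in range(budget - 1):
        idx = torch.argmax(min_dist).item()        # farthest point so far
        selected.append(idx)
        new_dist = torch.cdist(features, features[idx:idx + 1]).squeeze(1)
        min_dist = torch.minimum(min_dist, new_dist)
    return selected   # train the target on this subset; drop the rest
```

Under this scheme, the subset chosen with the fast-to-train proxy's features is what the larger target model trains on, which is how removing up to 50% of CIFAR10 can yield the reported 1.6x end-to-end speedup without hurting final accuracy.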