One-shot coreset selection aims to select a representative subset of the training data, given a pruning rate, that can later be used to train future models while retaining high accuracy. State-of-the-art (SOTA) coreset selection methods pick the highest-importance examples based on an importance metric and are found to perform well at low pruning rates. However, at high pruning rates, they suffer from a catastrophic accuracy drop, performing worse than even random sampling. This paper explores the reasons behind this accuracy drop both theoretically and empirically. We first propose a novel metric to measure the coverage of a dataset on a specific distribution by extending the classical geometric set cover problem to a distribution cover problem. This metric helps explain why coresets selected by SOTA methods at high pruning rates perform worse than random sampling: their coverage of the data distribution is poorer. We then propose a novel one-shot coreset selection method, Coverage-centric Coreset Selection (CCS), that jointly considers overall coverage of the data distribution as well as the importance of each example. We evaluate CCS on five datasets and show that, at high pruning rates (e.g., 90%), it achieves significantly better accuracy than previous SOTA methods (e.g., at least 19.56% higher on CIFAR10) as well as random selection (e.g., 7.04% higher on CIFAR10), and comparable accuracy at low pruning rates. We make our code publicly available at https://github.com/haizhongzheng/Coverage-centric-coreset-selection.
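To make the coverage-centric idea concrete, below is a minimal Python sketch of a stratified, importance-aware selection routine in the spirit of CCS: it first discards a small fraction of the hardest examples (often noisy outliers), then partitions the rest into strata by importance score and fills the selection budget evenly across strata so that every score range remains covered. The function name, parameters, and default values here are illustrative assumptions for exposition, not the paper's exact implementation (see the linked repository for that).

```python
import numpy as np

def coverage_centric_select(scores: np.ndarray, budget: int,
                            num_strata: int = 50,
                            hard_cutoff: float = 0.1,
                            seed: int = 0) -> np.ndarray:
    """Return `budget` example indices via stratified sampling over
    importance scores (higher score = harder example)."""
    rng = np.random.default_rng(seed)

    # Drop the hardest `hard_cutoff` fraction, which is often mislabeled
    # or outlier data that hurts accuracy at high pruning rates.
    order = np.argsort(scores)  # ascending: easy -> hard
    kept = order[: int(len(scores) * (1 - hard_cutoff))]

    # Partition kept examples into strata of equal score range.
    lo, hi = scores[kept].min(), scores[kept].max()
    edges = np.linspace(lo, hi, num_strata + 1)
    strata = [kept[(scores[kept] >= edges[i]) & (scores[kept] < edges[i + 1])]
              for i in range(num_strata)]
    strata[-1] = kept[scores[kept] >= edges[-2]]  # include right endpoint

    # Fill the budget evenly across non-empty strata, smallest first, so
    # leftover budget from small strata rolls over to larger ones while
    # every score range (hence more of the distribution) stays covered.
    strata = sorted([s for s in strata if len(s)], key=len)
    selected, remaining = [], budget
    for i, stratum in enumerate(strata):
        take = min(len(stratum), remaining // (len(strata) - i))
        selected.append(rng.choice(stratum, size=take, replace=False))
        remaining -= take
    return np.concatenate(selected)
```

For instance, with per-example forgetting counts or EL2N scores as `scores`, selecting a 10% coreset of CIFAR10 would be `coverage_centric_select(scores, budget=5000)`; a purely importance-ranked baseline would instead take the top-scoring 5000 examples, concentrating the coreset in the hardest score range and leaving the rest of the distribution uncovered.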