We present a subset selection algorithm designed to work with arbitrary model families in a practical batch setting. In such a setting, an algorithm can sample examples one at a time but, in order to limit overhead costs, can only update its state (i.e., further train model weights) once a sufficiently large batch of examples has been selected. Our algorithm, IWeS, selects examples by importance sampling, where the sampling probability assigned to each example is based on the entropy of models trained on previously selected batches. IWeS achieves significant performance improvements over other subset selection algorithms on seven publicly available datasets. It is also competitive in an active learning setting, where label information is not available at selection time. Finally, we provide an initial theoretical analysis supporting our importance-weighting approach, proving generalization and sampling-rate bounds.
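To make the core mechanism concrete, the snippet below is a minimal sketch of entropy-based importance sampling under our own simplifying assumptions: sampling probabilities proportional to predictive entropy, with-replacement sampling, and standard inverse-probability weights. The function names and the exact sampling distribution are illustrative, not the paper's precise procedure.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy of each example's predicted class distribution; probs has shape (n, k)."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_batch_by_entropy(probs, batch_size, rng=None):
    """Importance-sample a batch with probability proportional to entropy (illustrative).

    probs: (n_examples, n_classes) predictions from the model trained on
        previously selected batches.
    Returns (indices, weights); weighting each sampled example's loss by
    1 / (n * p_i) keeps the weighted average loss unbiased under
    with-replacement sampling.
    """
    rng = np.random.default_rng() if rng is None else rng
    ent = predictive_entropy(probs)
    # Small floor avoids a degenerate all-zero distribution when predictions are one-hot.
    p = (ent + 1e-12) / (ent + 1e-12).sum()
    idx = rng.choice(len(p), size=batch_size, replace=True, p=p)
    weights = 1.0 / (len(p) * p[idx])  # inverse-probability importance weights
    return idx, weights
```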