Data subset selection from a large pool of training instances has been a successful approach toward efficient and cost-effective machine learning. However, models trained on a smaller subset may show poor generalization. In this paper, our goal is to design an algorithm for selecting a subset of the training data, so that the model can be trained quickly without significantly sacrificing accuracy. More specifically, we focus on data subset selection for L2 regularized regression problems and provide a novel problem formulation that seeks to minimize the training loss with respect to both the trainable parameters and the subset of training data, subject to error bounds on the validation set. We tackle this problem using several technical innovations. First, we use the dual of the original training problem to represent it with simplified constraints, and we show that the objective of this new representation is a monotone and α-submodular function for a wide variety of modeling choices. These properties lead us to develop SELCON, an efficient majorization-minimization algorithm for data subset selection that admits an approximation guarantee even when the training routine provides an imperfect estimate of the trained model. Finally, our experiments on several datasets show that SELCON trades off accuracy and efficiency more effectively than the current state of the art.
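To make the formulation concrete, the following is a minimal NumPy sketch, not the authors' SELCON implementation: for a candidate subset it fits L2 regularized regression in closed form, scores the subset by its training loss plus a soft penalty standing in for the validation error-bound constraint, and grows the subset greedily as a simple stand-in for the majorization-minimization updates. The data, regularization strength, tolerance, and greedy rule are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy data; in practice these come from the task at hand.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 5)), rng.normal(size=200)
X_val, y_val = rng.normal(size=(50, 5)), rng.normal(size=50)

LAM = 1.0  # L2 regularization strength (assumed value)

def ridge_fit(X, y, lam=LAM):
    """Closed-form L2 regularized least-squares solution on the given data."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def subset_objective(S, val_tol=1.5):
    """Training loss of the model fit on subset S, plus a soft penalty
    standing in for the constraint 'validation loss <= val_tol'."""
    S = list(S)
    w = ridge_fit(X_train[S], y_train[S])
    train_loss = np.mean((X_train[S] @ w - y_train[S]) ** 2) + LAM * (w @ w)
    val_loss = np.mean((X_val @ w - y_val) ** 2)
    return train_loss + max(0.0, val_loss - val_tol)

def greedy_select(budget):
    """Greedily grow the subset, at each step adding the training point
    that most decreases the constrained objective (an illustrative
    surrogate for the paper's majorization-minimization procedure)."""
    S, rest = set(), set(range(len(X_train)))
    while len(S) < budget:
        best = min(rest, key=lambda i: subset_objective(S | {i}))
        S.add(best)
        rest.remove(best)
    return S

subset = greedy_select(budget=20)
print(sorted(subset))
```

Under this reading, the monotone α-submodular structure of the dual objective is what justifies greedy-style updates with an approximation guarantee; the sketch above only mirrors the shape of the optimization, not its analysis.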