The increasing availability of massive data sets poses a series of challenges for machine learning. Prominent among these is the need to learn models under hardware or human resource constraints. In such resource-constrained settings, a simple yet powerful approach is to operate on small subsets of the data. Coresets are weighted subsets of the data that provide approximation guarantees for the optimization objective. However, existing coreset constructions are highly model-specific and are limited to simple models such as linear regression, logistic regression, and $k$-means. In this work, we propose a generic coreset construction framework that formulates coreset selection as a cardinality-constrained bilevel optimization problem. In contrast to existing approaches, our framework does not require model-specific adaptations and applies to any twice differentiable model, including neural networks. We show the effectiveness of our framework for a wide range of models in various settings, including training non-convex models online and batch active learning.
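To make the formulation concrete, a cardinality-constrained bilevel program for coreset selection can be sketched as follows; here $w \in \mathbb{R}^n_{\ge 0}$ denotes per-point weights, $m$ the coreset size budget, $\ell$ the training loss, and $\theta^*(w)$ the model fit on the weighted data. The precise objective and constraints below are an illustrative assumption rather than a verbatim statement of the method:
\[
\min_{w \ge 0,\; \|w\|_0 \le m} \;\; \sum_{i=1}^{n} \ell\bigl(x_i, \theta^*(w)\bigr)
\quad \text{subject to} \quad
\theta^*(w) \in \arg\min_{\theta} \sum_{i=1}^{n} w_i\, \ell(x_i, \theta),
\]
where the outer problem selects at most $m$ points (the nonzero entries of $w$) so that the model trained on the weighted subset in the inner problem performs well on the full data set; the cardinality constraint $\|w\|_0 \le m$ is what makes the result a coreset rather than a reweighting of all points.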