Dataset distillation aims to learn a small synthetic dataset that preserves most of the information from the original dataset. Dataset distillation can be formulated as a bi-level meta-learning problem where the outer loop optimizes the meta-dataset and the inner loop trains a model on the distilled data. Meta-gradient computation is one of the key challenges in this formulation, as differentiating through the inner-loop learning procedure introduces significant computation and memory costs. In this paper, we address these challenges using neural Feature Regression with Pooling (FRePo), achieving state-of-the-art performance with an order of magnitude less memory and two orders of magnitude faster training than previous methods. The proposed algorithm is analogous to truncated backpropagation through time with a pool of models to alleviate various types of overfitting in dataset distillation. FRePo significantly outperforms previous methods on CIFAR100, Tiny ImageNet, and ImageNet-1K. Furthermore, we show that high-quality distilled data can greatly improve various downstream applications, such as continual learning and membership inference defense. Please check out our webpage at https://sites.google.com/view/frepo.
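To make the formulation concrete, the snippet below is a minimal, hedged sketch of a FRePo-style meta-loss in JAX: instead of backpropagating through unrolled inner-loop training, the distilled labels are fit in closed form by kernel ridge regression on the features of a network drawn from a model pool, and the meta-gradient flows only through the distilled images and labels. The names frepo_meta_loss and feature_fn, and the ridge-scaling choice, are illustrative assumptions rather than the paper's exact implementation.

```python
import jax
import jax.numpy as jnp

def frepo_meta_loss(x_syn, y_syn, x_real, y_real, params, feature_fn, reg=1e-6):
    """Hedged sketch: kernel ridge regression readout on network features."""
    f_syn = feature_fn(params, x_syn)    # [n_syn, d] features of distilled data
    f_real = feature_fn(params, x_real)  # [n_real, d] features of a real batch
    k_ss = f_syn @ f_syn.T               # [n_syn, n_syn] Gram matrix
    k_rs = f_real @ f_syn.T              # [n_real, n_syn] cross Gram matrix
    # Ridge term scaled by the mean diagonal of the Gram matrix (an assumed normalization).
    ridge = reg * jnp.trace(k_ss) / k_ss.shape[0] * jnp.eye(k_ss.shape[0])
    # Closed-form predictions on the real batch; no inner-loop SGD is unrolled.
    y_pred = k_rs @ jnp.linalg.solve(k_ss + ridge, y_syn)
    return 0.5 * jnp.mean((y_pred - y_real) ** 2)

# Meta-gradient w.r.t. the distilled images only; the inner "training" is the
# closed-form regression above, so differentiation through it is cheap.
meta_grad_fn = jax.grad(frepo_meta_loss, argnums=0)
```

In this sketch the feature extractor would be sampled from, and periodically refreshed in, a pool of partially trained models (the pooling referred to in FRePo); the snippet only shows the loss for a single drawn model.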