Dataset distillation aims to synthesize small datasets that retain as much information as possible from large-scale datasets, thereby reducing storage and training costs. Recent state-of-the-art methods mainly constrain the sample synthesis process by matching synthetic images against the original ones in terms of gradients, embedding distributions, or training trajectories. Although these matching objectives vary, the current strategy for selecting original images is limited to naive random sampling. We argue that random sampling inevitably involves samples near the decision boundaries, which may provide large or noisy matching targets. Moreover, random sampling cannot guarantee the evenness and diversity of the sample distribution. These factors together lead to large optimization oscillations and degrade the matching efficiency. Accordingly, we propose a novel matching strategy named \textbf{D}ataset distillation by \textbf{RE}present\textbf{A}tive \textbf{M}atching (DREAM), where only representative original images are selected for matching. DREAM can be easily plugged into popular dataset distillation frameworks and reduces the matching iterations by 10 times without performance drop. Given sufficient training time, DREAM yields further significant improvements and achieves state-of-the-art performance.
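To make the idea of representative selection concrete, below is a minimal PyTorch sketch of one plausible way to replace naive random sampling: a per-class k-means-style clustering over sample embeddings, returning the real samples nearest each cluster center. These samples sit near cluster interiors (away from decision boundaries) and cover the class distribution evenly. This is an illustrative assumption, not the paper's implementation; the function name, `num_select`, and `num_iters` are hypothetical.

\begin{verbatim}
import torch

def select_representative_indices(features, num_select, num_iters=10):
    """Select representative samples of one class via k-means.

    features: (N, D) tensor of per-sample embeddings for a class.
    Returns indices of the samples closest to each cluster center,
    yielding an even, diverse subset instead of a random draw.
    """
    N = features.size(0)
    # Initialize centers from randomly chosen samples.
    centers = features[torch.randperm(N)[:num_select]].clone()
    for _ in range(num_iters):
        # Assign each sample to its nearest center.
        dists = torch.cdist(features, centers)   # (N, K)
        assign = dists.argmin(dim=1)             # (N,)
        # Move each center to the mean of its members.
        for k in range(num_select):
            members = features[assign == k]
            if members.numel() > 0:
                centers[k] = members.mean(dim=0)
    # Pick the real sample nearest each final center: these lie
    # near cluster interiors, away from decision boundaries.
    dists = torch.cdist(features, centers)
    return dists.argmin(dim=0)                   # (K,)
\end{verbatim}

In a matching-based distillation loop, such indices would be used to draw the original-image batch whose gradients, embeddings, or trajectories the synthetic images are matched against, in place of a uniformly random batch.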