In the era of deep learning, training deep neural networks often requires extensive data, leading to substantial costs. Dataset condensation addresses this by learning a small synthetic set that preserves the essential information of the original large-scale dataset. Optimization-oriented methods currently dominate dataset condensation and achieve state-of-the-art (SOTA) results, but their computationally intensive bi-level optimization hinders their practicality on large datasets. As a more efficient alternative, Distribution-Matching (DM)-based methods reduce costs by aligning the representation distributions of real and synthetic examples. However, current DM-based methods still fall short of SOTA optimization-oriented methods. In this paper, we argue that existing DM-based methods overlook the higher-order alignment of the distributions, which may lead to sub-optimal matching results. Motivated by this, we propose a new DM-based method named Efficient Dataset Condensation by Higher-Order Distribution Alignment (ECHO). Specifically, rather than aligning only the first-order moment of the representation distributions as previous methods do, we learn synthetic examples by further aligning the higher-order moments of the representation distributions of real and synthetic examples, based on the classical theory of reproducing kernel Hilbert spaces. Experiments demonstrate that the proposed method achieves a significant performance boost while maintaining efficiency across various scenarios.
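The following is a minimal sketch (not the authors' released code) of the general idea of matching distributions beyond the first moment via a kernel criterion in an RKHS: with a Gaussian kernel, a maximum mean discrepancy (MMD)-style loss implicitly aligns all moments of the real and synthetic representation distributions. The feature extractor `f`, the bandwidth `sigma`, and the variable names are assumptions for illustration only.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel values k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    dist2 = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-dist2 / (2 * sigma ** 2))

def mmd_loss(real_feats, syn_feats, sigma=1.0):
    # Squared MMD between the two empirical distributions in the RKHS induced by
    # the Gaussian kernel; it vanishes only when the distributions (and hence all
    # their moments) coincide, going beyond first-moment (mean) matching.
    k_rr = gaussian_kernel(real_feats, real_feats, sigma).mean()
    k_ss = gaussian_kernel(syn_feats, syn_feats, sigma).mean()
    k_rs = gaussian_kernel(real_feats, syn_feats, sigma).mean()
    return k_rr + k_ss - 2 * k_rs

# Usage sketch (hypothetical names): f is a feature extractor, x_real a batch of
# real images, x_syn the learnable synthetic images updated to minimize the loss.
# loss = mmd_loss(f(x_real), f(x_syn))
# loss.backward(); optimizer.step()
```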