Progress in machine learning has been driven in large part by massive increases in data. However, large web-scale datasets such as LAION are largely uncurated beyond searches for exact duplicates, potentially leaving much redundancy. Here, we introduce SemDeDup, a method which leverages embeddings from pre-trained models to identify and remove semantic duplicates: data pairs which are semantically similar, but not exactly identical. Removing semantic duplicates preserves performance and speeds up learning. Analyzing a subset of LAION, we show that SemDeDup can remove 50% of the data with minimal performance loss, effectively halving training time. Moreover, performance increases out of distribution. Also, analyzing language models trained on C4, a partially curated dataset, we show that SemDeDup improves over prior approaches while providing efficiency gains. SemDeDup provides an example of how simple ways of leveraging quality embeddings can be used to make models learn faster with less data.
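To make the idea concrete, the sketch below shows one way embedding-based semantic deduplication can be implemented. It is a minimal illustration, not the paper's exact procedure: it assumes embeddings from a pre-trained model are already computed, and the clustering step, cluster count, and cosine-similarity threshold are hypothetical choices used only to keep pairwise comparisons tractable.

```python
# Minimal sketch of embedding-based semantic deduplication.
# Assumes per-example embeddings from a pre-trained model are given;
# the clustering step and the 0.95 similarity threshold are
# illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.cluster import KMeans

def semantic_dedup(embeddings: np.ndarray, n_clusters: int = 10,
                   threshold: float = 0.95) -> np.ndarray:
    """Return indices of examples kept after removing semantic duplicates.

    embeddings: (n_samples, dim) array of pre-trained model embeddings.
    threshold:  cosine similarity above which a pair is treated as a
                semantic duplicate (hypothetical value).
    """
    # Normalize so dot products equal cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

    # Cluster first so pairwise comparisons stay cheap: duplicates are
    # only searched for within each cluster, not across the whole set.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(normed)

    keep = np.ones(len(normed), dtype=bool)
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        sims = normed[idx] @ normed[idx].T  # within-cluster cosine similarities
        np.fill_diagonal(sims, 0.0)
        for i in range(len(idx)):
            if not keep[idx[i]]:
                continue
            # Drop every later example that is too similar to a kept one.
            dup = np.where(sims[i] > threshold)[0]
            keep[idx[dup[dup > i]]] = False
    return np.where(keep)[0]

# Example usage: random embeddings stand in for real image/text features.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(1000, 64))
    emb[500:] = emb[:500] + 0.01 * rng.normal(size=(500, 64))  # planted near-duplicates
    kept = semantic_dedup(emb, n_clusters=8, threshold=0.95)
    print(f"kept {len(kept)} of {len(emb)} examples")
```

On data with planted near-duplicates like the toy example above, roughly half the examples are removed while the distinct half is retained, mirroring the 50% reduction reported for the LAION subset.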