Dimensionality reduction methods are unsupervised approaches that learn low-dimensional spaces where some properties of the initial space, typically the notion of "neighborhood", are preserved. Such methods usually require propagation on large k-NN graphs or complicated optimization solvers. On the other hand, self-supervised learning approaches, typically used to learn representations from scratch, rely on simple and more scalable frameworks for learning. In this paper, we propose TLDR, a dimensionality reduction method for generic input spaces that ports the recent self-supervised learning framework of Zbontar et al. (2021) to the specific task of dimensionality reduction over arbitrary representations. We propose to use nearest neighbors to build pairs from a training set, and a redundancy reduction loss to learn an encoder that produces representations invariant across such pairs. TLDR is a method that is simple, easy to train, and of broad applicability; it consists of an offline nearest neighbor computation step that can be highly approximated, and a straightforward learning process. Aiming for scalability, we focus on improving linear dimensionality reduction, and show consistent gains on image and document retrieval tasks, e.g. gaining +4% mAP over PCA on ROxford for GeM-AP, improving the performance of DINO on ImageNet, or retaining it with a 10x compression.
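To make the described recipe concrete, below is a minimal PyTorch sketch of the two ingredients named above: nearest-neighbor pairs built offline from a training set, and a redundancy reduction loss (Zbontar et al., 2021) applied to a linear encoder. The dimensions, learning rate, off-diagonal weight, MLP projector head, and the `pairs` iterable are illustrative assumptions for this sketch, not values or components taken from the paper.

```python
# Minimal sketch, assuming PyTorch; hyperparameters and the projector
# head are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

def redundancy_reduction_loss(z1, z2, lambd=5e-3):
    # Loss of Zbontar et al. (2021): push the cross-correlation matrix
    # of the two views toward the identity matrix.
    n = z1.shape[0]
    z1 = (z1 - z1.mean(0)) / z1.std(0)   # standardize each dimension
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = (z1.T @ z2) / n                  # d x d cross-correlation
    diag = torch.diagonal(c)
    on_diag = (diag - 1).pow(2).sum()            # invariance term
    off_diag = c.pow(2).sum() - diag.pow(2).sum()  # redundancy term
    return on_diag + lambd * off_diag

# Linear encoder, since the focus is linear dimensionality reduction;
# the small MLP projector used during training is an assumption of
# this sketch and would be discarded after training.
encoder = nn.Linear(2048, 128)
projector = nn.Sequential(nn.Linear(128, 2048), nn.ReLU(),
                          nn.Linear(2048, 2048))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)

# `pairs` is assumed to yield (x, x_nn) batches, where x_nn is a nearest
# neighbor of x found in the offline (possibly approximate) k-NN step.
for x, x_nn in pairs:
    z1 = projector(encoder(x))
    z2 = projector(encoder(x_nn))
    loss = redundancy_reduction_loss(z1, z2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, only `encoder` would be kept, giving a single matrix multiplication at inference time, which is what makes the reduction as cheap to apply as PCA.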