In self-supervised representation learning, a common idea behind most of the state-of-the-art approaches is to enforce the robustness of the representations to predefined augmentations. A potential issue with this idea is the existence of completely collapsed solutions (i.e., constant features), which are typically avoided implicitly by carefully chosen implementation details. In this work, we study a relatively concise framework containing the most common components from recent approaches. We verify the existence of complete collapse and discover another reachable collapse pattern that is usually overlooked, namely dimensional collapse. We connect dimensional collapse with strong correlations between axes and regard this connection as a strong motivation for feature decorrelation (i.e., standardizing the covariance matrix). The gains from feature decorrelation are verified empirically to highlight the importance and the potential of this insight.
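As a minimal illustration of the link between dimensional collapse and strong inter-axis correlations, the sketch below estimates the correlation matrix of a feature batch and an off-diagonal penalty that vanishes when the standardized covariance is the identity. The helper names and the NumPy formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def feature_correlation(z):
    """Correlation matrix of a feature batch z with shape (N, D).

    Large off-diagonal entries indicate strongly correlated feature axes,
    the signature of dimensional collapse discussed above.
    """
    z = z - z.mean(axis=0, keepdims=True)        # center each dimension
    z = z / (z.std(axis=0, keepdims=True) + 1e-8)  # standardize each dimension
    return (z.T @ z) / z.shape[0]                # (D, D) correlation matrix

def decorrelation_penalty(z):
    """Sum of squared off-diagonal correlations (hypothetical helper).

    The penalty is zero exactly when the standardized covariance is the
    identity, i.e. when the feature axes are decorrelated.
    """
    c = feature_correlation(z)
    off_diag = c - np.diag(np.diag(c))
    return np.sum(off_diag ** 2)

# Usage: independent random features are nearly decorrelated,
# while rank-1 (dimensionally collapsed) features are not.
rng = np.random.default_rng(0)
z_healthy = rng.normal(size=(256, 8))
z_collapsed = rng.normal(size=(256, 1)) @ rng.normal(size=(1, 8))
print(decorrelation_penalty(z_healthy))    # small
print(decorrelation_penalty(z_collapsed))  # close to D*(D-1) = 56
```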