Contrastive learning is highly effective at learning useful representations without supervision. Yet contrastive learning has its limitations: it may learn a shortcut that is irrelevant to the downstream task and discard relevant information. Past work has addressed this limitation via custom data augmentations that eliminate the shortcut. However, this solution does not work for data modalities that are not interpretable by humans, e.g., radio signals. For such modalities, it is hard for a human to guess which shortcuts may exist in the signal or how they can be eliminated. Even for interpretable data, eliminating the shortcut may sometimes be undesirable: the shortcut may be irrelevant to one downstream task but important to another. In this case, it is desirable to learn a representation that captures both the shortcut information and the information relevant to the other downstream task. This paper presents information-preserving contrastive learning (IPCL), a new framework for unsupervised representation learning that preserves relevant information even in the presence of shortcuts. We empirically show that the representations learned by IPCL outperform those of contrastive learning in supporting different modalities and multiple diverse downstream tasks.