Contrastive Learning has recently received interest due to its success in self-supervised representation learning in the computer vision domain. However, the origins of Contrastive Learning date as far back as the 1990s, and its development has spanned many fields, including Metric Learning and natural language processing. In this paper we provide a comprehensive literature review and propose a general Contrastive Representation Learning framework that simplifies and unifies many different contrastive learning methods. We also provide a taxonomy for each of the components of contrastive learning in order to summarise the field and distinguish it from other forms of machine learning. We then discuss the inductive biases present in any contrastive learning system and analyse our framework from the perspectives of various sub-fields of Machine Learning. Examples of how contrastive learning has been applied in computer vision, natural language processing, audio processing, and other domains, as well as in Reinforcement Learning, are also presented. Finally, we discuss the challenges and some of the most promising future research directions.