Self-Supervised Learning (SSL) is a new paradigm for learning discriminative representations without labelled data, and has achieved results comparable to, or even exceeding, those of its supervised counterparts. Contrastive Learning (CL) is one of the most well-known SSL approaches, and attempts to learn general, informative representations of data. CL methods have been developed mostly for computer vision and natural language processing applications, where only a single sensor modality is used. The majority of pervasive computing applications, however, exploit data from a range of different sensor modalities. While existing CL methods are limited to learning from one or two data sources, we propose COCOA (Cross mOdality COntrastive leArning), a self-supervised model that employs a novel objective function to learn quality representations from multisensor data by computing the cross-correlation between different data modalities and minimizing the similarity between irrelevant instances. We evaluate the effectiveness of COCOA against eight recently introduced state-of-the-art self-supervised models and two supervised baselines across five public datasets. We show that COCOA achieves superior classification performance to all other approaches. Furthermore, COCOA is far more label-efficient than the other baselines, including the fully supervised model, when using only one-tenth of the available labelled data.
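The objective described above can be illustrated with a minimal numpy sketch: embeddings of the same instance across modalities are pulled together, while embeddings of different (irrelevant) instances are pushed apart. The function names, the cosine-similarity choice, and the exact loss form below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def cross_modal_contrastive_loss(embeddings, tau=0.1):
    """Illustrative cross-modality contrastive objective (assumption,
    not COCOA's exact formulation).

    embeddings: list of (batch, dim) arrays, one per sensor modality.
    The positive term rewards agreement between modalities on the
    same instance; the negative term penalizes similarity between
    different instances within each modality.
    """
    n_mod = len(embeddings)
    batch = embeddings[0].shape[0]
    pos, neg = 0.0, 0.0
    # Positive term: cross-modality agreement for every modality pair.
    for i in range(n_mod):
        for j in range(i + 1, n_mod):
            sim = cosine_sim(embeddings[i], embeddings[j])
            pos += np.exp(np.diag(sim) / tau).sum()
    # Negative term: within-modality similarity across instances.
    for i in range(n_mod):
        sim = cosine_sim(embeddings[i], embeddings[i])
        off_diag = sim[~np.eye(batch, dtype=bool)]
        neg += np.exp(off_diag / tau).sum()
    # Lower loss = aligned modalities, well-separated instances.
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
# Three hypothetical sensor modalities, batch of 8, 16-dim embeddings.
views = [rng.normal(size=(8, 16)) for _ in range(3)]
loss = cross_modal_contrastive_loss(views)
```

Under this sketch, perfectly aligned modalities (identical embeddings per instance) yield a lower loss than unrelated random embeddings, which is the behaviour the objective is meant to encourage.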