Self-supervised learning algorithms, including BERT and SimCLR, have enabled significant strides in fields like natural language processing, computer vision, and speech processing. However, these algorithms are domain-specific, meaning that new self-supervised learning algorithms must be developed for each new setting, including myriad healthcare, scientific, and multimodal domains. To catalyze progress toward domain-agnostic methods, we introduce DABS: a Domain-Agnostic Benchmark for Self-supervised learning. To perform well on DABS, an algorithm is evaluated on seven diverse domains: natural images, multichannel sensor data, English text, speech recordings, multilingual text, chest x-rays, and images with text descriptions. Each domain contains an unlabeled dataset for pretraining; the model is then scored based on its downstream performance on a set of labeled tasks in the domain. We also present e-Mix and ShED: two baseline domain-agnostic algorithms; their relatively modest performance demonstrates that significant progress is needed before self-supervised learning is an out-of-the-box solution for arbitrary domains. Code for benchmark datasets and baseline algorithms is available at https://github.com/alextamkin/dabs.