Bridging the exponentially growing gap between the numbers of unlabeled and labeled protein sequences, several studies have adopted semi-supervised learning for protein sequence modeling. In these studies, models are pre-trained on a large amount of unlabeled data, and the learned representations are transferred to various downstream tasks. However, most pre-training methods rely solely on language modeling and often exhibit limited performance. In this paper, we introduce a novel pre-training scheme called PLUS, which stands for Protein sequence representations Learned Using Structural information. PLUS consists of masked language modeling and a complementary, protein-specific pre-training task, namely same-family prediction. PLUS can be used to pre-train various model architectures; in this work, we use it to pre-train a bidirectional recurrent neural network and refer to the resulting model as PLUS-RNN. Our experimental results demonstrate that PLUS-RNN outperforms other models of similar size pre-trained solely with language modeling in six out of seven widely used protein biology tasks. Furthermore, we present qualitative interpretation analyses that illustrate the strengths of PLUS-RNN. PLUS provides a novel way to exploit evolutionary relationships among unlabeled proteins and is broadly applicable across a variety of protein biology tasks. As the gap between the numbers of unlabeled and labeled proteins continues to grow exponentially, we expect the proposed pre-training method to play an increasingly important role.
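To make the two pre-training objectives concrete, the following is a minimal sketch of a PLUS-style model, assuming a PyTorch implementation. The class and head names (PLUSRNNSketch, mlm_head, sfp_head), layer sizes, and pooling choice are illustrative assumptions, not details taken from the paper; it only shows how a bidirectional RNN encoder can be shared between a per-residue masked language modeling head and a per-pair same-family prediction head.

```python
# Minimal sketch (assumed PyTorch implementation; names and sizes are illustrative).
import torch
import torch.nn as nn


class PLUSRNNSketch(nn.Module):
    """Bidirectional RNN encoder with two pre-training heads:
    masked language modeling (per residue) and same-family prediction
    (per sequence pair)."""

    def __init__(self, vocab_size=26, embed_dim=128, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                               batch_first=True, bidirectional=True)
        # Predict the identity of masked residues from their contextual states.
        self.mlm_head = nn.Linear(2 * hidden_dim, vocab_size)
        # Predict whether two sequences belong to the same protein family.
        self.sfp_head = nn.Linear(4 * hidden_dim, 2)

    def encode(self, tokens):
        hidden, _ = self.encoder(self.embed(tokens))  # (batch, length, 2*hidden_dim)
        return hidden

    def forward(self, tokens_a, tokens_b):
        h_a, h_b = self.encode(tokens_a), self.encode(tokens_b)
        # Per-residue logits for the masked language modeling objective.
        mlm_logits_a = self.mlm_head(h_a)
        mlm_logits_b = self.mlm_head(h_b)
        # Mean-pooled sequence representations for same-family prediction.
        pooled = torch.cat([h_a.mean(dim=1), h_b.mean(dim=1)], dim=-1)
        sfp_logits = self.sfp_head(pooled)
        return mlm_logits_a, mlm_logits_b, sfp_logits
```

Under these assumptions, pre-training would combine a cross-entropy loss over the masked-residue logits with a binary cross-entropy loss over the same-family logits, with family membership of unlabeled sequences providing the pairwise labels.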