Protein representation learning methods have shown great potential to yield useful representations for many downstream tasks, especially protein classification. Moreover, a few recent studies have shown great promise in addressing the scarcity of protein labels with self-supervised learning methods. However, existing protein language models are usually pretrained on protein sequences without considering the important protein structural information. To this end, we propose a novel structure-aware protein self-supervised learning method to effectively capture the structural information of proteins. In particular, a well-designed graph neural network (GNN) model is pretrained to preserve protein structural information with self-supervised tasks from a pairwise residue distance perspective and a dihedral angle perspective, respectively. Furthermore, we propose to leverage an available protein language model pretrained on protein sequences to enhance the self-supervised learning. Specifically, we identify the relation between the sequential information in the protein language model and the structural information in the specially designed GNN model via a novel pseudo bi-level optimization scheme. Experiments on several supervised downstream tasks verify the effectiveness of our proposed method. The code of the proposed method is available at \url{https://github.com/GGchen1997/STEPS_Bioinformatics}.
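As a minimal sketch of the two structural signals the abstract mentions, the snippet below computes a pairwise residue distance matrix and a backbone torsion (dihedral) angle from 3D coordinates with numpy. This is only an illustration of the underlying geometry, assuming per-residue C-alpha coordinates; the actual self-supervised pretraining tasks and GNN architecture are defined in the paper and code repository.

```python
import numpy as np

def pairwise_distances(coords):
    """Pairwise Euclidean distances between residues.

    coords: (N, 3) array, one C-alpha coordinate per residue (assumption
    for illustration). Returns an (N, N) distance matrix.
    """
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def dihedral(p0, p1, p2, p3):
    """Torsion angle (radians) defined by four consecutive atoms.

    Uses the standard atan2 formulation: project the outer bond vectors
    onto the plane orthogonal to the central bond, then measure the
    signed angle between the projections.
    """
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)          # unit vector along central bond
    v = b0 - np.dot(b0, b1) * b1          # b0 projected off b1
    w = b2 - np.dot(b2, b1) * b1          # b2 projected off b1
    return np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w))
```

In a pretraining setup of this kind, quantities like these would serve as self-supervised targets that the GNN is trained to predict from the graph representation of the protein.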