Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impacts. Including 3D molecular structure as input to learned models improves their performance for many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between 3D summary vectors and the representations of a Graph Neural Network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE reduction on eight quantum mechanical properties. Moreover, the learned representations can be effectively transferred between datasets in different molecular spaces.
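Below is a minimal, hypothetical sketch of the kind of objective described above: an InfoNCE-style contrastive loss that maximizes mutual information between the 2D GNN representations and the 3D summary vectors of the same molecules. The function name, batch layout, and the use of random tensors in place of encoder outputs are illustrative assumptions, not the paper's exact architecture or loss.

```python
# Sketch (assumed PyTorch setup): contrastive mutual-information objective
# between paired 2D and 3D molecule embeddings. Row i of z_2d and row i of
# z_3d are assumed to come from the same molecule.
import torch
import torch.nn.functional as F

def info_nce_loss(z_2d: torch.Tensor, z_3d: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z_2d, z_3d: [batch, dim] embeddings; the matching 3D vector is the positive."""
    z_2d = F.normalize(z_2d, dim=-1)
    z_3d = F.normalize(z_3d, dim=-1)
    logits = z_2d @ z_3d.t() / tau        # cosine similarities of all 2D-3D pairs
    targets = torch.arange(z_2d.size(0))  # diagonal entries are the positive pairs
    # Symmetric cross-entropy: each 2D embedding should identify its own 3D summary and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage with random embeddings standing in for the 2D GNN and 3D network outputs.
z2 = torch.randn(32, 128, requires_grad=True)
z3 = torch.randn(32, 128, requires_grad=True)
loss = info_nce_loss(z2, z3)
loss.backward()
```

After pre-training with such an objective, only the 2D encoder is kept and fine-tuned on molecules whose geometry is unknown, as described in the abstract.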