This paper presents ExplainableFold, an explainable AI framework for protein structure prediction. Despite the success of AI-based methods such as AlphaFold in this field, the underlying reasons for their predictions remain unclear due to the black-box nature of deep learning models. To address this, we propose a counterfactual learning framework inspired by biological principles to generate counterfactual explanations for protein structure prediction, enabling a dry-lab experimentation approach. Our experimental results demonstrate the ability of ExplainableFold to generate high-quality explanations for AlphaFold's predictions, providing near-experimental understanding of the effects of amino acids on 3D protein structure. This framework has the potential to facilitate a deeper understanding of protein structures.