Explainable AI (XAI) is a research area whose objective is to increase trust in, and shed light on, the hidden mechanisms of opaque machine learning techniques. This becomes increasingly important when such models are applied to the chemistry domain, given their potential impact on human health, e.g., toxicity analysis in pharmacology. In this paper, we present a novel approach to tackle the explainability of deep graph networks in the context of molecule property prediction tasks, named MEG (Molecular Explanation Generator). We generate informative counterfactual explanations for a specific prediction in the form of (valid) compounds with high structural similarity to the input molecule but different predicted properties. Given a trained DGN, we train a reinforcement-learning-based generator to output counterfactual explanations. At each step, MEG feeds the current candidate counterfactual into the DGN, collects the prediction, and uses it to reward the RL agent so as to guide the exploration. Furthermore, we restrict the action space of the agent to keep only actions that maintain the molecule in a valid state. We discuss the results, showing how the model can provide non-ML experts with key insights into what the learned model focuses on in the neighbourhood of a molecule.
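To make the loop described above concrete, the following is a minimal, pure-Python sketch of the counterfactual search, not MEG's actual implementation: `dgn_predict`, `similarity`, and `valid_actions` are hypothetical stubs, and a simple greedy search stands in for the reinforcement-learning agent that MEG trains.

```python
# Hypothetical sketch of MEG's counterfactual search loop.
# All components below are stand-ins: the real system uses a trained DGN,
# chemically valid graph edits, and an RL policy optimised on the reward.

def dgn_predict(mol):
    """Stand-in for the trained (frozen) deep graph network:
    returns a scalar property prediction for a molecule."""
    return (hash(mol) % 1000) / 1000.0

def similarity(mol_a, mol_b):
    """Stand-in structural similarity in [0, 1] (e.g. a fingerprint-based
    measure in the real system); stubbed with fragment overlap here."""
    a, b = set(mol_a.split("-")), set(mol_b.split("-"))
    return len(a & b) / max(len(a | b), 1)

def valid_actions(mol):
    """Return only actions that keep the candidate in a valid state.
    MEG enforces this by masking the agent's action space; here we just
    append or drop placeholder fragments."""
    actions = [mol + "-C", mol + "-O", mol + "-N"]
    if "-" in mol:
        actions.append(mol.rsplit("-", 1)[0])  # remove the last fragment
    return actions

def reward(original, candidate):
    """Reward changing the DGN prediction while staying structurally
    close to the original molecule."""
    prediction_gap = abs(dgn_predict(candidate) - dgn_predict(original))
    return prediction_gap + 0.5 * similarity(original, candidate)

def generate_counterfactual(original, steps=20):
    """Greedy stand-in for the RL agent: at each step, query the DGN on
    every valid successor and keep the one with the highest reward."""
    candidate = original
    for _ in range(steps):
        candidate = max(valid_actions(candidate),
                        key=lambda m: reward(original, m))
    return candidate

if __name__ == "__main__":
    mol = "c1ccccc1"  # placeholder SMILES-like string for the input molecule
    print("counterfactual:", generate_counterfactual(mol))
```

In the actual method the reward signal is used to update the generator's policy rather than to drive a greedy search, but the interaction pattern, query the DGN, score the candidate, restrict edits to valid ones, is the same as sketched here.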