Large language models (LMs) such as GPT-4 are highly capable and can handle a wide range of natural language processing (NLP) tasks. However, their multi-layer nonlinear structure and vast number of parameters make their outputs difficult to interpret. Without an understanding of how a model reaches its decisions, it can be unreliable and even dangerous for everyday users in real-world scenarios. Most recent work exploits attention weights to explain model predictions. However, purely attention-based explanations cannot keep pace with the growing complexity of these models and cannot reason about their decision-making processes. We therefore propose LMExplainer, a knowledge-enhanced interpretation module for language models that provides human-understandable explanations. We use a knowledge graph (KG) and a graph attention network to extract the key decision signals of the LM. We further explore whether interpretation can also help the model itself understand the task better. Our experimental results show that LMExplainer outperforms existing LM+KG methods on CommonsenseQA and OpenBookQA. We also compare our explanations with those of generative explanation methods and with human-annotated explanations; the comparison shows that our method provides more comprehensive and clearer explanations. LMExplainer demonstrates the potential to enhance model performance and to furnish natural-language explanations of models' reasoning processes.
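To make the KG-plus-graph-attention idea concrete, the sketch below shows a minimal single-head graph attention layer over knowledge-graph node embeddings, where the learned attention scores over neighboring nodes are the kind of signal that can be read off as "key decision signals." This is an illustration under assumed names and dimensions, not the LMExplainer implementation; the function `graph_attention`, the toy adjacency matrix, and all shapes are hypothetical.

```python
# Minimal sketch (assumed, not the paper's code): single-head graph attention
# over KG node embeddings; the returned alpha matrix gives per-node attention
# over neighbors, which can be inspected as decision signals.
import numpy as np

def graph_attention(node_feats, adj, W, a):
    """node_feats: (N, F) KG node embeddings; adj: (N, N) adjacency mask;
    W: (F, F') projection; a: (2*F',) attention vector."""
    h = node_feats @ W                                  # project nodes, (N, F')
    n = h.shape[0]
    # pairwise attention logits e_ij = LeakyReLU(a^T [h_i || h_j])
    e = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e[i, j] = np.concatenate([h[i], h[j]]) @ a
    e = np.where(e > 0, e, 0.2 * e)                     # LeakyReLU
    e = np.where(adj > 0, e, -1e9)                      # mask non-edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)    # softmax over neighbors
    return alpha @ h, alpha                             # updated nodes, scores

# Toy usage: 4 KG nodes with 8-dim features; each row of alpha shows which
# neighbors a node attends to most strongly.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
adj = np.array([[1, 1, 0, 1],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [1, 0, 1, 1]], dtype=float)
W = rng.normal(size=(8, 8))
a = rng.normal(size=(16,))
_, alpha = graph_attention(x, adj, W, a)
print(np.round(alpha, 3))
```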