Large language models (LLMs) increasingly produce natural language explanations, yet these explanations are often unfaithful: they do not reliably reflect the evidence the model actually uses to reach its predictions. We introduce FaithLM, a model-agnostic framework that evaluates and improves the faithfulness of LLM explanations without token masking or task-specific heuristics. FaithLM formalizes explanation faithfulness as an intervention property: a faithful explanation should induce a prediction shift when its content is contradicted. Theoretical analysis shows that the resulting contrary-hint score is a sound and discriminative estimator of faithfulness. Building on this principle, FaithLM iteratively refines both the elicitation prompt and the explanation to maximize the measured score. Experiments on three multi-domain datasets and multiple LLM backbones demonstrate that FaithLM consistently increases faithfulness and yields explanations that align more closely with human rationales than strong self-explanation baselines. These findings highlight that intervention-based evaluation, coupled with iterative optimization, provides a principled route toward faithful and reliable LLM explanations.
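To make the intervention property concrete, the sketch below illustrates one way a contrary-hint evaluation and refinement loop could be implemented against a black-box LLM. It is a minimal illustration under stated assumptions, not the paper's released implementation: the helper callables (`query_llm`, `negate_explanation`, `rewrite_explanation`), the prompt template, and the binary flip-based score are all hypothetical simplifications of the contrary-hint score and iterative optimization described above.

```python
# Illustrative sketch of intervention-based faithfulness checking.
# All helper names and prompt templates here are assumptions for exposition,
# not FaithLM's actual implementation.

from typing import Callable


def contrary_hint_score(
    query_llm: Callable[[str], str],          # black-box LLM: prompt -> predicted answer
    question: str,
    prediction: str,                          # the model's original answer
    explanation: str,                         # the explanation being evaluated
    negate_explanation: Callable[[str], str], # e.g. an LLM call that contradicts the explanation
) -> float:
    """Return 1.0 if contradicting the explanation shifts the prediction, else 0.0.

    Intuition: a faithful explanation encodes evidence the model relied on,
    so injecting its contradiction as a hint should change the answer.
    """
    contrary_hint = negate_explanation(explanation)
    hinted_prompt = (
        f"{question}\n"
        f"Hint: {contrary_hint}\n"
        "Answer:"
    )
    hinted_prediction = query_llm(hinted_prompt)
    return float(hinted_prediction.strip() != prediction.strip())


def refine_explanation(
    query_llm: Callable[[str], str],
    question: str,
    prediction: str,
    explanation: str,
    negate_explanation: Callable[[str], str],
    rewrite_explanation: Callable[[str, str, str], str],  # proposes a revised explanation
    n_rounds: int = 3,
):
    """Greedy stand-in for the iterative optimization: keep the candidate
    explanation with the highest contrary-hint score."""
    best, best_score = explanation, contrary_hint_score(
        query_llm, question, prediction, explanation, negate_explanation
    )
    for _ in range(n_rounds):
        candidate = rewrite_explanation(question, prediction, best)
        score = contrary_hint_score(
            query_llm, question, prediction, candidate, negate_explanation
        )
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

In this toy version the score is a 0/1 flip indicator; a shift in the model's answer distribution or confidence could be substituted without changing the overall loop.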