Explainable artificial intelligence (XAI) enables data-driven understanding of how factors are associated with response variables, yet communicating XAI outputs to laypersons remains challenging, hindering trust in AI-based predictions. Large language models (LLMs) have emerged as promising tools for translating technical explanations into accessible narratives, but the integration of agentic AI, in which LLMs operate as autonomous agents through iterative refinement, with XAI remains unexplored. This study proposes an agentic XAI framework that combines SHAP-based explainability with multimodal LLM-driven iterative refinement to generate progressively enhanced explanations. As a use case, we tested the framework as an agricultural recommendation system using rice yield data from 26 fields in Japan. The agentic XAI system started from a SHAP-based explanation and then iteratively explored additional analyses to improve it across 11 refinement rounds (Rounds 0-10). Explanations were evaluated by human experts (crop scientists, n=12) and LLM evaluators (n=14) against seven metrics: Specificity, Clarity, Conciseness, Practicality, Contextual Relevance, Cost Consideration, and Crop Science Credibility. Both evaluator groups confirmed that the framework enhanced recommendation quality, with average scores increasing by 30-33% from Round 0 and peaking at Rounds 3-4. However, excessive refinement caused a substantial drop in recommendation quality. Metric-specific analysis revealed a bias-variance trade-off: early rounds lacked explanation depth (bias), whereas excessive iteration introduced verbosity and ungrounded abstraction (variance). These findings suggest that strategic early stopping (regularization) is needed to optimize practical utility, challenging assumptions of monotonic improvement and providing evidence-based design principles for agentic XAI systems.
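A minimal sketch of how such a SHAP-plus-LLM refinement loop with early stopping could be structured is shown below. It assumes a tree-based yield model, a pandas DataFrame of field-level covariates, and the shap and scikit-learn libraries; `llm_refine` and `llm_score` are hypothetical placeholders standing in for the multimodal LLM refinement call and the LLM-as-judge scoring on the seven metrics, neither of which is specified in the abstract. This is an illustrative sketch under those assumptions, not the authors' implementation.

```python
# Sketch of an agentic XAI refinement loop with early stopping (assumptions noted above).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

METRICS = ["Specificity", "Clarity", "Conciseness", "Practicality",
           "Contextual Relevance", "Cost Consideration", "Crop Science Credibility"]

def llm_refine(explanation: str, shap_summary: str) -> str:
    """Hypothetical call to a multimodal LLM that rewrites the explanation."""
    raise NotImplementedError

def llm_score(explanation: str) -> dict:
    """Hypothetical LLM-as-judge scoring of one explanation on the seven metrics."""
    raise NotImplementedError

def run_agentic_xai(X, y, max_rounds: int = 10, patience: int = 2):
    # Round 0: fit a yield model and summarize feature effects via SHAP.
    model = RandomForestRegressor(random_state=0).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)
    importance = dict(zip(X.columns, np.abs(shap_values).mean(axis=0)))
    shap_summary = ", ".join(f"{k}: {v:.3f}" for k, v in importance.items())

    explanation = f"Top yield factors by mean |SHAP|: {shap_summary}"
    history = [explanation]
    best_score, best_round, stale = -np.inf, 0, 0

    # Rounds 1..max_rounds: agentic refinement with early stopping on the mean metric score.
    for r in range(1, max_rounds + 1):
        explanation = llm_refine(explanation, shap_summary)
        score = np.mean([llm_score(explanation)[m] for m in METRICS])
        history.append(explanation)
        if score > best_score:
            best_score, best_round, stale = score, r, 0
        else:
            stale += 1
        if stale >= patience:  # strategic early stopping (regularization)
            break
    return history[best_round], best_round
```

In this sketch the `patience` parameter encodes the early-stopping idea from the abstract: refinement halts once the averaged metric score stops improving, so a quality peak around Rounds 3-4 would terminate the loop before verbosity and ungrounded abstraction accumulate.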


