Current approaches to Explainable AI (XAI) face a "Scalability-Stability Dilemma." Post-hoc methods (e.g., LIME, SHAP) scale easily but suffer from instability, while supervised explanation frameworks (e.g., TED) offer stability but require prohibitive human effort to label every training instance. This paper proposes a Hybrid LRR-TED framework that addresses this dilemma through a novel "Asymmetry of Discovery." Applying the framework to customer churn prediction, we demonstrate that automated rule learners (GLRM) excel at identifying broad "Safety Nets" (retention patterns) but struggle to capture specific "Risk Traps" (churn triggers), a phenomenon we term the Anna Karenina Principle of Churn. By initialising the explanation matrix with automated safety rules and augmenting it with a Pareto-optimal set of just four human-defined risk rules, our approach achieves 94.00% predictive accuracy. This configuration outperforms the full 8-rule manual expert baseline while reducing human annotation effort by 50%, suggesting a paradigm shift for Human-in-the-Loop AI: experts move from the role of "Rule Writers" to that of "Exception Handlers."
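As a rough illustration of this hybrid labelling scheme, the sketch below combines automatically learned "safety net" rules with a small set of manually written "risk trap" rules to produce TED-style (prediction, explanation) pairs for a churn record. All feature names, thresholds, and rule bodies are invented for the example; they are not the paper's actual GLRM output or expert rule set.

```python
# Minimal sketch of the hybrid rule-combination idea, under assumptions:
# the rule names, features, and thresholds below are hypothetical stand-ins.

def automated_safety_rules(x):
    """Broad 'safety net' (retention) rules, standing in for rules a
    GLRM-style learner might discover automatically."""
    rules = {
        "long_tenure": x["tenure_months"] >= 24,
        "two_year_contract": x["contract"] == "two_year",
    }
    return [name for name, fired in rules.items() if fired]

def human_risk_rules(x):
    """Small expert-written set of 'risk trap' (churn trigger) rules,
    playing the role of the paper's four Pareto-optimal manual rules."""
    rules = {
        "price_shock": x["monthly_charge_increase_pct"] > 20,
        "recent_complaints": x["support_tickets_30d"] >= 2,
        "contract_expiring": x["days_to_contract_end"] <= 30,
        "competitor_offer": x["received_competitor_offer"],
    }
    return [name for name, fired in rules.items() if fired]

def ted_style_label(x):
    """Return a (prediction, explanation) pair in the spirit of TED:
    risk rules dominate and explain churn, safety rules explain retention,
    and a fallback bucket marks unexplained retention."""
    risks = human_risk_rules(x)
    if risks:
        return 1, risks          # churn, explained by risk traps
    safety = automated_safety_rules(x)
    if safety:
        return 0, safety         # retain, explained by safety nets
    return 0, ["no_rule_fired"]  # default: retain, unexplained

# Example customer record (hypothetical feature names and values).
customer = {
    "tenure_months": 30,
    "contract": "two_year",
    "monthly_charge_increase_pct": 25,
    "support_tickets_30d": 0,
    "days_to_contract_end": 400,
    "received_competitor_offer": False,
}
print(ted_style_label(customer))  # -> (1, ['price_shock'])
```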


