NLP models are susceptible to learning spurious biases (i.e., bugs) that work on some datasets but do not properly reflect the underlying task. Explanation-based model debugging aims to resolve spurious biases by showing human users explanations of model behavior, asking users to give feedback on the behavior, then using the feedback to update the model. While existing model debugging methods have shown promise, their prototype-level implementations provide limited practical utility. Thus, we propose XMD: the first open-source, end-to-end framework for explanation-based model debugging. Given task- or instance-level explanations, users can flexibly provide various forms of feedback via an intuitive, web-based UI. After receiving user feedback, XMD automatically updates the model in real time by regularizing the model so that its explanations align with the user feedback. The new model can then be easily deployed into real-world applications via Hugging Face. Using XMD, we can improve the model's OOD performance on text classification tasks by up to 18%.
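The explanation-regularization idea above can be illustrated with a toy sketch. This is not XMD's actual implementation; it is a hypothetical minimal example using a logistic-regression "model" over bag-of-words features, where the attribution of token *i* is taken to be input × gradient (here, `x_i * w_i`), and user feedback flags token indices whose attributions should be suppressed:

```python
import numpy as np

# Illustrative sketch of explanation regularization (names and setup are
# assumptions, not XMD's API): train a logistic-regression model while
# penalizing squared attributions (x_j * w_j) on user-flagged token indices.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, flagged, lam=1.0, lr=0.5, steps=200):
    """Minimize cross-entropy + lam * mean squared attribution on flagged tokens."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_task = X.T @ (p - y) / n              # cross-entropy gradient
        grad_reg = np.zeros(d)
        for j in flagged:                          # d/dw_j of mean (x_j * w_j)^2
            grad_reg[j] = 2.0 * np.sum((X[:, j] ** 2) * w[j]) / n
        w -= lr * (grad_task + lam * grad_reg)
    return w

# Toy data: token 0 is a spurious shortcut perfectly correlated with the
# label; token 1 carries the same signal. Flagging token 0 as spurious
# shifts the model's weight (and hence its attributions) onto token 1.
X = np.array([[1.0, 1.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w_plain = train(X, y, flagged=[])
w_debugged = train(X, y, flagged=[0])
```

With identical feature columns, the unregularized model splits its weight evenly across both tokens; adding the feedback-driven penalty drives the flagged token's weight (and attribution) toward zero while the genuine token compensates.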