Training a model with access to human explanations can improve data efficiency and model performance on in- and out-of-domain data. Adding to these empirical findings, its similarity to the process of human learning makes learning from explanations a promising way to establish a fruitful human-machine interaction. Several methods have been proposed for improving natural language processing (NLP) models with human explanations, which rely on different explanation types and mechanisms for integrating these explanations into the learning process. These methods are rarely compared with each other, making it hard for practitioners to choose the best combination of explanation type and integration mechanism for a specific use case. In this paper, we give an overview of different methods for learning from human explanations, and discuss different factors that can inform the decision of which method to choose for a specific use case.