Artificial intelligence and machine learning algorithms have become ubiquitous. Although they offer a wide range of benefits, their adoption in decision-critical fields is limited by their lack of interpretability, particularly with textual data. Moreover, with more data available than ever before, it has become increasingly important to explain automated predictions. Users generally find it difficult to understand the underlying computational processes and to interact with the models, especially when a model fails to produce correct outcomes, correct explanations, or both. This problem highlights the growing need for users to better understand a model's inner workings and to gain control over its actions. This dissertation focuses on two fundamental challenges in addressing this need. The first is explanation generation: inferring high-quality explanations from text documents in a scalable and data-driven manner. The second is making explanations actionable, which we refer to as critiquing. This dissertation examines two important applications: natural language processing and recommendation tasks. Overall, we demonstrate that interpretability does not come at the cost of performance in these two consequential applications, and our framework is applicable to other fields as well. This dissertation thus presents an effective means of closing the gap between promise and practice in artificial intelligence.