Despite the recent successes of transformer-based models in terms of effectiveness on a variety of tasks, their decisions often remain opaque to humans. Explanations are particularly important for tasks like offensive language or toxicity detection on social media because a manual appeal process is often in place to dispute automatically flagged content. In this work, we propose a technique to improve the interpretability of these models, based on a simple and powerful assumption: a post is at least as toxic as its most toxic span. We incorporate this assumption into transformer models by scoring a post based on the maximum toxicity of its spans and augmenting the training process to identify correct spans. We find this approach effective, producing explanations that, according to a human study, exceed the quality of those provided by Logistic Regression analysis (often regarded as a highly interpretable model).
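To make the scoring idea concrete, the following is a minimal sketch (not the authors' implementation) of the "max over spans" assumption in PyTorch: each token receives a toxicity score, the post score is the maximum token score (so the post is predicted to be at least as toxic as its most toxic span), and an optional token-level loss term stands in for the span supervision mentioned above. The stand-in encoder, vocabulary size, and loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaxSpanToxicityScorer(nn.Module):
    """Sketch: post toxicity = max over per-token toxicity scores."""
    def __init__(self, vocab_size=30522, hidden=128):
        super().__init__()
        # Stand-in for a transformer encoder; any encoder producing
        # (batch, seq, hidden) representations would fit here.
        self.encoder = nn.Sequential(
            nn.Embedding(vocab_size, hidden),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        self.token_scorer = nn.Linear(hidden, 1)  # per-token toxicity logit

    def forward(self, token_ids, attention_mask):
        hidden = self.encoder(token_ids)                      # (batch, seq, hidden)
        token_logits = self.token_scorer(hidden).squeeze(-1)  # (batch, seq)
        # Padding must not become the "most toxic span".
        token_logits = token_logits.masked_fill(attention_mask == 0, float("-inf"))
        post_logits = token_logits.max(dim=1).values          # post score = max span score
        return post_logits, token_logits

# Toy usage: post-level loss on the max score, plus a token-level term
# (hypothetical span annotation) nudging the model toward the correct spans.
model = MaxSpanToxicityScorer()
token_ids = torch.randint(0, 30522, (2, 8))
mask = torch.ones(2, 8, dtype=torch.long)
post_labels = torch.tensor([1.0, 0.0])
span_labels = torch.zeros(2, 8)
span_labels[0, 3] = 1.0  # toxic span in the first (toxic) post

post_logits, token_logits = model(token_ids, mask)
post_loss = nn.functional.binary_cross_entropy_with_logits(post_logits, post_labels)
span_loss = nn.functional.binary_cross_entropy_with_logits(token_logits[0], span_labels[0])
loss = post_loss + 0.5 * span_loss  # illustrative weighting of span supervision
loss.backward()
```

Because the post score is a hard maximum over span scores, the post-level prediction is directly attributable to the highest-scoring span, which is what makes the resulting explanation easy to read off the model.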