We present a novel feature attribution method for explaining text classifiers, and analyze it in the context of hate speech detection. Although feature attribution models usually provide a single importance score for each token, we instead provide two complementary and theoretically grounded scores -- necessity and sufficiency -- resulting in more informative explanations. We propose a transparent method that calculates these values by generating explicit perturbations of the input text, allowing the importance scores themselves to be explainable. We employ our method to explain the predictions of different hate speech detection models on the same set of curated examples from a test suite, and show that different values of necessity and sufficiency for identity terms correspond to different kinds of false positive errors, exposing sources of classifier bias against marginalized groups.
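As a rough illustration of the perturbation idea described above -- not the paper's actual estimator -- the sketch below computes a single-deletion necessity score and a single-token sufficiency score for each token against a toy keyword classifier. The classifier, the function names, and the example sentence are all hypothetical; the real method would aggregate over many generated perturbations of the input text.

```python
def classify(tokens):
    """Toy stand-in classifier: predicts 1 (hateful) iff 'stupid' appears.
    A real hate speech model would replace this function."""
    return 1 if "stupid" in tokens else 0


def necessity(tokens, i):
    """Does deleting token i flip the prediction?
    (Single-deletion approximation of necessity.)"""
    base = classify(tokens)
    perturbed = tokens[:i] + tokens[i + 1:]  # remove token i
    return 1.0 if classify(perturbed) != base else 0.0


def sufficiency(tokens, i):
    """Does keeping only token i (deleting all context) preserve the prediction?
    (Single-token approximation of sufficiency.)"""
    base = classify(tokens)
    return 1.0 if classify([tokens[i]]) == base else 0.0


tokens = ["you", "are", "stupid"]
for i, tok in enumerate(tokens):
    print(tok, necessity(tokens, i), sufficiency(tokens, i))
```

Under this toy classifier, only the keyword token is both necessary (deleting it flips the prediction) and sufficient (it alone reproduces the prediction); the abstract's point is that identity terms with different necessity/sufficiency profiles signal different kinds of false positive behavior.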