There is an increasing demand for algorithms to explain their outcomes. So far, no method exists that explains the rankings produced by a ranking algorithm. To address this gap, we propose LISTEN, a LISTwise ExplaiNer, which explains the rankings produced by a ranking algorithm. To use LISTEN efficiently in production, we train a neural network to learn the underlying explanation space created by LISTEN; we call this model Q-LISTEN. We show that LISTEN produces faithful explanations and that Q-LISTEN is able to learn these explanations. Moreover, we show that LISTEN is safe to use in a real-world environment: users of a news recommendation system do not behave significantly differently when they are exposed to explanations generated by LISTEN instead of manually generated explanations.