When conducting user studies to ascertain the usefulness of model explanations in aiding human decision-making, it is important to use real-world use cases, data, and users. However, this process can be resource-intensive, allowing only a limited number of explanation methods to be evaluated. Simulated user evaluations (SimEvals), which use machine learning models as a proxy for human users, have been proposed as an intermediate step to select promising explanation methods. In this work, we conduct the first SimEvals on a real-world use case to evaluate whether explanations can better support ML-assisted decision-making in e-commerce fraud detection. We study whether SimEvals can corroborate findings from a user study conducted in this fraud detection context. In particular, we find that SimEvals suggest that all considered explainers are equally performant, and none beat a baseline without explanations -- this matches the conclusions of the original user study. Such correspondences between our results and the original user study provide initial evidence in favor of using SimEvals before running user studies. We also explore the use of SimEvals as a cheap proxy to explore an alternative user study set-up. We hope that this work motivates further study of when and how SimEvals should be used to aid in the design of real-world evaluations.
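To make the SimEval idea concrete, below is a minimal, hedged sketch (not the paper's actual pipeline) of how a simulated user evaluation can be set up: a learned "agent" model stands in for a human participant, is trained on the information a participant would see (case features, the task model's prediction, and optionally an explanation), and its held-out accuracy serves as a proxy for how useful that information would be to a person. The dataset, the gradient-boosted task model, and the linear-surrogate "explanation" are all hypothetical placeholders chosen for illustration.

```python
# Minimal SimEval sketch (illustrative assumptions, not the paper's exact setup).
import numpy as np
from sklearn.datasets import make_classification   # stand-in for real fraud data
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for an e-commerce fraud dataset (hypothetical placeholder).
X, y = make_classification(n_samples=4000, n_features=20, n_informative=8, random_state=0)
X_task, X_sim, y_task, y_sim = train_test_split(X, y, test_size=0.5, random_state=0)

# 1. Train the task model whose predictions/explanations are being evaluated.
task_model = GradientBoostingClassifier(random_state=0).fit(X_task, y_task)

# 2. Build "observations": what a study participant would see for each case.
preds = task_model.predict_proba(X_sim)[:, 1].reshape(-1, 1)

# Toy feature-attribution "explanation": per-feature contributions from a linear
# surrogate model (a crude stand-in for, e.g., SHAP values).
surrogate = LogisticRegression(max_iter=1000).fit(X_task, task_model.predict(X_task))
explanations = X_sim * surrogate.coef_  # elementwise contribution per feature

obs_no_expl   = np.hstack([X_sim, preds])                # baseline condition
obs_with_expl = np.hstack([X_sim, preds, explanations])  # explanation condition

# 3. Train agent models (the simulated users) to predict the true label, and
#    compare held-out accuracy across conditions.
for name, obs in [("no explanation", obs_no_expl), ("with explanation", obs_with_expl)]:
    o_tr, o_te, y_tr, y_te = train_test_split(obs, y_sim, test_size=0.3, random_state=0)
    agent = GradientBoostingClassifier(random_state=0).fit(o_tr, y_tr)
    print(f"SimEval agent accuracy ({name}): {accuracy_score(y_te, agent.predict(o_te)):.3f}")
```

In this sketch, an explanation method would be deemed promising if the agent in the explanation condition achieves meaningfully higher held-out accuracy than the baseline agent; in the work summarized above, no such gap was observed for any of the considered explainers.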