When conducting user studies to ascertain the usefulness of model explanations in aiding human decision-making, it is important to use real-world use cases, data, and users. However, this process can be resource-intensive, allowing only a limited number of explanation methods to be evaluated. Simulated user evaluations (SimEvals), which use machine learning models as a proxy for human users, have been proposed as an intermediate step to select promising explanation methods. In this work, we conduct the first SimEvals on a real-world use case to evaluate whether explanations can better support ML-assisted decision-making in e-commerce fraud detection. We study whether SimEvals can corroborate findings from a user study conducted in this fraud detection context. In particular, we find that SimEvals suggest that all considered explainers are equally performant, and that none beats a baseline without explanations; this matches the conclusions of the original user study. Such correspondences between our results and the original user study provide initial evidence in favor of using SimEvals before running user studies. We also use SimEvals as a cheap proxy to explore an alternative user study set-up. We hope that this work motivates further study of when and how SimEvals should be used to aid the design of real-world evaluations.