Algorithm aversion occurs when humans are reluctant to use algorithms even when algorithms outperform human judgment. Studies show that giving users outcome control, i.e., agency over how a model's predictions are incorporated into decision-making, mitigates algorithm aversion. We study whether algorithm aversion is also mitigated by process control, wherein users decide which input factors and algorithms are used in model training. We conduct a replication study of outcome control and test novel process control conditions on Amazon Mechanical Turk (MTurk) and Prolific. Our results partly confirm prior findings on the mitigating effects of outcome control, while also highlighting reproducibility challenges. We find that process control in the form of choosing the training algorithm mitigates algorithm aversion, but changing the inputs does not. Furthermore, giving users both outcome and process control does not reduce algorithm aversion more than either form of control alone. This study contributes design considerations for mitigating algorithm aversion.