People are known to judge artificial intelligence using a utilitarian moral philosophy and humans using a moral philosophy emphasizing perceived intentions. But why do people judge humans and machines differently? Psychology suggests that people may hold different mind perception models for humans and machines, and thus will treat human-like robots more similarly to the way they treat humans. Here we present a randomized experiment in which we manipulated people's perception of machines to explore whether people judge more human-like machines more similarly to the way they judge humans. We find that people's judgments of machines become more similar to their judgments of humans when they perceive machines as having more agency (e.g., the ability to plan and act), but not more experience (e.g., the ability to feel). Our findings indicate that people's use of different moral philosophies to judge humans and machines can be explained by a progression of mind perception models in which the perception of agency plays a prominent role. These findings add to the body of evidence suggesting that people's judgments of machines become more similar to their judgments of humans, motivating further work on the differences in how human and machine actions are judged.