People are not very good at detecting lies, which may explain why they refrain from accusing others of lying, given the social costs that false accusations carry for both the accuser and the accused. Here we consider how this social balance might be disrupted by the availability of lie-detection algorithms powered by Artificial Intelligence. Will people elect to use lie-detection algorithms that perform better than humans, and if so, will they show less restraint in their accusations? We built a machine learning classifier whose accuracy (67\%) was significantly better than human accuracy (50\%) in a lie-detection task, and we conducted an incentivized lie-detection experiment in which we measured participants' propensity to use the algorithm, as well as the impact of that use on accusation rates. We find that the minority of participants (33\%) who elect to use the algorithm drastically increase their accusation rates (from 25\% in the baseline condition up to 86\% when the algorithm flags a statement as a lie). They make more false accusations (an increase of 18 percentage points), but at the same time, the probability of a lie remaining undetected is much lower in this group (a decrease of 36 percentage points). We consider the individual motivations for using lie-detection algorithms and their social implications.
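To make the classifier described above concrete, the following is a minimal sketch of a supervised lie-detection text classifier, assuming TF-IDF features and logistic regression in scikit-learn; the statements, labels, and feature choices are hypothetical illustrations, not the pipeline reported in the paper.

\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Hypothetical training data: statements labelled 1 = lie, 0 = truth.
statements = [
    "I spent last weekend hiking in the mountains.",
    "I have never missed a deadline in my life.",
    "I volunteered at the local shelter on Saturday.",
    "I read three novels every single week.",
]
labels = [0, 1, 0, 1]

# Hold out half of the statements to estimate accuracy, keeping both
# classes in each split.
X_train, X_test, y_train, y_test = train_test_split(
    statements, labels, test_size=0.5, stratify=labels, random_state=0)

# TF-IDF features + logistic regression: an illustrative choice, not the
# classifier used in the study.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X_train, y_train)

# In the study, the algorithm reached 67% accuracy against a 50% human
# baseline (chance level for a binary truth/lie judgement).
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
\end{verbatim}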