In many social-choice mechanisms the resulting choice is not the most preferred one for some of the participants, creating a need for methods that justify the choice made in a way that improves those participants' acceptance of and satisfaction with it. One natural method for providing such explanations is to ask people to supply them, e.g., through crowdsourcing, and to choose the most convincing arguments among those received. In this paper we propose an alternative approach that automatically generates explanations based on desirable mechanism features found in the theoretical mechanism design literature. We test the effectiveness of both methods through a series of extensive experiments conducted with over 600 participants in ranked voting, a classic social choice mechanism. The analysis of the results reveals that explanations indeed affect both average satisfaction with, and acceptance of, the outcome in such settings. In particular, explanations have a positive effect on satisfaction and acceptance when the outcome (the winning candidate in our case) is the least desirable choice for the participant. A comparative analysis reveals that the automatically generated explanations yield levels of satisfaction and acceptance similar to those of the more costly crowdsourced explanations, hence eliminating the need to keep humans in the loop. Furthermore, compared to crowdsourced explanations, the automatically generated explanations significantly reduce participants' belief that a different winner should have been elected.