AI systems are increasingly used to make or contribute to important decisions in a growing range of applications, including criminal justice, hiring, and medicine. Since these decisions impact human lives, it is important that AI systems act in ways that align with human values. Techniques from preference modeling and social choice help researchers learn and aggregate people's preferences, which are then used to guide AI behavior; it is therefore imperative that these learned preferences are accurate. These techniques often assume that people are willing to express strict preferences over alternatives, which is not true in practice. People are often indecisive, especially when their decisions have moral implications. The philosophy and psychology literature shows that indecision is a measurable and nuanced behavior, and that people are indecisive for several distinct reasons. This complicates the tasks of both learning and aggregating preferences, since most of the relevant literature makes restrictive assumptions about the meaning of indecision. We begin to close this gap by formalizing several mathematical \emph{indecision} models based on theories from philosophy, psychology, and economics; these models can be used to describe (indecisive) agents' decisions, both when the agents are allowed to express indecision and when they are not. We test these models using data collected from an online survey in which participants chose how to (hypothetically) allocate organs to patients awaiting a transplant.