Algorithmic case-based decision support provides examples to help humans make sense of predicted labels and aid them in decision-making tasks. Despite the promising performance of supervised learning, representations learned by supervised models may not align well with human intuitions: what models consider similar examples can be perceived as distinct by humans. As a result, they have limited effectiveness in case-based decision support. In this work, we incorporate ideas from metric learning into supervised learning to examine the importance of alignment for effective decision support. In addition to instance-level labels, we use human-provided triplet judgments to learn human-compatible decision-focused representations. Using both synthetic data and human-subject experiments in multiple classification tasks, we demonstrate that such representations are better aligned with human perception than representations optimized solely for classification. Human-compatible representations identify nearest neighbors that are perceived as more similar by humans and allow humans to make more accurate predictions, leading to substantial improvements in human decision accuracy (17.8% in butterfly vs. moth classification and 13.2% in pneumonia classification).
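The abstract does not specify how instance-level labels and triplet judgments are combined during training. One plausible formulation, sketched below under assumptions not stated in the source, is to train a shared encoder with a classification loss on labeled instances plus a triplet margin loss on human judgments (each judgment saying the anchor is more similar to the positive than to the negative). All class and function names here are hypothetical, using PyTorch's standard `triplet_margin_loss`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HumanCompatibleModel(nn.Module):
    """Hypothetical model: a shared encoder feeding a classifier head."""
    def __init__(self, dim_in, dim_emb, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_emb))
        self.head = nn.Linear(dim_emb, n_classes)

    def forward(self, x):
        z = self.encoder(x)          # decision-focused representation
        return z, self.head(z)       # embedding + class logits

def combined_loss(model, x, y, anchor, positive, negative,
                  alpha=1.0, margin=1.0):
    """Classification loss on labeled instances plus a triplet margin
    loss on human similarity judgments; alpha trades them off."""
    _, logits = model(x)
    ce = F.cross_entropy(logits, y)
    za, _ = model(anchor)
    zp, _ = model(positive)
    zn, _ = model(negative)
    trip = F.triplet_margin_loss(za, zp, zn, margin=margin)
    return ce + alpha * trip
```

Minimizing the triplet term pulls the anchor's embedding toward the human-judged similar example and pushes it away from the dissimilar one, so nearest neighbors in the learned space better match human perception while the cross-entropy term preserves classification accuracy.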