Human-AI collaboration (HAIC) in decision-making aims to create synergistic teaming between human decision-makers and AI systems. Learning to defer (L2D) has been presented as a promising framework for determining who, between humans and AI, should make which decisions in order to optimize the performance and fairness of the combined system. Nevertheless, L2D entails several often infeasible requirements, such as the availability of human predictions for every instance, or ground-truth labels that are independent of said humans. Furthermore, neither L2D nor alternative approaches tackle fundamental issues of deploying HAIC systems in real-world settings, such as capacity management or dealing with dynamic environments. In this paper, we aim to identify and review these and other limitations, pointing to where opportunities for future research in HAIC may lie.