Human-AI collaboration (HAIC) in decision-making aims to create synergistic teaming between human decision-makers and AI systems. Learning to Defer (L2D) has been presented as a promising framework for determining who among humans and AI should make which decisions in order to optimize the performance and fairness of the combined system. Nevertheless, L2D entails several often infeasible requirements, such as the availability of predictions from humans for every instance, or ground-truth labels that are independent of those decision-makers. Furthermore, neither L2D nor alternative approaches tackle fundamental issues of deploying HAIC in real-world settings, such as capacity management or dealing with dynamic environments. In this paper, we aim to identify and review these and other limitations, pointing to where opportunities for future research in HAIC may lie.