We present a case study aimed at helping professional assessors make decisions in human assessment, in which they conduct interviews with assessees and evaluate their suitability for certain job roles. A workshop with two industrial assessors revealed that a computational system that extracts nonverbal cues of assessees from interview videos would support assessors' decision making. In response, we developed such a system based on an unsupervised anomaly detection algorithm using multimodal behavioral features, namely facial keypoints, body pose, head pose, and gaze. Moreover, we enabled the system to output how much each feature contributed to the outlierness of the detected cues in order to enhance its interpretability. We then conducted a preliminary study with the two assessors to examine the validity of the system's output on 20 actual assessment interview videos. The results illustrated the informativeness of the outputs for assessors, suggesting the advantage of using unsupervised anomaly detection in an interpretable manner. Our approach, which builds on the idea of separating observation from interpretation in human-AI teaming, can facilitate human decision making in highly contextual domains, such as human assessment, while preserving assessors' trust in the system.
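To make the interpretability mechanism concrete, below is a minimal sketch of an unsupervised anomaly detector over multimodal behavioral features that also reports per-feature contributions. The abstract does not specify the exact algorithm, so the sketch assumes a simple Gaussian z-score decomposition, in which a frame's outlierness score is the sum of squared standardized deviations and each feature's share of that sum is its contribution; the function names and the Gaussian assumption are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_detector(X):
    """Fit a simple per-feature Gaussian model on behavioral features.

    X: (n_frames, n_features) array, e.g. one column each for
    facial-keypoint, body-pose, head-pose, and gaze descriptors.
    """
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + 1e-8  # guard against zero variance
    return mu, sigma

def score_with_contributions(x, mu, sigma):
    """Return an outlierness score for one frame and the normalized
    contribution of each feature to that score."""
    z = (x - mu) / sigma          # per-feature standardized deviation
    parts = z ** 2                # squared z-score per feature
    score = parts.sum()           # overall outlierness of the frame
    return score, parts / score   # contributions sum to 1

# Usage sketch: find the most anomalous frame and rank its features.
X = np.random.randn(1000, 4)      # stand-in for real (frames x features) data
mu, sigma = fit_detector(X)
scores = np.array([score_with_contributions(x, mu, sigma)[0] for x in X])
_, contrib = score_with_contributions(X[scores.argmax()], mu, sigma)
```

Because the score decomposes additively, the system can present detected cues together with statements like "gaze accounted for 60% of this frame's outlierness," which is the kind of interpretable output the abstract describes.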