Recent advances in natural language processing and computer vision have produced AI models that interpret simple scenes at human levels. Yet we lack a complete understanding of how humans and AI models differ in their interpretation of more complex scenes. We created a dataset of complex scenes depicting human behaviors and social interactions. Both AI models and human participants described each scene with a single sentence. We used a quantitative metric to measure the similarity between each AI or human description and a ground truth of five other human descriptions of the same scene. Results show that machine/human agreement on scene descriptions is much lower than human/human agreement for our complex scenes. Using an experimental manipulation that occludes different spatial regions of the scenes, we assessed how machines and humans differ in which image regions they use to understand the scenes. Together, our results are a first step toward understanding how machines fall short of human visual reasoning with complex scenes depicting human behaviors.
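The abstract does not specify which similarity metric was used. As a minimal illustrative sketch (not the paper's actual method), one simple possibility is averaged cosine similarity over bag-of-words vectors between a candidate description and the five human reference descriptions; the example sentences below are hypothetical.

```python
# Hypothetical sketch of a description-similarity metric: mean cosine
# similarity over bag-of-words vectors. This is an assumption for
# illustration only; the paper's actual metric is not specified here.
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two sentences."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def agreement(candidate: str, references: list[str]) -> float:
    """Mean similarity of one candidate description to all references."""
    return sum(bow_cosine(candidate, r) for r in references) / len(references)

# Hypothetical example: five human reference descriptions of one scene.
refs = [
    "a man hands a cup of coffee to a friend",
    "two people sharing coffee at a table",
    "a man gives his friend a drink",
    "friends exchange a coffee cup",
    "a person passes coffee to another person",
]
print(round(agreement("a man gives a coffee cup to a friend", refs), 3))
```

In practice, embedding-based sentence similarity (e.g. cosine similarity of sentence embeddings) would better capture paraphrases than word overlap, but the bag-of-words version above keeps the sketch self-contained.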