In this paper, we argue that the design and development of multimodal datasets for natural language processing (NLP) challenges should be enhanced in two significant respects: to more broadly represent commonsense semantic inferences; and to better reflect the dynamics of actions and events, through a substantive alignment of textual and visual information. We identify challenges and tasks that are reflective of linguistic and cognitive competencies that humans have when speaking and reasoning, rather than merely the performance of systems on isolated tasks. We introduce the distinction between challenge-based tasks and competence-based performance, and describe a diagnostic dataset, Recipe-to-Video Questions (R2VQ), designed for testing competence-based comprehension over a multimodal recipe collection (http://r2vq.org/). The corpus contains detailed annotation supporting such inferencing tasks and facilitating a rich set of question families that we use to evaluate NLP systems.