In recent years, vision-language research has shifted toward tasks that require more complex reasoning, such as interactive question answering, visual commonsense reasoning, and question-answer plausibility prediction. However, the datasets used for these problems fail to capture the complexity of real inputs and multimodal environments, such as ambiguous natural language requests and diverse digital domains. We introduce Mobile app Tasks with Iterative Feedback (MoTIF), a dataset with natural language commands for the largest number of interactive environments to date. MoTIF is the first to contain natural language requests for interactive environments that are not satisfiable, and we obtain follow-up questions on this subset to enable research on task uncertainty resolution. We perform initial feasibility classification experiments and reach an F1 score of only 37.3, verifying the need for richer vision-language representations and improved architectures to reason about task feasibility.
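For reference, feasibility classification here is a binary prediction (is the request satisfiable in the given environment?) scored with F1. A minimal sketch of that metric on illustrative toy labels (not MoTIF data; the helper name `f1` is our own):

```python
def f1(y_true, y_pred):
    """F1 score for binary labels, where 1 = task is feasible."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative toy predictions from a hypothetical feasibility classifier.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 1, 0, 0]
print(round(f1(y_true, y_pred), 3))  # → 0.571
```

Since infeasible requests are a minority class in practice, F1 is a more informative score than raw accuracy for this task.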