We present Answer-Me, a task-aware multi-task framework that unifies a variety of question answering tasks, such as visual question answering, visual entailment, and visual reasoning. In contrast to previous works that use contrastive or generative captioning training, we propose a novel and simple recipe to pre-train a vision-language joint model that is itself multi-task. The pre-training uses only noisy image captioning data and is formulated to train the entire architecture end-to-end, with both a strong language encoder and decoder. Our results show state-of-the-art performance, zero-shot generalization, robustness to forgetting, and competitive single-task results across a variety of question answering tasks. Our multi-task mixture training learns from tasks of various question intents and thus generalizes better, including on zero-shot vision-language tasks. We conduct experiments in the challenging multi-task and open-vocabulary settings across a variety of datasets and tasks, such as VQA2.0, SNLI-VE, NLVR2, GQA, and VizWiz. We observe that the proposed approach generalizes to unseen tasks and that more diverse mixtures lead to higher accuracy on both known and novel tasks.
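To make the multi-task mixture idea concrete, the following is a minimal sketch of how several question answering tasks can be cast into a shared image-plus-text-to-text format and sampled as a weighted mixture for a single encoder-decoder model. The function names, prompt strings, and mixture weights are our own illustrative assumptions, not the paper's actual recipe.

```python
# Hypothetical sketch: unify several QA-style tasks into one text-to-text mixture.
import random

def format_example(task, image, question, answer):
    # The task intent is expressed in the input text, so one encoder-decoder
    # model can handle all tasks; prompt wording here is purely illustrative.
    prompts = {
        "vqa":        f"answer the question: {question}",
        "entailment": f"does the image entail the statement? {question}",
        "reasoning":  f"is the statement true for the images? {question}",
    }
    return {"image": image, "input": prompts[task], "target": answer}

def mixture_sampler(datasets, weights):
    """Yield training examples drawn from the task mixture in proportion to weights."""
    tasks = list(datasets)
    while True:
        task = random.choices(tasks, weights=[weights[t] for t in tasks])[0]
        image, question, answer = random.choice(datasets[task])
        yield format_example(task, image, question, answer)

# Usage (assumed data layout): datasets maps task name -> list of (image, question, answer);
# training batches are then drawn from
#   mixture_sampler(datasets, {"vqa": 0.5, "entailment": 0.25, "reasoning": 0.25})
```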