Generating natural language questions from visual scenes, known as Visual Question Generation (VQG), has been explored in recent years using large amounts of meticulously labeled training data. In practice, however, it is not uncommon to have only a few images with question annotations corresponding to a few types of answers. In this paper, we propose a new and challenging Few-Shot Visual Question Generation (FS-VQG) task and provide a comprehensive benchmark for it. Specifically, we evaluate various existing VQG approaches as well as popular few-shot solutions based on meta-learning and self-supervised strategies for the FS-VQG task. We conduct experiments on two popular existing datasets, VQG and Visual7w. In addition, we have cleaned and extended the VQG dataset for use in a few-shot scenario, with additional image-question pairs as well as additional answer categories; we call this new dataset VQG-23. Several important findings emerge from our experiments that shed light on the limits of current models in few-shot vision-and-language generation tasks. We find that trivially extending existing VQG approaches with transfer learning or meta-learning may not be enough to tackle the inherent challenges of few-shot VQG. We believe that this work will contribute to accelerating progress in few-shot learning research.