We present the novel problem of text-based visual question generation, or TextVQG in short. Given the document image analysis community's growing interest in combining text understanding with conversational artificial intelligence, e.g., text-based visual question answering, TextVQG has become an important task. Given an input image and a piece of text automatically extracted from it (also known as an OCR token), TextVQG aims to generate a natural language question such that the OCR token is the answer to the generated question. TextVQG is an essential ability for a conversational agent. However, it is challenging, as it requires an in-depth understanding of the scene and the ability to semantically bridge the visual content with the text present in the image. To address TextVQG, we present an OCR-consistent visual question generation model that Looks into the visual content, Reads the scene text, and Asks a relevant and meaningful natural language question. We refer to our proposed model as OLRA. We perform an extensive evaluation of OLRA on two public benchmarks and compare it against baselines. OLRA automatically generates questions similar to those in public text-based visual question answering datasets, which were curated manually. Moreover, it significantly outperforms baseline approaches on performance measures popularly used in the text-generation literature.
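To make the task concrete, below is a minimal, hypothetical sketch of an OLRA-style pipeline in PyTorch: the model "looks" at visual features, "reads" an embedding of the OCR token, and "asks" by decoding a question word by word, trained so that the OCR token is the answer. The abstract does not specify OLRA's architecture, so every choice here (feature dimensions, concatenation-based fusion, a GRU decoder, the name TextVQGSketch) is an assumption for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a look-read-ask question generator.
# All module choices below are assumptions; the paper's OLRA model
# may differ in encoders, fusion, and decoding strategy.
import torch
import torch.nn as nn

class TextVQGSketch(nn.Module):
    def __init__(self, vocab_size, visual_dim=2048, token_dim=300, hidden_dim=512):
        super().__init__()
        # "Look": project precomputed image features (e.g., from a CNN backbone).
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        # "Read": project an embedding of the OCR token (e.g., a word vector).
        self.token_proj = nn.Linear(token_dim, hidden_dim)
        # Fuse the two modalities into a single context vector.
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)
        # "Ask": decode the question one word at a time.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, visual_feats, ocr_token_emb, question_in):
        # visual_feats: (B, visual_dim); ocr_token_emb: (B, token_dim)
        # question_in: (B, T) word ids of the target question (teacher forcing).
        ctx = torch.cat([self.visual_proj(visual_feats),
                         self.token_proj(ocr_token_emb)], dim=-1)
        h0 = torch.tanh(self.fuse(ctx)).unsqueeze(0)             # (1, B, H)
        dec_out, _ = self.decoder(self.embed(question_in), h0)   # (B, T, H)
        return self.out(dec_out)                                  # word logits

# Toy usage: batch of 2, vocabulary of 1000, question length 7.
model = TextVQGSketch(vocab_size=1000)
logits = model(torch.randn(2, 2048), torch.randn(2, 300),
               torch.randint(0, 1000, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 1000])
```

Such a model would typically be trained with cross-entropy against manually written questions and scored with the text-generation measures the abstract alludes to (e.g., BLEU-style n-gram overlap against reference questions).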