Generating an image from its textual description requires both a certain level of language understanding and common-sense knowledge about the spatial relations of the physical entities being described. In this work, we focus on inferring the spatial relation between entities, a key step in the process of composing scenes based on text. More specifically, given a caption containing a mention of a subject, and the location and size of that subject's bounding box, our goal is to predict the location and size of an object mentioned in the caption. Previous work did not use the caption text, but instead a manually provided relation holding between the subject and the object. In fact, the evaluation datasets used contain manually annotated ontological triplets but no captions, making the exercise unrealistic: a manual step was required, and systems could not leverage the richer information in captions. Here we present a system that uses the full caption, together with Relations in Captions (REC-COCO), a dataset derived from MS-COCO which makes it possible to evaluate spatial relation inference from captions directly. Our experiments show that: (1) it is possible to infer the size and location of an object with respect to a given subject directly from the caption; (2) using the full text allows the object to be placed better than using a manually annotated relation. Our work paves the way for systems that, given a caption, decide which entities need to be depicted and their respective locations and sizes, in order to then generate the final image.