Visual entailment (VE) is the task of recognizing whether the semantics of a hypothesis text can be inferred from a given premise image; it is a distinctive task among recently emerged vision-and-language understanding tasks. Most existing VE approaches are derived from visual question answering methods: they recognize entailment by quantifying the similarity between the hypothesis and the premise in terms of content semantic features from multiple modalities. Such approaches, however, ignore VE's unique nature as relation inference between the premise and the hypothesis. Therefore, in this paper, a new architecture called AlignVE is proposed to solve the visual entailment problem with a relation-interaction method. It models the relation between the premise and the hypothesis as an alignment matrix, then applies a pooling operation to obtain feature vectors of a fixed size, and finally passes these vectors through fully-connected and normalization layers to complete the classification. Experiments show that our alignment-based architecture reaches 72.45\% accuracy on the SNLI-VE dataset, outperforming previous content-based models under the same settings.
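To make the pipeline concrete, the following is a minimal PyTorch sketch of the three steps the abstract describes: an alignment matrix between premise and hypothesis features, pooling to a fixed size, and a fully-connected plus normalization classifier. The dot-product alignment, adaptive max pooling, layer normalization, feature dimensions, and all module and parameter names here are illustrative assumptions, not the paper's exact AlignVE design.

```python
# Hedged sketch of an alignment-based VE classifier; details are assumptions.
import torch
import torch.nn as nn


class AlignVESketch(nn.Module):
    """Classify a premise/hypothesis pair from their alignment matrix alone."""

    def __init__(self, pooled_size: int = 8, num_classes: int = 3):
        super().__init__()
        # Pool the variable-size alignment matrix to a fixed pooled_size x pooled_size grid.
        self.pool = nn.AdaptiveMaxPool2d(pooled_size)
        flat = pooled_size * pooled_size
        # Fully-connected layer, normalization layer, then class scores.
        self.fc = nn.Linear(flat, flat)
        self.norm = nn.LayerNorm(flat)
        self.out = nn.Linear(flat, num_classes)

    def forward(self, premise: torch.Tensor, hypothesis: torch.Tensor) -> torch.Tensor:
        # premise:    (batch, n_regions, dim) image-region features
        # hypothesis: (batch, n_tokens,  dim) text-token features
        # Relation as an alignment matrix: pairwise dot products -> (batch, n_regions, n_tokens).
        align = torch.bmm(premise, hypothesis.transpose(1, 2))
        # Pooling yields a fixed-size feature vector regardless of n_regions / n_tokens.
        pooled = self.pool(align.unsqueeze(1)).flatten(1)
        # Fully-connected + normalization layers complete the classification.
        return self.out(self.norm(torch.relu(self.fc(pooled))))


# Usage with dummy features: 36 image regions, 12 text tokens, 512-d features.
model = AlignVESketch()
logits = model(torch.randn(2, 36, 512), torch.randn(2, 12, 512))
print(logits.shape)  # torch.Size([2, 3]) -> entailment / neutral / contradiction
```

Adaptive pooling is used here simply because it produces a fixed-size vector from an alignment matrix whose shape varies with the number of image regions and text tokens; other pooling schemes would serve the same purpose.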