In this paper, we aim to establish a unified, end-to-end multi-modal network by exploring language-guided visual recognition. To approach this target, we first propose a novel multi-modal convolution module called Language-dependent Convolution (LaConv). Its convolution kernels are dynamically generated from natural language information, which helps extract differentiated visual features for different multi-modal examples. Based on the LaConv module, we further build the first fully language-driven convolution network, termed LaConvNet, which unifies visual recognition and multi-modal reasoning in a single forward structure. To validate LaConv and LaConvNet, we conduct extensive experiments on four benchmark datasets of two vision-and-language tasks, i.e., visual question answering (VQA) and referring expression comprehension (REC). The experimental results not only show the performance gains of LaConv over existing multi-modal modules, but also demonstrate the merits of LaConvNet as a unified network, including its compact size, high generalization ability, and excellent performance, e.g., +4.7% on RefCOCO+.
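To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of a language-dependent convolution: per-example depth-wise kernels are generated from a sentence embedding, so different expressions yield different visual filters. The linear kernel generator and all module names here are illustrative assumptions.

```python
# Minimal sketch of language-dependent dynamic convolution.
# Assumption: a simple linear layer generates one depth-wise kernel per
# visual channel from the language embedding; the actual LaConv module
# may generate kernels differently.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LaConvSketch(nn.Module):
    def __init__(self, channels: int, kernel_size: int, lang_dim: int):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Hypothetical kernel generator: maps the sentence embedding to
        # channels * k * k depth-wise kernel weights.
        self.kernel_gen = nn.Linear(lang_dim, channels * kernel_size ** 2)

    def forward(self, visual: torch.Tensor, lang: torch.Tensor) -> torch.Tensor:
        # visual: (B, C, H, W); lang: (B, lang_dim)
        b, c, h, w = visual.shape
        k = self.kernel_size
        # One (1, k, k) depth-wise kernel per channel, per example.
        kernels = self.kernel_gen(lang).view(b * c, 1, k, k)
        # Fold the batch into the channel axis so each example is convolved
        # with its own language-generated kernels via grouped convolution.
        out = F.conv2d(visual.reshape(1, b * c, h, w), kernels,
                       padding=k // 2, groups=b * c)
        return out.view(b, c, h, w)

# Usage: two image-text pairs, 64-channel features, 3x3 dynamic kernels.
layer = LaConvSketch(channels=64, kernel_size=3, lang_dim=256)
feats = layer(torch.randn(2, 64, 32, 32), torch.randn(2, 256))
print(feats.shape)  # torch.Size([2, 64, 32, 32])
```

Because the kernels depend on the query sentence, the same image produces different feature maps for different questions or referring expressions, which is what allows a single convolutional backbone to serve both recognition and multi-modal reasoning.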