Large-scale neural network models combining text and images have made incredible progress in recent years. However, it remains an open question to what extent such models encode compositional representations of the concepts over which they operate, such as correctly identifying "red cube" by reasoning over the constituents "red" and "cube". In this work, we focus on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way (e.g., differentiating "cube behind sphere" from "sphere behind cube"). To inspect CLIP's performance, we compare several architectures from research on compositional distributional semantics models (CDSMs), a line of work that attempts to implement traditional compositional linguistic structures within embedding spaces. We find that CLIP can compose concepts in a single-object setting, but that performance drops dramatically in situations where concept binding is required. At the same time, the CDSMs also perform poorly, with the best of them reaching only chance-level performance.
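As a rough illustration of the kind of binding probe described above (a minimal sketch, not the exact experimental protocol of this work), one can score a rendered two-object scene against two captions that differ only in argument structure using a public CLIP checkpoint. The checkpoint name and the local image file "scene.png" are illustrative assumptions.

```python
# Minimal sketch of a word-order / binding probe for CLIP, assuming the
# Hugging Face "openai/clip-vit-base-patch32" checkpoint and a hypothetical
# rendered scene image "scene.png" containing a cube and a sphere.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Two captions that share all content words but differ in how the
# spatial relation binds its arguments.
captions = ["a cube behind a sphere", "a sphere behind a cube"]
image = Image.open("scene.png")  # hypothetical two-object scene

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; the softmax over the
# two captions shows which reading the model prefers for this scene.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs.squeeze().tolist())))
```

If the model were insensitive to binding, the two captions would receive near-identical scores regardless of which object is actually behind the other.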