A fundamental characteristic common to both human vision and natural language is their compositional nature. Yet, despite the performance gains contributed by large-scale vision and language pretraining, we find that, across 6 architectures trained with 4 algorithms on massive datasets, these models exhibit little compositionality. To arrive at this conclusion, we introduce a new compositionality evaluation benchmark, CREPE, which measures two important aspects of compositionality identified by the cognitive science literature: systematicity and productivity. To measure systematicity, CREPE contains three test datasets, each designed to evaluate models trained on one of three popular training datasets: CC-12M, YFCC-15M, and LAION-400M. They contain 385K, 385K, and 373K image-text pairs and 237K, 210K, and 178K hard negative captions, respectively. To measure productivity, CREPE contains 17K image-text pairs spanning nine levels of complexity, plus 246K hard negative captions with atomic, swapping, and negation foils. The datasets are generated by repurposing the Visual Genome scene graphs and region descriptions and applying handcrafted templates and GPT-3. For systematicity, we find that model performance decreases consistently when novel compositions dominate the retrieval set, with Recall@1 dropping by up to 8%. For productivity, models' retrieval success decays as caption complexity increases, frequently nearing random chance at high complexity. These results hold regardless of model and training dataset size.
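To make the retrieval metric concrete, below is a minimal sketch of image-to-text Recall@1 over a candidate set containing a ground-truth caption and its hard negatives, as in the evaluation described above. It assumes a CLIP-style encoder producing L2-normalized embeddings; the function and variable names here are illustrative, not part of CREPE's released code.

```python
# Hedged sketch: Recall@1 for image-to-text retrieval with hard negatives.
# Embeddings are assumed L2-normalized, so dot product = cosine similarity.
import numpy as np

def recall_at_1(image_emb: np.ndarray, text_embs: np.ndarray, true_idx: int) -> bool:
    """Return True iff the ground-truth caption outranks all hard negatives.

    image_emb: (d,) normalized image embedding.
    text_embs: (n, d) normalized embeddings of the ground-truth caption plus
               its hard negatives (e.g., atomic / swapping / negation foils).
    true_idx:  row index of the ground-truth caption in text_embs.
    """
    sims = text_embs @ image_emb  # cosine similarity to each candidate caption
    return int(np.argmax(sims)) == true_idx

# Toy usage: one image, its true caption, and two hard negatives.
rng = np.random.default_rng(0)
img = rng.normal(size=8)
img /= np.linalg.norm(img)
caps = rng.normal(size=(3, 8))
caps /= np.linalg.norm(caps, axis=1, keepdims=True)
caps[0] = img  # make caption 0 the obvious match for this toy example
print(recall_at_1(img, caps, true_idx=0))  # True
```

Averaging this indicator over all image-text pairs yields the Recall@1 numbers reported above; the "up to 8%" systematicity drop and the productivity decay toward chance both refer to this quantity as the candidate sets are varied.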