Multimodal machine learning algorithms aim to learn visual-textual correspondences. Previous work suggests that concepts with concrete visual manifestations may be easier to learn than concepts with abstract ones. We give an algorithm for automatically computing the visual concreteness of words and topics within multimodal datasets. We apply the approach in four settings, ranging from image captions to images/text scraped from historical books. In addition to enabling explorations of concepts in multimodal datasets, our concreteness scores predict the capacity of machine learning algorithms to learn textual/visual relationships. We find that 1) concrete concepts are indeed easier to learn; 2) the large number of algorithms we consider have similar failure cases; 3) the strength of the positive relationship between concreteness and performance varies across datasets. We conclude with recommendations for using concreteness scores to facilitate future multimodal research.
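To make the underlying idea tangible, here is a minimal, hypothetical sketch of one way a visual concreteness score could be computed: a word is scored as concrete if the images whose captions contain it cluster tightly in a visual feature space. This is an illustration under stated assumptions, not the paper's exact algorithm; the feature vectors, the neighborhood size k, and the base-rate normalization are all assumptions, and every name below is illustrative.

```python
# Hypothetical sketch: score a word's visual concreteness as the ratio of
# (a) how often that word's images neighbor each other in visual feature
# space to (b) the word's base rate across the dataset. Not the paper's
# exact method; parameters and data here are illustrative.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def concreteness(word, captions, image_features, k=10):
    """Neighbor hit rate for `word`, normalized by its chance rate."""
    has_word = np.array([word in c.lower().split() for c in captions])
    if has_word.sum() < 2:
        return float("nan")  # too few images to estimate clustering

    nn = NearestNeighbors(n_neighbors=k + 1).fit(image_features)
    _, idx = nn.kneighbors(image_features[has_word])
    neighbors = idx[:, 1:]  # drop each query image itself

    observed = has_word[neighbors].mean()  # neighbor hit rate for this word
    expected = has_word.mean()             # chance rate under no clustering
    return observed / expected             # > 1 suggests visual concreteness

# Toy usage with synthetic "image features" and captions (hypothetical data):
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
caps = ["a photo of a dog" if i < 30 else "an abstract idea" for i in range(100)]
feats[:30] += 5.0  # make the "dog" images cluster in feature space

print(concreteness("dog", caps, feats, k=5))   # markedly above 1
print(concreteness("idea", caps, feats, k=5))  # noticeably lower
```

On this synthetic data, "dog" scores higher than "idea" because its images were deliberately clustered, mirroring the abstract's claim that concretely depicted concepts produce tighter visual-textual correspondences.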