Semantic embeddings have advanced the state of the art for countless natural language processing tasks, and various extensions to multimodal domains, such as visual-semantic embeddings, have been proposed. While the power of visual-semantic embeddings comes from the distillation and enrichment of information through machine learning, their inner workings are poorly understood and there is a shortage of analysis tools. To address this problem, we generalize the notion of probing tasks to the visual-semantic case. To this end, we (i) discuss the formalization of probing tasks for embeddings of image-caption pairs, (ii) define three concrete probing tasks within our general framework, (iii) train classifiers to probe for those properties, and (iv) compare various state-of-the-art embeddings under the lens of the proposed probing tasks. Our experiments reveal an increase of up to 12% in accuracy for visual-semantic embeddings compared to the corresponding unimodal embeddings, which suggests that the text and image dimensions represented in the former do complement each other.