Computational food analysis (CFA), a broad set of methods that attempt to automate food understanding, naturally requires analysis of multi-modal evidence about a particular food or dish, e.g., images, recipe text, preparation videos, and nutrition labels. A key enabler of CFA is multi-modal shared subspace learning, which in turn supports cross-modal retrieval and/or synthesis, particularly between food images and their corresponding textual recipes. In this work we propose a simple yet novel architecture for shared subspace learning, which we use to tackle the food image-to-recipe retrieval problem. Our method couples an effective transformer-based multilingual recipe encoder with a traditional image embedding architecture. Experimental analysis on the public Recipe1M dataset shows that the subspace learned via the proposed method outperforms the current state of the art (SoTA) in food retrieval by a large margin, obtaining a recall@1 (R@1) of 0.64. Furthermore, to demonstrate the representational power of the learned subspace, we propose a generative food image synthesis model conditioned on recipe embeddings. The synthesized images effectively reproduce the visual appearance of their paired samples, achieving an R@1 of 0.68 in the image-to-recipe retrieval experiment, thus effectively capturing the semantics of the textual recipe.
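To make the shared-subspace retrieval setup concrete, the following is a minimal sketch of image-to-recipe retrieval by cosine similarity in a learned shared space. The encoders here are placeholder random linear projections, not the paper's transformer-based recipe encoder or its image backbone; the projection matrices, dimensions, and data are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-ins for the two encoders. The paper uses a
# transformer-based multilingual recipe encoder and an image embedding
# network; here we use random linear projections into a shared
# d_shared-dimensional subspace purely to illustrate the retrieval step.
d_img, d_txt, d_shared = 512, 768, 128
W_img = rng.standard_normal((d_img, d_shared))
W_txt = rng.standard_normal((d_txt, d_shared))

def embed(features, W):
    """Project features into the shared subspace and L2-normalize,
    so that dot products between embeddings are cosine similarities."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Synthetic paired data: recipe i is the ground-truth match for image i.
n = 100
img_emb = embed(rng.standard_normal((n, d_img)), W_img)
txt_emb = embed(rng.standard_normal((n, d_txt)), W_txt)

def image_to_recipe_recall_at_1(img_emb, txt_emb):
    """For each image, retrieve the recipe with the highest cosine
    similarity; R@1 is the fraction where the top hit is the true pair."""
    sims = img_emb @ txt_emb.T        # (n_images, n_recipes) similarities
    top1 = sims.argmax(axis=1)        # index of best-matching recipe
    return float((top1 == np.arange(len(img_emb))).mean())

r1 = image_to_recipe_recall_at_1(img_emb, txt_emb)
```

With untrained random projections, `r1` sits near chance (about 1/n); shared subspace learning trains the two encoders so that true image-recipe pairs dominate this similarity ranking, which is what the reported R@1 of 0.64 measures.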