Food is significant to human daily life. In this paper, we are interested in learning structural representations for lengthy recipes that can benefit recipe generation and food cross-modal retrieval tasks. Unlike common vision-language data, food images contain mixed ingredients, and the target recipes are lengthy paragraphs for which no structure annotations are available. To address these challenges, we propose a novel method to learn sentence-level tree structures for cooking recipes in an unsupervised manner. Our approach brings together several novel ideas in a systematic framework: (1) exploiting an unsupervised learning approach to obtain sentence-level tree structure labels before training; (2) generating trees of target recipes from images under the supervision of the tree structure labels learned in (1); and (3) integrating the learned tree structures into the recipe generation and food cross-modal retrieval procedures. Our proposed model produces good-quality sentence-level tree structures and coherent recipes, and achieves state-of-the-art recipe generation and food cross-modal retrieval performance on the benchmark Recipe1M dataset.
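To make the three-stage framework concrete, the following is a minimal, hypothetical sketch of how a sentence-level recipe tree could be represented and how stage (1) pseudo labels might be produced before training. All names (`TreeNode`, `induce_sentence_tree`, the stage-(2)/(3) stubs) are illustrative assumptions for exposition, not the paper's actual implementation; in particular, the trivial right-branching tree below only stands in for the unsupervised structure learner.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TreeNode:
    """One recipe sentence and its child sentences in the induced tree."""
    sentence: str
    children: List["TreeNode"] = field(default_factory=list)


def induce_sentence_tree(sentences: List[str]) -> TreeNode:
    """Stage (1), assumed: unsupervised sentence-level tree induction.

    A trivial right-branching tree stands in for the learned parser; its
    output plays the role of the pseudo tree-structure label used as
    supervision in stage (2).
    """
    root = TreeNode(sentences[0])
    node = root
    for s in sentences[1:]:
        child = TreeNode(s)
        node.children.append(child)
        node = child
    return root


def train_image_to_tree(images, pseudo_trees):
    """Stage (2), sketch only: learn to predict a sentence-level tree from a
    food image, supervised by the pseudo labels from stage (1)."""
    ...


def generate_and_retrieve(image, tree_model, recipe_index):
    """Stage (3), sketch only: condition recipe generation on the predicted
    tree and use the tree-aware representation for cross-modal retrieval."""
    ...


if __name__ == "__main__":
    recipe = [
        "Preheat the oven to 180C.",
        "Mix the flour and sugar.",
        "Bake for 30 minutes.",
    ]
    pseudo_label = induce_sentence_tree(recipe)
    print(pseudo_label.sentence, "->", pseudo_label.children[0].sentence)
```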