Vision-Language Pretraining (VLP) and foundation models have become the go-to recipe for achieving state-of-the-art (SoTA) performance on general benchmarks. However, leveraging these powerful techniques for more complex vision-language tasks with more structured input data, such as cooking applications, remains little investigated. In this work, we propose to leverage these techniques for structured-text-based computational cooking tasks. Our strategy, dubbed VLPCook (Structured Vision-Language Pretraining for Computational Cooking), first transforms existing image-text pairs into image and structured-text pairs. This allows us to pretrain our VLPCook model using VLP objectives adapted to the structured data of the resulting datasets, and then to finetune it on downstream computational cooking tasks. During finetuning, we also enrich the visual encoder by leveraging pretrained foundation models (e.g., CLIP) to provide local and global textual context. VLPCook outperforms the current SoTA by a significant margin (+3.3 absolute Recall@1 improvement) on the task of Cross-Modal Food Retrieval on the large Recipe1M dataset. Finally, we conduct further experiments on VLP to validate its importance, especially on the Recipe1M+ dataset. The code will be made publicly available.
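To make the context-enrichment idea concrete, the following is a minimal sketch, not the authors' implementation, of how a pretrained CLIP model could supply global (recipe title) and local (ingredient-level) textual context alongside an image embedding; the pooling and fusion choices and the helper name are hypothetical.

```python
# Minimal sketch (assumptions labeled): derive global/local textual context from CLIP
# and attach it to an image embedding. Fusion by concatenation is a placeholder choice.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def contextualized_image_embedding(image, title, ingredients):
    """Return an image embedding enriched with global (title) and local (ingredient) context."""
    with torch.no_grad():
        # Encode the title and each ingredient with CLIP's text encoder.
        text_inputs = processor(text=[title] + ingredients,
                                return_tensors="pt", padding=True, truncation=True)
        text_feats = model.get_text_features(**text_inputs)    # (1 + n_ingredients, d)
        # Encode the food image with CLIP's image encoder.
        image_inputs = processor(images=image, return_tensors="pt")
        image_feat = model.get_image_features(**image_inputs)  # (1, d)
    global_ctx = text_feats[:1]                                # title embedding
    local_ctx = text_feats[1:].mean(dim=0, keepdim=True)       # pooled ingredient embeddings
    # Hypothetical fusion: concatenate the visual feature with its textual context.
    return torch.cat([image_feat, global_ctx, local_ctx], dim=-1)
```

In the paper's actual model the context is injected into the visual encoder during finetuning; this snippet only illustrates how such context vectors could be obtained from a frozen CLIP.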