Current text-image retrieval approaches (e.g., CLIP) typically adopt a dual-encoder architecture using pre-trained vision-language representations. However, these models still pose non-trivial memory requirements and substantial incremental indexing time, which makes them less practical on mobile devices. In this paper, we present an effective two-stage framework to compress a large pre-trained dual-encoder for lightweight text-image retrieval. The resulting model is smaller (39% of the original) and faster (1.6x/2.9x for processing images/text, respectively), yet performs on par with or better than the original full model on the Flickr30K and MSCOCO benchmarks. We also open-source an accompanying realistic mobile image search application.
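To make the dual-encoder setup concrete, the sketch below shows CLIP-style retrieval scoring: each modality is mapped to a shared embedding space by its own encoder, embeddings are L2-normalized, and retrieval reduces to a dot product. The random projections here are hypothetical stand-ins for the real image/text encoders, which the paper compresses; only the structure of the scoring is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, projection):
    """Project raw features into the joint space and L2-normalize,
    as each branch of a dual encoder does."""
    emb = features @ projection
    return emb / np.linalg.norm(emb, axis=-1, keepdims=True)

# Hypothetical dimensions; real encoders are deep networks, not projections.
dim_img, dim_txt, dim_joint = 32, 16, 8
img_proj = rng.normal(size=(dim_img, dim_joint))
txt_proj = rng.normal(size=(dim_txt, dim_joint))

images = rng.normal(size=(5, dim_img))  # 5 candidate images
query = rng.normal(size=(1, dim_txt))   # 1 text query

# Image embeddings can be precomputed and indexed offline;
# only the text side must run at query time.
img_emb = encode(images, img_proj)
txt_emb = encode(query, txt_proj)

# Cosine similarity via dot product of unit vectors, shape (1, 5).
scores = txt_emb @ img_emb.T
best = int(np.argmax(scores))
```

Because the two encoders are decoupled, shrinking either one (as the two-stage compression framework does) leaves this retrieval interface unchanged.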