Traditional computer vision models are trained to predict a fixed set of predefined categories. Recently, natural language has been shown to be a broader and richer source of supervision, providing finer descriptions of visual concepts than supervised "gold" labels. Previous works, such as CLIP, use the InfoNCE loss to train a model to predict the pairing between images and text captions. CLIP, however, is data-hungry and requires more than 400M image-text pairs for training. The inefficiency can be partially attributed to the fact that the image-text pairs are noisy. To address this, we propose OTTER (Optimal TransporT distillation for Efficient zero-shot Recognition), which uses online entropic optimal transport to find a soft image-text match as labels for contrastive learning. Based on pretrained image and text encoders, models trained with OTTER achieve strong performance with only 3M image-text pairs. Compared with the InfoNCE loss, label smoothing, and knowledge distillation, OTTER consistently outperforms these baselines in zero-shot evaluation on Google Open Images (19,958 classes) and multi-labeled ImageNet 10K (10,032 classes) from Tencent ML-Images. Over 42 evaluations on 7 different dataset/architecture settings × 6 metrics, OTTER outperforms (32) or ties (2) all baselines in 34 of them.
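The core idea — using entropic optimal transport to turn a batch similarity matrix into soft image-text matching targets — can be sketched with standard Sinkhorn iterations. This is a minimal illustration under uniform marginals, not the authors' implementation; the function name and hyperparameter values are hypothetical:

```python
import numpy as np

def sinkhorn_soft_labels(sim, eps=0.05, n_iters=50):
    """Compute a soft image-text matching plan via entropic OT.

    sim: (n_images x n_texts) batch similarity matrix.
    eps: entropic regularization; smaller values yield sharper plans.
    Returns row-normalized transport-plan rows, usable as soft
    targets for contrastive learning in place of one-hot labels.
    """
    n, m = sim.shape
    K = np.exp(sim / eps)          # Gibbs kernel of the cost (-sim)
    r = np.ones(n) / n             # uniform image marginal
    c = np.ones(m) / m             # uniform text marginal
    u = np.ones(n)                 # dual scaling vectors
    v = np.ones(m)
    for _ in range(n_iters):       # alternate marginal projections
        u = r / (K @ v)
        v = c / (K.T @ u)
    P = u[:, None] * K * v[None, :]  # transport plan diag(u) K diag(v)
    # Each row becomes a probability distribution over text candidates.
    return P / P.sum(axis=1, keepdims=True)
```

In a training loop, these row distributions would replace the identity (one-hot) targets of InfoNCE, so that near-duplicate captions in a noisy batch share probability mass instead of being treated as pure negatives.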