Labeling large image datasets with attributes such as facial age or object type is tedious and sometimes infeasible. Supervised machine learning methods provide a highly accurate solution, but require manual labels, which are often unavailable. Zero-shot models (e.g., CLIP) do not require manual labels but are not as accurate as supervised ones, particularly when the attribute is numeric. We propose a new approach, CLIPPR (CLIP with Priors), which adapts zero-shot models for regression and classification on unlabeled datasets. Our method does not use any annotated images. Instead, we assume a prior over the label distribution in the dataset. We then train an adapter network on top of CLIP under two competing objectives: (i) minimal change of predictions from the original CLIP model; and (ii) minimal distance between the predicted and prior distributions of labels. Additionally, we present a novel approach for selecting prompts for Vision & Language models using a distributional prior. Our method is effective and presents a significant improvement over the original model. We demonstrate an improvement of 28% in mean absolute error on the UTK age regression task. We also present promising results for classification benchmarks, improving the classification accuracy on the ImageNet dataset by 2.83%, without using any labels.
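The abstract does not specify the exact form of the two objectives; the following is a minimal sketch, assuming KL-divergence terms for both, with all function and variable names (e.g., `clippr_losses`, `adapter_logits`) hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def clippr_losses(adapter_logits, clip_logits, prior_probs):
    """Illustrative sketch of the two competing objectives (assumed formulation).

    adapter_logits: (batch, num_labels) predictions of the trainable adapter head.
    clip_logits:    (batch, num_labels) frozen zero-shot CLIP predictions.
    prior_probs:    (num_labels,) assumed prior over labels in the dataset.
    """
    # (i) keep adapter predictions close to the original CLIP predictions
    #     (per-image KL divergence between the two predictive distributions)
    consistency = F.kl_div(
        F.log_softmax(adapter_logits, dim=-1),
        F.softmax(clip_logits, dim=-1),
        reduction="batchmean",
    )
    # (ii) pull the aggregate predicted label distribution toward the prior
    #      (average the per-image predictions over the batch, compare to the prior)
    batch_pred = F.softmax(adapter_logits, dim=-1).mean(dim=0)
    prior_match = F.kl_div(batch_pred.log(), prior_probs, reduction="sum")
    return consistency + prior_match
```

In this sketch the prior term operates on the batch-level distribution of predictions rather than on individual images, which is one natural way to encode "minimal distance between the predicted and prior distributions of labels"; the paper's actual loss and matching procedure may differ.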