Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using the training data from downstream tasks. While effective, training on domain-specific data reduces a model's generalization capability to unseen new domains. In this work, we propose test-time prompt tuning (TPT), a method that can learn adaptive prompts on the fly with a single test sample. For image classification, TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data. Project page: https://azshue.github.io/TPT.
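Below is a minimal sketch of the test-time adaptation loop described in the abstract: the prompt is tuned on a single test sample by minimizing the entropy of the averaged prediction over the most confident augmented views. The callables `clip_logits` (returning class logits from learnable prompt embeddings and a batch of views) and `augment`, along with all hyperparameter values, are illustrative assumptions, not the paper's released implementation.

```python
# Hedged sketch of test-time prompt tuning (TPT) with confidence selection.
# `clip_logits(prompt_embeds, views)` and `augment(image)` are hypothetical
# callables assumed to be provided by the surrounding code.
import torch


def tpt_step(clip_logits, prompt_embeds, test_image, augment,
             n_views=64, keep_ratio=0.1, lr=5e-3, steps=1):
    """Adapt the prompt on one test sample so that predictions over its
    confident augmented views agree (low marginal entropy)."""
    prompt_embeds = prompt_embeds.detach().clone().requires_grad_(True)
    optimizer = torch.optim.AdamW([prompt_embeds], lr=lr)

    # Generate N random augmented views of the single test image.
    views = torch.stack([augment(test_image) for _ in range(n_views)])

    for _ in range(steps):
        logits = clip_logits(prompt_embeds, views)   # (n_views, n_classes)
        probs = logits.softmax(dim=-1)

        # Confidence selection: keep the views with the lowest per-view entropy.
        view_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        k = max(1, int(keep_ratio * n_views))
        keep = view_entropy.topk(k, largest=False).indices

        # Minimize the entropy of the prediction averaged over confident views.
        avg_probs = probs[keep].mean(dim=0)
        loss = -(avg_probs * avg_probs.clamp_min(1e-12).log()).sum()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return prompt_embeds.detach()
```

After this single-sample adaptation, the tuned prompt embeddings are used to classify the original (unaugmented) test image; the prompt is then reset before the next test sample, so no task-specific training data is ever required.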