Learning visual representations from natural language supervision has recently shown great promise in a number of pioneering works. In general, these language-augmented visual models demonstrate strong transferability to a variety of datasets and tasks. However, it remains a challenge to evaluate the transferability of these foundation models due to the lack of easy-to-use toolkits for fair benchmarking. To tackle this, we build ELEVATER (Evaluation of Language-augmented Visual Task-level Transfer), the first benchmark to compare and evaluate pre-trained language-augmented visual models. Several highlights include: (i) Datasets. As downstream evaluation suites, ELEVATER consists of 20 image classification datasets and 35 object detection datasets, each of which is augmented with external knowledge. (ii) Toolkit. An automatic hyper-parameter tuning toolkit is developed to ensure fairness in model adaptation. To leverage the full power of language-augmented visual models, novel language-aware initialization methods are proposed to significantly improve adaptation performance. (iii) Metrics. A variety of evaluation metrics are used, including sample-efficiency (zero-shot and few-shot) and parameter-efficiency (linear probing and full model fine-tuning). We will release our toolkit and evaluation platforms for the research community.
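To illustrate the idea behind language-aware initialization described above, the following is a minimal sketch (not the paper's actual implementation): a linear probe whose weights are initialized from text embeddings of the class names, so that adaptation starts from the zero-shot classifier rather than from random weights. Here `encode_text` is a hypothetical callable standing in for the text encoder of a language-augmented visual model; the prompt template and zero bias are illustrative assumptions.

```python
import torch
import torch.nn as nn

def language_aware_linear_probe(encode_text, class_names, feat_dim):
    """Build a linear classifier initialized from class-name text embeddings.

    encode_text: hypothetical function mapping a list of prompts to
                 (num_classes, feat_dim) text embeddings from a pre-trained
                 language-augmented model (assumption, not a real API).
    """
    prompts = [f"a photo of a {name}." for name in class_names]
    with torch.no_grad():
        text_embeddings = encode_text(prompts)  # (num_classes, feat_dim)

    probe = nn.Linear(feat_dim, len(class_names), bias=True)
    probe.weight.data.copy_(text_embeddings)    # text embeddings as initial weights
    probe.bias.data.zero_()                     # zero bias; refined during adaptation
    return probe
```

With this initialization, the probe reproduces zero-shot predictions before any gradient step, and few-shot or full fine-tuning then adjusts it toward the target dataset.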