Large-scale pre-trained vision-language models (VLMs) have shown remarkable domain-transfer capability on natural images. However, it remains unknown whether this capability also extends to the medical image domain. This paper thoroughly studies the knowledge transferability of pre-trained VLMs to the medical domain, where we show that well-designed medical prompts are the key to eliciting knowledge from pre-trained VLMs. We demonstrate that by prompting with expressive attributes that are shared between domains, the VLM can carry knowledge across domains and improve its generalization. This mechanism empowers VLMs to recognize novel objects with few or no image samples. Furthermore, to avoid the laborious manual design process, we develop three approaches for the automatic generation of medical prompts, which can inject expert-level medical knowledge and image-specific information into the prompts for fine-grained grounding. We conduct extensive experiments on thirteen medical datasets across various modalities, showing that our well-designed prompts greatly improve zero-shot performance over the default prompts, and that our fine-tuned models surpass supervised models by a significant margin.
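To make the prompting mechanism concrete, the following is a minimal sketch (not the paper's implementation) of zero-shot classification with a pre-trained CLIP-style VLM, contrasting a default class-name prompt with an attribute-rich medical prompt. The class names, attribute descriptions, and image path are hypothetical, chosen only to illustrate how shared visual attributes can be expressed in text.

```python
# Minimal sketch, assuming a CLIP-style VLM from Hugging Face transformers.
# Prompts and the input image are hypothetical illustrations.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Default prompts: only the class name.
default_prompts = [
    "a photo of a benign skin lesion",
    "a photo of a malignant skin lesion",
]

# Attribute-rich prompts: describe shape, color, and texture attributes that are
# shared between natural and medical images, so the VLM can ground them.
attribute_prompts = [
    "a dermoscopy image of a benign skin lesion, round, symmetric, "
    "with uniform light-brown color and a smooth border",
    "a dermoscopy image of a malignant skin lesion, asymmetric, "
    "with an irregular border and multiple dark colors",
]

image = Image.open("lesion.jpg")  # hypothetical input image

for prompts in (default_prompts, attribute_prompts):
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # image-text similarity scores
    print(logits.softmax(dim=-1))  # class probabilities under each prompt set
```

Under this setup, the attribute-rich prompts typically yield sharper, better-calibrated predictions than the bare class names, which is the effect the paper attributes to well-designed medical prompts.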