Pre-trained vision-language models (VLMs) have achieved impressive results on a range of vision-language tasks. However, popular VLMs usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and deployment in real-world applications due to space, memory, and latency constraints. In this work, we introduce a distilling-then-pruning framework to compress large vision-language models into smaller, faster, and more accurate ones. We first shrink the size of a pre-trained large VLM and apply knowledge distillation in the vision-language pre-training stage to obtain a task-agnostic compact VLM. We then propose a modal-adaptive pruning algorithm that automatically infers the importance of the vision and language modalities for different downstream tasks and adaptively removes redundant structures and neurons in the different encoders, with a controllable target sparsity. We apply our framework to train EfficientVLM, a fast and accurate vision-language model consisting of 6 vision layers, 3 text layers, and 3 cross-modal fusion layers, totaling only 93 million parameters, 44.3% of the teacher model. EfficientVLM retains 98.4% of the teacher model's performance while accelerating inference by 2.2x. EfficientVLM outperforms previous SoTA efficient VLMs of similar size by a large margin on various vision-language tasks, including VQAv2 (+4.9%), NLVR2 (+5.6%), ITR (R@1: +17.2% on TR, +15.6% on IR), and COCO caption generation (CIDEr +6.5), demonstrating great potential for training lightweight VLMs.
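Below is a minimal, hedged sketch (PyTorch-style Python) of the core idea behind modal-adaptive pruning as described above: prunable units in each modality's encoder are scored for importance, and a single global sparsity budget is applied across modalities, so a modality that matters more for a given downstream task is pruned less. The helper names (`taylor_importance`, `modal_adaptive_prune`) and the first-order Taylor scoring are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of modal-adaptive pruning (assumptions, not the paper's
# exact algorithm). Idea: score prunable units (e.g., FFN neurons) in every
# modality's encoder, then keep the globally top-scoring units so that the
# sparsity budget is allocated adaptively between vision and language.

import torch

def taylor_importance(weight: torch.Tensor, grad: torch.Tensor) -> torch.Tensor:
    """First-order Taylor importance per output neuron: |w * dL/dw| summed over inputs."""
    return (weight * grad).abs().sum(dim=1)

def modal_adaptive_prune(scores_per_modality: dict[str, torch.Tensor],
                         target_sparsity: float) -> dict[str, torch.Tensor]:
    """Return a binary keep-mask per modality under one global sparsity budget.

    Because the keep threshold is computed over the concatenated scores of all
    encoders, a modality whose units matter more for the downstream task is
    pruned less; the overall target sparsity remains controllable.
    """
    all_scores = torch.cat(list(scores_per_modality.values()))
    k = int((1.0 - target_sparsity) * all_scores.numel())  # number of units to keep
    threshold = torch.topk(all_scores, k).values.min()     # k-th largest score
    return {name: (s >= threshold).float() for name, s in scores_per_modality.items()}

# Toy usage: pretend we scored 1024 FFN neurons in the vision encoder and 1024
# in the text encoder after backpropagating a downstream task loss.
torch.manual_seed(0)
vision_w, vision_g = torch.randn(1024, 768), torch.randn(1024, 768)
text_w, text_g = torch.randn(1024, 768), torch.randn(1024, 768)

scores = {
    "vision": taylor_importance(vision_w, vision_g),
    "text": taylor_importance(text_w, text_g),
}
masks = modal_adaptive_prune(scores, target_sparsity=0.5)
for name, m in masks.items():
    print(f"{name}: kept {int(m.sum())} / {m.numel()} neurons")
```

The design choice worth noting is the shared threshold: rather than fixing a per-encoder pruning ratio, ranking all units in one pool lets the task itself decide how much of each modality to keep, which is what makes the pruning "modal-adaptive."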