With the advance of large-scale model technologies, parameter-efficient transfer learning (PETL) has swept across various fields of artificial intelligence. Its core idea is to adapt a model to downstream tasks using only a small number of parameters. Recently, several studies have applied these proven techniques to multimodal tasks. However, two critical issues remain unresolved: how to further reduce complexity with a lightweight design, and how to boost alignment between modalities under an extremely small parameter budget. In this paper, we propose a graceful prompt framework for cross-modal transfer (Aurora) to overcome these challenges. Considering the redundancy in existing architectures, we first use mode approximation to generate a small set of trainable parameters for multimodal prompt tuning, which exploits the low intrinsic dimension with only 0.05% of the parameters of the pre-trained model. Then, to better narrow the modality gap, we propose informative context enhancement and gated query transformation modules for this extremely low-parameter regime. A thorough evaluation of Aurora on six cross-modal downstream benchmarks shows that it not only outperforms the state-of-the-art but even surpasses full fine-tuning. Our code is available at: https://github.com/WillDreamer/Aurora.
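To make the mode-approximation idea concrete, the following is a minimal sketch, not the authors' exact implementation: a CP-style low-rank update in which shared factor matrices are kept frozen and only a small per-rank coefficient vector is trained, so the trainable parameter count stays tiny relative to the frozen backbone. All class, variable, and shape choices here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ModeApproximationDelta(nn.Module):
    """Sketch of a CP-style low-rank weight update (hypothetical API).

    The update to one frozen projection is approximated as
        delta_W = sum_r coeff_r * (u_r outer v_r),
    where the factor matrices U and V are frozen (in the paper they are
    shared across modes/layers) and only `coeff` is trainable.
    """

    def __init__(self, d_out: int, d_in: int, rank: int = 8):
        super().__init__()
        # Frozen CP factors, registered as buffers so they receive no gradients.
        self.register_buffer("U", torch.randn(d_out, rank) / d_out**0.5)
        self.register_buffer("V", torch.randn(d_in, rank) / d_in**0.5)
        # The only trainable parameters: one coefficient per rank component,
        # initialized to zero so the update starts as an identity (no change).
        self.coeff = nn.Parameter(torch.zeros(rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # delta_W = U @ diag(coeff) @ V^T, shape (d_out, d_in)
        delta_w = (self.U * self.coeff) @ self.V.t()
        return x @ delta_w.t()


# Usage sketch: add the low-rank delta on top of a frozen projection.
frozen_proj = nn.Linear(768, 768, bias=False)
frozen_proj.requires_grad_(False)
delta = ModeApproximationDelta(768, 768, rank=8)

x = torch.randn(4, 197, 768)      # e.g. a batch of visual tokens
out = frozen_proj(x) + delta(x)   # frozen path + trainable low-rank path
```

With this construction, only `rank` scalars per adapted projection are optimized, which is how a low intrinsic dimension can be exploited while the pre-trained weights stay untouched.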