In Vision Language Models (VLMs), vision tokens are far more numerous yet less information-dense than language tokens, and thus consume a disproportionate amount of unnecessary computation. Pruning redundant vision tokens for efficient VLM inference has been studied extensively, but existing methods rely on indirect criteria with no guarantee of retaining the most informative tokens. We propose OC-VTP, a direct and guaranteed approach that selects the most representative vision tokens for high-efficiency yet accuracy-preserving VLM inference. OC-VTP requires only lightweight pre-training of a small object-centric vision token pruner, which can then be inserted into existing VLMs without fine-tuning any model on any dataset. Retention of the most representative vision tokens is guaranteed by minimizing the error of reconstructing the original, unpruned tokens from the selected ones. Across all vision pruning ratios, i.e., levels of inference efficiency, OC-VTP consistently helps mainstream VLMs preserve the highest inference accuracy. Our pruning also exhibits interesting interpretability. Our code is available at https://github.com/GarryLarry010131/OC-VTP.
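As a rough, illustrative sketch of the reconstruction-based selection criterion described above (the notation is our own assumption and need not match the paper's exact formulation), let $X \in \mathbb{R}^{N \times d}$ denote the original vision tokens, $S \subset \{1,\dots,N\}$ with $|S| = k$ the indices of the kept tokens, and $g_{\theta}$ a small reconstruction head; the pruner would then target

\[
S^{*} \;=\; \arg\min_{|S| = k} \; \min_{\theta} \; \big\lVert X - g_{\theta}(X_{S}) \big\rVert_{F}^{2},
\]

i.e., the selected tokens are those from which the full, unpruned token set can be reconstructed with the smallest error.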