Multimodal large language models (MLLMs) have demonstrated remarkable capabilities across vision-language tasks, yet their large-scale deployment raises pressing concerns about memorized private data, outdated knowledge, and harmful content. Existing unlearning approaches for MLLMs typically adapt training-based strategies such as gradient ascent or preference optimization, but these methods are computationally expensive, irreversible, and often distort retained knowledge. In this work, we propose MLLMEraser, an input-aware, training-free framework for test-time unlearning. Our approach leverages activation steering to enable dynamic knowledge erasure without parameter updates. Specifically, we construct a multimodal erasure direction by contrasting adversarially perturbed knowledge-recall image-text pairs with their knowledge-erasure counterparts, capturing both textual and visual discrepancies. To prevent unnecessary interference, we further design an input-aware steering mechanism that adaptively determines when and how the erasure direction should be applied, preserving utility on retained knowledge while enforcing forgetting on designated content. Experiments on LLaVA-1.5 and Qwen-2.5-VL demonstrate that MLLMEraser consistently outperforms state-of-the-art MLLM unlearning baselines, achieving stronger forgetting performance with lower computational cost and minimal utility degradation.
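To make the abstract's recipe concrete, the sketch below illustrates the general activation-steering pattern it describes: a difference-of-means erasure direction built by contrasting two sets of activations, applied at test time through a simple gate that stands in for the input-aware steering mechanism. All specifics here are illustrative assumptions rather than the paper's implementation: the function names, the single steering layer, the cosine-similarity gate, and the scaling constants are placeholders, and the hidden states are toy tensors rather than activations extracted from LLaVA-1.5 or Qwen-2.5-VL.

```python
# Minimal sketch of contrastive activation steering for test-time unlearning.
# Assumptions (not from the paper): one steering layer, a difference-of-means
# erasure direction, and a cosine-similarity gate as the input-aware mechanism.
import torch
import torch.nn.functional as F


def build_erasure_direction(recall_acts: torch.Tensor,
                            erase_acts: torch.Tensor) -> torch.Tensor:
    """Contrast knowledge-recall activations with knowledge-erasure ones.

    recall_acts, erase_acts: (num_pairs, hidden_dim) hidden states collected
    at the chosen layer for the two sets of image-text pairs.
    """
    direction = erase_acts.mean(dim=0) - recall_acts.mean(dim=0)
    return F.normalize(direction, dim=-1)  # unit-norm erasure direction


def input_aware_steer(hidden: torch.Tensor,
                      direction: torch.Tensor,
                      forget_prototype: torch.Tensor,
                      alpha: float = 4.0,
                      tau: float = 0.3) -> torch.Tensor:
    """Add the erasure direction only when the input resembles forget-set content.

    hidden: (batch, seq_len, hidden_dim) activations at the steering layer.
    forget_prototype: (hidden_dim,) mean forget-set activation (assumed gate signal).
    """
    pooled = hidden.mean(dim=1)                                            # (batch, hidden_dim)
    gate = F.cosine_similarity(pooled, forget_prototype[None, :], dim=-1)  # (batch,)
    gate = (gate > tau).float() * gate       # leave unrelated inputs untouched
    return hidden + alpha * gate[:, None, None] * direction


if __name__ == "__main__":
    torch.manual_seed(0)
    d = 64
    recall_acts = torch.randn(32, d) + 1.0   # stand-in for knowledge-recall activations
    erase_acts = torch.randn(32, d) - 1.0    # stand-in for knowledge-erasure activations
    direction = build_erasure_direction(recall_acts, erase_acts)
    prototype = erase_acts.mean(dim=0)

    hidden = torch.randn(2, 8, d)            # toy hidden states for two inputs
    steered = input_aware_steer(hidden, direction, prototype)
    print(steered.shape)                     # torch.Size([2, 8, 64])
```

In practice the activations would be captured with forward hooks at one or more transformer layers, and the gate would decide per input whether any steering is applied, which is what keeps retained knowledge intact while enforcing forgetting on designated content.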