Whole Slide Image (WSI) understanding is fundamentally challenging due to the gigapixel scale of each slide and the extreme sparsity of diagnostically relevant regions. Unlike human experts, who primarily rely on key areas to arrive at a diagnosis, existing slide-level multimodal large language models (MLLMs) for pathology rely on heavy slide-level encoders that process thousands of tile features in a brute-force manner, resulting in excessive computational cost. In this work, we revisit the WSI-language modeling paradigm and show that tile-level features exhibit strong global and local redundancy, whereas only a small subset of tiles is truly task-relevant. Motivated by this observation, we introduce an efficient MLLM framework, called LoC-Path, that replaces the expensive slide-level encoder with redundancy-reducing modules. We first design a Sparse Token Merger (STM) and an MAE-pretrained resampler to remove local redundancy and compress globally redundant tile tokens into a compact slide-level representation set. We then propose a Cross-Attention Routing Adapter (CARA) and a Token Importance Scorer (TIS) to integrate the compressed visual representation with the language model in a computation-efficient manner. Extensive experiments demonstrate that our approach achieves performance comparable to existing state-of-the-art whole-slide MLLMs while requiring significantly lower computation and memory.
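To make the overall pipeline concrete, the following is a minimal PyTorch sketch of the three-stage idea described above: prune/merge locally redundant neighbouring tile tokens, compress the remainder into a fixed-size slide-level set with a learned-query cross-attention resampler, and score the compressed tokens so that only the most task-relevant ones are routed into the language model. All module names (`SparseTokenMergerSketch`, `ResamplerSketch`, `TokenImportanceScorerSketch`), dimensions, thresholds, and the top-k routing heuristic are illustrative assumptions, not the authors' implementation of STM, the MAE-pretrained resampler, CARA, or TIS.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseTokenMergerSketch(nn.Module):
    """Illustrative stand-in for an STM-style module: merge runs of
    near-duplicate neighbouring tile tokens by averaging them."""

    def __init__(self, sim_threshold: float = 0.9):
        super().__init__()
        self.sim_threshold = sim_threshold  # assumed value, not from the paper

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        # tiles: (num_tiles, dim) features from a frozen tile-level encoder
        normed = F.normalize(tiles, dim=-1)
        sim = (normed[:-1] * normed[1:]).sum(-1)                  # neighbour cosine similarity
        new_group = torch.cat([torch.tensor([True]), sim < self.sim_threshold])
        group_id = new_group.long().cumsum(0) - 1                 # segment index per tile
        num_groups = int(group_id[-1]) + 1
        summed = torch.zeros(num_groups, tiles.size(1)).index_add_(0, group_id, tiles)
        counts = torch.zeros(num_groups).index_add_(0, group_id, torch.ones(len(tiles)))
        return summed / counts.unsqueeze(-1)                      # mean token per segment


class ResamplerSketch(nn.Module):
    """Illustrative learned-query cross-attention resampler that compresses a
    variable-length tile sequence into a compact, fixed-size slide-level set."""

    def __init__(self, dim: int = 512, num_queries: int = 64, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        # tiles: (num_tiles, dim) -> (num_queries, dim)
        q = self.queries.unsqueeze(0)
        kv = tiles.unsqueeze(0)
        out, _ = self.attn(q, kv, kv)
        return out.squeeze(0)


class TokenImportanceScorerSketch(nn.Module):
    """Illustrative scorer that ranks compressed slide tokens so only the
    top-k most task-relevant ones are handed to the language model."""

    def __init__(self, dim: int = 512, top_k: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.top_k = top_k  # assumed routing budget

    def forward(self, slide_tokens: torch.Tensor) -> torch.Tensor:
        scores = self.score(slide_tokens).squeeze(-1)             # (num_queries,)
        idx = scores.topk(min(self.top_k, slide_tokens.size(0))).indices
        return slide_tokens[idx]


if __name__ == "__main__":
    tiles = torch.randn(3000, 512)            # placeholder tile features for one WSI
    merged = SparseTokenMergerSketch()(tiles)
    compressed = ResamplerSketch()(merged)
    routed = TokenImportanceScorerSketch()(compressed)
    print(merged.shape, compressed.shape, routed.shape)
```

In this sketch the selected tokens would then be injected into the language model, e.g. through cross-attention adapter layers in the spirit of CARA; that integration step is omitted here because its details depend on the backbone LLM.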