Multiple instance learning (MIL) has emerged as a powerful framework for weakly supervised whole slide image (WSI) classification, enabling slide-level predictions without requiring detailed patch-level annotations. However, a key limitation of MIL lies in the underexplored potential of pre-training the MIL aggregator. Most existing approaches train the aggregator from scratch, leaving performance heavily dependent on the number of labeled WSIs and overlooking the abundance of unlabeled WSIs available in real-world scenarios. To address this, we propose PreMix, a novel framework that pre-trains the MIL aggregator with the non-contrastive method Barlow Twins, augmented with a Slide Mixing approach that generates additional positive pairs and enhances feature learning, particularly when labeled WSIs are scarce. Fine-tuning with Mixup and Manifold Mixup further improves robustness by effectively handling the diverse sizes of gigapixel WSIs. Experimental results demonstrate that integrating HIPT into PreMix yields an average F1 improvement of 4.7% over the baseline HIPT across various WSI training datasets and label sizes. These findings underscore PreMix's potential to advance WSI classification with limited labeled data and its applicability to real-world histopathology practice. The code is available at https://anonymous.4open.science/r/PreMix
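To make the two ideas named above concrete, the sketch below (PyTorch) shows a standard Barlow Twins loss together with one plausible reading of the Slide Mixing step: interpolating the patch-feature bags of two slides so the mixed bag can serve as an extra positive pair during aggregator pre-training. All names, shapes, and hyperparameters (`bag_a`, `bag_b`, `alpha`, `lam`) are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    # Standard Barlow Twins objective: decorrelate embedding dimensions
    # while aligning the two views. z1, z2: (batch, dim) embeddings.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / z1.shape[0]                  # cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum() # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy term
    return on_diag + lam * off_diag

def slide_mix(bag_a, bag_b, alpha=1.0):
    # Hypothetical slide-mixing step: linearly interpolate two slides'
    # patch-feature bags (truncated to a shared length) to synthesize a
    # new bag that forms an additional positive pair with each source.
    n = min(bag_a.shape[0], bag_b.shape[0])
    lam_mix = torch.distributions.Beta(alpha, alpha).sample()
    return lam_mix * bag_a[:n] + (1 - lam_mix) * bag_b[:n]
```

The same Beta-sampled interpolation, applied to inputs (Mixup) or to intermediate aggregator features (Manifold Mixup), is one way the fine-tuning stage described above could be realized.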