Histopathology whole slide images (WSIs) can exhibit significant inter-hospital variability, such as differences in illumination, color, or optical artifacts. These variations, caused by the use of different scanning protocols across medical centers (staining, scanner), can strongly harm the generalization of algorithms to unseen protocols. This motivates the development of new methods to limit such performance drops. In this paper, to enhance robustness on unseen target protocols, we propose a new test-time data augmentation based on multi-domain image-to-image translation. It projects images from the unseen protocol into each source domain before classifying them and ensembling the predictions. This test-time augmentation method yields a significant performance boost for domain generalization. To demonstrate its effectiveness, our method has been evaluated on two different histopathology tasks, where it outperforms conventional domain generalization, standard H&E-specific color augmentation/normalization, and standard test-time augmentation techniques. Our code is publicly available at https://gitlab.com/vitadx/articles/test-time-i2i-translation-ensembling.
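The ensembling step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `translators` stands for the learned image-to-image translation models (one per source domain), and `classifier` for the trained task model; all names and the toy inputs are hypothetical.

```python
import numpy as np

def tta_i2i_ensemble(image, translators, classifier):
    """Test-time augmentation via multi-domain image-to-image translation:
    project the test image into each source domain, classify each
    translated version, and average the class probabilities."""
    probs = np.stack([classifier(t(image)) for t in translators])
    return probs.mean(axis=0)

# Toy stand-ins (hypothetical): two "domain translators" that rescale
# intensities, and a dummy two-class probability "classifier".
translators = [lambda x: x * 0.9, lambda x: x * 1.1]
classifier = lambda x: np.array([x.mean(), 1.0 - x.mean()])

image = np.full((4, 4), 0.5)          # a flat gray toy "patch"
pred = tta_i2i_ensemble(image, translators, classifier)
```

Averaging the per-domain predictions means no single source domain's appearance dominates the final decision for an out-of-distribution input.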