Deep-learning-based medical image classification models often suffer from the domain shift problem: classification performance drops when the training data and real-world data differ in imaging equipment manufacturer, image acquisition protocol, patient population, and so on. We propose Feature Centroid Contrast Learning (FCCL), which improves target-domain classification performance by adding extra supervision during training in the form of a contrastive loss between instance features and class centroids. Compared with current unsupervised domain adaptation and domain generalization methods, FCCL performs better while requiring only labeled image data from a single source domain and no target-domain data. We verify through extensive experiments that FCCL achieves superior performance on at least three imaging modalities, i.e., fundus photographs, dermatoscopic images, and H&E tissue images.
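The abstract does not spell out the form of the instance-to-centroid contrastive loss, so the following is only a minimal sketch of one plausible instantiation: an InfoNCE-style cross-entropy in which each instance embedding treats its own class centroid as the positive and the centroids of all other classes as negatives. The function name, the temperature value, and the use of cosine similarity are all assumptions, not details taken from the paper.

```python
import numpy as np

def centroid_contrastive_loss(features, labels, centroids, temperature=0.1):
    """Hypothetical instance-vs-centroid contrastive loss (InfoNCE style):
    each instance is pulled toward its own class centroid and pushed away
    from the centroids of the other classes."""
    # L2-normalize so the dot product below is cosine similarity.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)    # (B, D)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)  # (C, D)
    # Similarity of every instance to every class centroid.
    logits = f @ c.T / temperature                                    # (B, C)
    # Numerically stable log-softmax over centroids, then pick the
    # log-probability of each instance's own class centroid.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy check: embeddings that coincide with their class centroid should
# incur a much lower loss than random embeddings.
rng = np.random.default_rng(0)
centroids = rng.normal(size=(3, 16))
labels = rng.integers(0, 3, size=8)
random_feats = rng.normal(size=(8, 16))
aligned_feats = centroids[labels]
loss_random = centroid_contrastive_loss(random_feats, labels, centroids)
loss_aligned = centroid_contrastive_loss(aligned_feats, labels, centroids)
```

In practice such a loss would be added to the usual classification objective, with class centroids maintained as running (e.g. exponential-moving-average) means of the per-class features during training; those details are likewise assumptions here.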