Since data scarcity and data heterogeneity are prevalent in medical imaging, well-trained Convolutional Neural Networks (CNNs) using previous normalization methods may perform poorly when deployed to a new site. However, a reliable model for real-world applications should generalize well both on in-distribution (IND) and out-of-distribution (OOD) data (e.g., data from a new site). In this study, we present a novel normalization technique called window normalization (WIN), a simple yet effective alternative to existing normalization methods. Specifically, WIN perturbs the normalizing statistics with the local statistics computed on a window of features. This feature-level augmentation technique regularizes the models well and improves their OOD generalization significantly. Leveraging this advantage, we propose a novel self-distillation method called WIN-WIN to further improve OOD generalization in classification. WIN-WIN is easily implemented with two forward passes and a consistency constraint, and can serve as a simple extension to existing methods. Extensive experimental results on various tasks (such as glaucoma detection, breast cancer detection, chromosome classification, and optic disc and cup segmentation) across 26 datasets demonstrate the generality and effectiveness of our methods. The code is available at https://github.com/joe1chief/windowNormalizaion.
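To make the core idea concrete, the following is a minimal sketch of window normalization as described above: the normalizing statistics are perturbed with local statistics computed over a random spatial window of the feature map. The window size, the mixing coefficient, and the inference-time behaviour here are illustrative assumptions; the authors' exact formulation is given in the paper and the linked repository.

```python
# Illustrative sketch only: window size, mixing weight, and eval behaviour are assumptions.
import torch
import torch.nn as nn


class WindowNorm2d(nn.Module):
    def __init__(self, num_features, window_frac=0.5, mix=0.5, eps=1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_features))   # affine scale
        self.bias = nn.Parameter(torch.zeros(num_features))    # affine shift
        self.window_frac = window_frac  # assumed relative window size
        self.mix = mix                  # assumed interpolation weight
        self.eps = eps

    def forward(self, x):
        # Statistics over the full feature map (per instance, per channel).
        g_mean = x.mean(dim=(2, 3), keepdim=True)
        g_var = x.var(dim=(2, 3), keepdim=True, unbiased=False)

        if self.training:
            # Sample a random spatial window and compute its local statistics.
            n, c, h, w = x.shape
            wh = max(1, int(h * self.window_frac))
            ww = max(1, int(w * self.window_frac))
            top = torch.randint(0, h - wh + 1, (1,)).item()
            left = torch.randint(0, w - ww + 1, (1,)).item()
            win = x[:, :, top:top + wh, left:left + ww]
            l_mean = win.mean(dim=(2, 3), keepdim=True)
            l_var = win.var(dim=(2, 3), keepdim=True, unbiased=False)

            # Perturb the normalizing statistics with the window statistics.
            mean = self.mix * g_mean + (1 - self.mix) * l_mean
            var = self.mix * g_var + (1 - self.mix) * l_var
        else:
            mean, var = g_mean, g_var

        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return x_hat * self.weight[None, :, None, None] + self.bias[None, :, None, None]
```

Under the same reading of the abstract, WIN-WIN would then run two forward passes of the same input (each seeing a different random window perturbation) and add a consistency loss between the two predictions as a self-distillation signal; the precise loss used is defined in the paper.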