The shortage of annotated medical images is one of the biggest challenges in medical image computing. Without a sufficient number of training samples, deep learning based models are prone to over-fitting. The common remedy is image manipulation such as rotation, cropping, or resizing. These methods help alleviate over-fitting by introducing more training samples. However, they do not truly introduce new images carrying additional information, and they may cause data leakage, since the test set may contain samples similar to those in the training set. To address this challenge, we propose to generate diverse images with a generative adversarial network. In this paper, we develop a novel generative method named generative adversarial U-Net, which combines a generative adversarial network with U-Net. Unlike existing approaches, our newly designed model is domain-free and generalizes to various medical imaging modalities. Extensive experiments are conducted on eight diverse datasets, including computed tomography (CT) scans, pathology, X-ray, etc. The visualization and quantitative results demonstrate the efficacy and strong generalization of the proposed method in generating a wide array of high-quality medical images.
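The conventional augmentations mentioned above (rotation, cropping, resizing) can be sketched in a few lines of NumPy; the `augment` helper and its parameters below are illustrative assumptions, not part of the proposed method:

```python
import numpy as np

def augment(image, seed=0):
    """Produce simple augmented variants of a 2D grayscale image:
    a 90-degree rotation, a random crop, and a nearest-neighbour
    resize back to the original shape. Illustrative helper only."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rotated = np.rot90(image)                    # 90-degree rotation
    # Random crop to 3/4 of each side.
    ch, cw = (3 * h) // 4, (3 * w) // 4
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    cropped = image[top:top + ch, left:left + cw]
    # Nearest-neighbour resize of the crop back to (h, w).
    rows = np.arange(h) * ch // h
    cols = np.arange(w) * cw // w
    resized = cropped[np.ix_(rows, cols)]
    return rotated, cropped, resized

img = np.arange(64, dtype=np.float32).reshape(8, 8)
rot, crop, res = augment(img)
print(rot.shape, crop.shape, res.shape)  # (8, 8) (6, 6) (8, 8)
```

Note that every output here is a deterministic function of the single input image, which is why such transforms add no genuinely new information and can leak near-duplicates into the test set.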