In medical image analysis, low-resolution images degrade the performance of image interpretation and may lead to misdiagnosis. Single image super-resolution (SISR) methods can improve the resolution and quality of medical images. Currently, super-resolution models based on Generative Adversarial Networks (GANs) show very strong performance. The Real-Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) is one of the practical GAN-based models and is widely used for general image super-resolution. One challenge in medical image super-resolution is that, unlike natural images, medical images generally lack high spatial resolution. To address this, we can apply transfer learning and fine-tune a model that has been trained on external datasets (often natural-image datasets). In our proposed approach, the pre-trained generator and discriminator networks of Real-ESRGAN are fine-tuned on medical image datasets. In this paper, we work with chest X-ray and retinal images, using the STARE retinal image dataset and the Tuberculosis Chest X-rays (Shenzhen) dataset for fine-tuning. The proposed model produces more accurate and natural textures, and its outputs show better detail and resolution than those of the original Real-ESRGAN.
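The sketch below illustrates the fine-tuning step described above: loading the pre-trained Real-ESRGAN generator (the RRDBNet architecture from the basicsr package) and continuing training on medical image patches. The weight-file name, the `MedicalSRDataset` helper, the pixel-only loss, and all hyperparameters are illustrative assumptions rather than the paper's actual training configuration, which also involves the discriminator and adversarial/perceptual losses.

```python
# Minimal fine-tuning sketch (PyTorch). Paths, the MedicalSRDataset helper, and
# hyperparameters are assumptions for illustration, not values from the paper.
import glob
import torch
from torch import nn, optim
from torch.nn.functional import interpolate
from torch.utils.data import Dataset, DataLoader
from torchvision.io import read_image, ImageReadMode
from basicsr.archs.rrdbnet_arch import RRDBNet  # generator architecture used by Real-ESRGAN


class MedicalSRDataset(Dataset):
    """Hypothetical dataset: yields (low-res, high-res) pairs by bicubic-downsampling
    high-resolution medical images (e.g. STARE retinal images or Shenzhen chest X-rays)."""

    def __init__(self, pattern, scale=4):
        self.paths = sorted(glob.glob(pattern))
        self.scale = scale

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        hr = read_image(self.paths[i], mode=ImageReadMode.RGB).float() / 255.0  # (3, H, W) in [0, 1]
        lr = interpolate(hr.unsqueeze(0), scale_factor=1 / self.scale,
                         mode='bicubic', antialias=True).squeeze(0)
        return lr, hr


# Load the pre-trained Real-ESRGAN x4 generator and fine-tune it on medical images.
generator = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                    num_block=23, num_grow_ch=32, scale=4)
state = torch.load('RealESRGAN_x4plus.pth')            # assumed local copy of the released weights
generator.load_state_dict(state['params_ema'], strict=True)
generator.train()

# batch_size=1 avoids collating images of different sizes; fixed-size crops would be used in practice.
loader = DataLoader(MedicalSRDataset('data/medical_hr/*.png'), batch_size=1, shuffle=True)
pixel_loss = nn.L1Loss()
optimizer = optim.Adam(generator.parameters(), lr=1e-4)

for epoch in range(10):                                 # assumed schedule
    for lr_img, hr_img in loader:
        sr_img = generator(lr_img)
        loss = pixel_loss(sr_img, hr_img)               # adversarial and perceptual terms omitted for brevity
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In the full approach, the discriminator would be fine-tuned alongside the generator with the usual Real-ESRGAN loss combination; the pixel-only loop above is kept short to show the transfer-learning idea rather than the complete training procedure.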