Three-dimensional (3D) imaging is widely used in medicine, as its complete anatomical coverage enables diagnosis and disease monitoring. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are commonly used; however, anisotropic volumes with thick slices are often acquired to reduce scan times. Deep learning (DL) can recover high-resolution features in the low-resolution dimension through super-resolution reconstruction (SRR). However, this often relies on paired training data, which is unavailable in many medical applications. We describe a novel approach that requires only native anisotropic 3D medical images for training. This method relies on the observation that small 2D patches extracted from a 3D volume contain similar visual features, irrespective of their orientation. It is therefore possible to leverage disjoint patches from the high-resolution plane to learn SRR in the low-resolution plane. Our proposed unpaired approach uses a modified CycleGAN architecture with a cycle-consistent gradient mapping loss: Cycle Loss Augmented Degradation Enhancement (CLADE). We show the feasibility of CLADE in an exemplar application: anisotropic 3D abdominal MRI data. We demonstrate superior quantitative image quality with CLADE over supervised learning and conventional CycleGAN architectures. CLADE also shows improvements over the native anisotropic volumes in qualitative image ranking and in quantitative edge sharpness and signal-to-noise ratio. This paper demonstrates the potential of CLADE for super-resolution reconstruction of anisotropic 3D medical imaging data without the need for paired training data.
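To illustrate the cycle-consistent gradient mapping loss mentioned above, below is a minimal PyTorch sketch, not the authors' implementation: the function names, the finite-difference gradient operator, and the weighting term `lambda_grad` in the usage comment are all assumptions for illustration.

```python
import torch

def gradient_map(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # Finite-difference image gradients of a batch of 2D patches (B, C, H, W).
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]  # vertical gradient
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]  # horizontal gradient
    return dy, dx

def gradient_mapping_loss(cycled: torch.Tensor, original: torch.Tensor) -> torch.Tensor:
    # L1 distance between the gradient maps of the cycle-reconstructed patch
    # and the original patch, penalizing loss of edge sharpness through the cycle.
    gy_c, gx_c = gradient_map(cycled)
    gy_o, gx_o = gradient_map(original)
    return (gy_c - gy_o).abs().mean() + (gx_c - gx_o).abs().mean()

# Hypothetical usage inside a CycleGAN training step (names assumed):
#   cycled_lr = G_hr2lr(G_lr2hr(lr_patch))
#   loss = cycle_l1(cycled_lr, lr_patch) \
#          + lambda_grad * gradient_mapping_loss(cycled_lr, lr_patch)
```

The intent of such a term is to complement the standard pixel-wise cycle-consistency loss with an explicit constraint on image gradients, so that edges survive the degradation-enhancement cycle.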