Three-dimensional (3D) imaging is popular in medical applications; however, anisotropic 3D volumes with thick, low-spatial-resolution slices are often acquired to reduce scan times. Deep learning (DL) offers a solution for recovering high-resolution features through super-resolution reconstruction (SRR). Unfortunately, paired training data are unavailable in many 3D medical applications, and we therefore propose a novel unpaired approach: CLADE (Cycle Loss Augmented Degradation Enhancement). CLADE uses a modified CycleGAN architecture with a cycle-consistent gradient mapping loss to learn SRR of the low-resolution dimension from disjoint patches of the high-resolution plane within the anisotropic 3D volume data itself. We show the feasibility of CLADE in abdominal MRI and abdominal CT and demonstrate significant improvements in CLADE image quality over the low-resolution volumes and a state-of-the-art self-supervised SRR method, SMORE (Synthetic Multi-Orientation Resolution Enhancement). Quantitative perception-based image quality evaluator (PIQUE) scores and quantitative edge sharpness (ES, calculated as the maximum gradient of pixel intensities across a border of interest) showed superior performance for CLADE in both MRI and CT. Qualitatively, CLADE had the best overall image quality and the highest perceptual ES compared with the low-resolution volumes and SMORE. This paper demonstrates the potential of CLADE for super-resolution reconstruction of anisotropic 3D medical imaging data without the need for paired 3D training data.
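The cycle-consistent gradient mapping loss is only named in the abstract, so the sketch below is one plausible form rather than the paper's actual implementation: it assumes PyTorch, approximates image gradients with simple finite differences, and adds an L1 penalty between the gradient maps of an original high-resolution patch and its cycle reconstruction on top of the standard CycleGAN cycle-consistency term. The names `gradient_map`, `cycle_gradient_loss`, and `lambda_grad` are illustrative and not taken from the paper.

```python
# Hypothetical sketch of a cycle-consistent gradient-mapping loss (not the
# published CLADE code): pixel-wise cycle consistency plus agreement of
# finite-difference gradient magnitudes between the real patch and its
# cycle-reconstructed counterpart.
import torch
import torch.nn.functional as F


def gradient_map(x: torch.Tensor) -> torch.Tensor:
    """Finite-difference gradient magnitude of a batch of 2D patches (N, C, H, W)."""
    dx = x[..., :, 1:] - x[..., :, :-1]   # horizontal differences
    dy = x[..., 1:, :] - x[..., :-1, :]   # vertical differences
    dx = F.pad(dx, (0, 1, 0, 0))          # pad width back to input size
    dy = F.pad(dy, (0, 0, 0, 1))          # pad height back to input size
    return torch.sqrt(dx ** 2 + dy ** 2 + 1e-8)


def cycle_gradient_loss(real: torch.Tensor,
                        cycled: torch.Tensor,
                        lambda_grad: float = 1.0) -> torch.Tensor:
    """L1 cycle-consistency term plus an L1 penalty on the gradient maps."""
    pixel_term = F.l1_loss(cycled, real)
    grad_term = F.l1_loss(gradient_map(cycled), gradient_map(real))
    return pixel_term + lambda_grad * grad_term
```

In a CycleGAN-style training loop, `cycled` would be the output of passing a high-resolution patch through the degradation generator and back through the enhancement generator; the weighting `lambda_grad` is an assumed hyperparameter, not a value reported by the authors.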