Low-Dose Computed Tomography (LDCT) is widely used as an imaging solution to aid diagnosis and other clinical tasks. However, this comes at the price of degraded image quality, since the radiation dose is lowered to reduce the risk of secondary cancer development. While several effective methods have been proposed to enhance LDCT quality, many overestimate noise and apply excessive smoothing, leading to a loss of critical details. In this paper, we introduce D-PerceptCT, a novel architecture inspired by key principles of the Human Visual System (HVS) for LDCT enhancement. The objective is to guide the model to enhance or preserve perceptually relevant features, thereby providing radiologists with CT images in which critical anatomical structures and fine pathological details remain perceptually visible. D-PerceptCT consists of two main blocks: (1) a Visual Dual-path Extractor (ViDex), which integrates semantic priors from a pretrained DINOv2 model with local spatial features, allowing the network to incorporate semantic awareness during enhancement; and (2) a Global-Local State-Space block, which captures long-range dependencies and multiscale features to preserve the structures and fine details important for diagnosis. In addition, we propose a novel deep perceptual loss, designated the Deep Perceptual Relevancy Loss Function (DPRLF), which is inspired by human contrast sensitivity and further emphasizes perceptually important features. Extensive experiments on the Mayo2016 dataset demonstrate the effectiveness of D-PerceptCT for LDCT enhancement, showing better preservation of structural and textural information in LDCT images than state-of-the-art (SOTA) methods.
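To make the high-level pipeline described above concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation. All class and function names (SemanticBranch, LocalBranch, GlobalLocalBlock, DPerceptCTSketch, dprlf_loss) are illustrative assumptions: the semantic branch is a frozen stand-in for DINOv2 features, the global-local block is a toy gated substitute for the actual state-space block, and dprlf_loss is only a placeholder for a contrast-sensitivity-weighted deep perceptual loss.

```python
# Hypothetical sketch of the abstract's structure, assuming a PyTorch setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticBranch(nn.Module):
    """Stand-in for a frozen DINOv2 backbone providing semantic priors."""
    def __init__(self, dim=64):
        super().__init__()
        # In practice this would wrap frozen DINOv2 features; here a strided
        # conv mimics a coarse, semantically oriented representation.
        self.proj = nn.Conv2d(1, dim, kernel_size=8, stride=8)
        for p in self.parameters():
            p.requires_grad_(False)  # semantic priors are not fine-tuned

    def forward(self, x):
        feat = self.proj(x)
        return F.interpolate(feat, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)


class LocalBranch(nn.Module):
    """Shallow conv stack capturing local spatial detail."""
    def __init__(self, dim=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class GlobalLocalBlock(nn.Module):
    """Toy substitute for the Global-Local State-Space block: a gated mix of
    a large-kernel (long-range) path and a 3x3 (local) path."""
    def __init__(self, dim=128):
        super().__init__()
        self.global_path = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
        self.local_path = nn.Conv2d(dim, dim, 3, padding=1)
        self.gate = nn.Conv2d(dim, dim, 1)

    def forward(self, f):
        g = torch.sigmoid(self.gate(f))
        return f + g * self.global_path(f) + (1 - g) * self.local_path(f)


class DPerceptCTSketch(nn.Module):
    """Dual-path extraction -> global-local refinement -> residual output."""
    def __init__(self, dim=64):
        super().__init__()
        self.semantic = SemanticBranch(dim)
        self.local = LocalBranch(dim)
        self.refine = GlobalLocalBlock(2 * dim)
        self.head = nn.Conv2d(2 * dim, 1, 3, padding=1)

    def forward(self, ldct):
        feats = torch.cat([self.semantic(ldct), self.local(ldct)], dim=1)
        return ldct + self.head(self.refine(feats))  # residual correction


def dprlf_loss(pred, target, feat_net, band_weights=(1.0, 0.5)):
    """Placeholder DPRLF: deep-feature L1 terms re-weighted to mimic
    contrast-sensitivity emphasis on perceptually relevant scales."""
    loss = 0.0
    fp, ft = pred, target
    for w, layer in zip(band_weights, feat_net):
        fp, ft = layer(fp), layer(ft)
        loss = loss + w * F.l1_loss(fp, ft)
    return loss


if __name__ == "__main__":
    model = DPerceptCTSketch()
    x = torch.randn(1, 1, 64, 64)  # toy single-channel LDCT slice
    y = model(x)
    feat_net = nn.ModuleList([nn.Conv2d(1, 8, 3, padding=1),
                              nn.Conv2d(8, 8, 3, padding=1)])
    print(y.shape, dprlf_loss(y, x, feat_net).item())
```

The design choice worth noting is the residual formulation: the network predicts a correction added to the LDCT input, which makes it easier to preserve existing anatomical structure while suppressing noise, consistent with the abstract's emphasis on retaining perceptually relevant detail.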