Subject motion in whole-body dynamic PET introduces inter-frame mismatch and seriously degrades parametric imaging. Traditional non-rigid registration methods are generally computationally intensive and time-consuming. Deep learning approaches are promising for achieving high accuracy at fast speed, but have yet to be investigated with consideration for tracer distribution changes or in the whole-body scope. In this work, we developed an unsupervised, automatic, deep learning-based framework to correct inter-frame body motion. The motion estimation network is a convolutional neural network with a combined convolutional long short-term memory layer, fully utilizing dynamic temporal features and spatial information. Our dataset contains 27 subjects, each undergoing a 90-min FDG whole-body dynamic PET scan. With 9-fold cross-validation, compared with both traditional and deep learning baselines, we demonstrated that the proposed network achieved superior performance, with enhanced qualitative and quantitative spatial alignment between parametric $K_{i}$ and $V_{b}$ images and significantly reduced parametric fitting error. We also showed the potential of the proposed motion correction method to benefit downstream analysis of the estimated parametric images, improving the ability to distinguish malignant from benign hypermetabolic regions of interest. Once trained, the motion estimation inference of our proposed network was around 460 times faster than the conventional registration baseline, showing its potential to be easily applied in clinical settings.
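The core architectural idea named above, a convolutional LSTM layer that propagates spatial feature maps across dynamic frames, can be illustrated with a minimal sketch. This is not the authors' implementation; the cell below is a generic ConvLSTM in PyTorch, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: the four LSTM gates are computed
    with 2-D convolutions, so the hidden state keeps a spatial layout
    (useful for per-frame motion features in dynamic imaging)."""

    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates (i, f, g, o) at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Run a short frame sequence through the cell (random stand-ins for
# dynamic PET frames); the hidden state accumulates temporal context.
T, B, C, H, W = 5, 1, 1, 32, 32  # frames, batch, channels, height, width
cell = ConvLSTMCell(in_ch=C, hid_ch=8)
h = torch.zeros(B, 8, H, W)
c = torch.zeros(B, 8, H, W)
for _ in range(T):
    frame = torch.randn(B, C, H, W)
    h, c = cell(frame, (h, c))
print(tuple(h.shape))  # (1, 8, 32, 32)
```

In a motion-estimation network, the recurrent hidden state `h` would feed subsequent convolutional layers that regress a deformation field per frame, letting the estimate for each frame condition on earlier frames despite tracer distribution changes.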