This paper proposes a novel, fast, self-supervised solution for sparse-view CBCT (cone-beam computed tomography) reconstruction that requires no external training data. Specifically, the desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully connected deep neural network. We synthesize projections discretely and train the network by minimizing the error between real and synthesized projections. A learning-based encoder built on hash coding is adopted to help the network capture high-frequency details. This encoder outperforms the commonly used frequency-domain encoder in both accuracy and efficiency because it exploits the smoothness and sparsity of human organs. Experiments on both human-organ and phantom datasets show that the proposed method achieves state-of-the-art accuracy with reasonably short computation time.
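As a minimal sketch (not the paper's implementation), the discrete projection synthesis described above can be viewed as a numerical line integral of the attenuation field along each X-ray: sample points along the ray, query the learned attenuation function at those points, and sum the values weighted by the step size. The function names and midpoint sampling scheme below are illustrative assumptions; in the actual method, `mu` would be the coordinate MLP with the hash encoder, and the result would be compared against the measured projection in the loss.

```python
import numpy as np

def synthesize_projection(mu, origin, direction, near, far, n_samples=128):
    """Approximate the line integral of attenuation field `mu` along a ray.

    mu        : callable mapping an (N, 3) array of 3D points to (N,) values
                (stand-in for the coordinate MLP; an assumption for this sketch)
    origin    : (3,) ray origin (X-ray source position)
    direction : (3,) unit ray direction toward the detector pixel
    near, far : scalar depths bounding the reconstruction volume
    """
    dt = (far - near) / n_samples
    # Midpoint sampling of depths along the ray.
    ts = near + (np.arange(n_samples) + 0.5) * dt
    pts = origin[None, :] + ts[:, None] * direction[None, :]
    # Discrete line integral: sum of attenuation samples times step size.
    return np.sum(mu(pts)) * dt

# Sanity check: a constant field of 1.0 over a segment of length 2.0
# integrates to 2.0.
proj = synthesize_projection(
    mu=lambda p: np.ones(len(p)),
    origin=np.zeros(3),
    direction=np.array([1.0, 0.0, 0.0]),
    near=0.0,
    far=2.0,
)
```

Training then reduces to minimizing the squared error between such synthesized values and the real projections, which is what makes the scheme self-supervised: no ground-truth volume is ever needed.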