Learned iterative reconstruction algorithms for inverse problems offer the flexibility to combine analytical knowledge about the problem with modules learned from data. This way, they achieve high reconstruction performance while ensuring consistency with the measured data. In computed tomography, extending such approaches from 2D fan-beam to 3D cone-beam data is challenging because of the prohibitively high GPU memory required to train such models. This paper proposes to use neural ordinary differential equations to solve the reconstruction problem in a residual formulation via numerical integration. For training, there is no need to backpropagate through several unrolled network blocks nor through the internals of the solver. Instead, the gradients are obtained very memory-efficiently in the neural ODE setting, allowing for training on a single consumer graphics card. The method reduces the root mean squared error by over 30% compared to the best-performing classical iterative reconstruction algorithm and produces high-quality cone-beam reconstructions even in a sparse-view scenario.
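The memory savings claimed above rest on the neural ODE adjoint trick: instead of storing every solver step for backpropagation, the state is re-integrated backward in time alongside an adjoint variable, so memory stays constant in the number of steps. The following is a minimal illustrative sketch of that idea on a toy scalar ODE, not the paper's reconstruction model; the function names and the choice of a linear dynamics `dx/dt = theta * x` are assumptions made purely for demonstration.

```python
# Illustrative sketch (NOT the paper's implementation): adjoint-method
# gradients for an ODE-based forward map, with no stored trajectory.
# Toy dynamics dx/dt = theta * x are chosen only for simplicity.

def forward(theta, x0=1.0, T=1.0, n=1000):
    """Explicit-Euler solve of dx/dt = theta * x; returns x(T)."""
    h = T / n
    x = x0
    for _ in range(n):
        x = x + h * theta * x
    return x

def loss(theta, target=2.0):
    """Scalar squared-error loss on the final state."""
    r = forward(theta) - target
    return 0.5 * r * r

def adjoint_grad(theta, x0=1.0, T=1.0, n=1000, target=2.0):
    """dL/dtheta via the adjoint ODE; memory is O(1) in n.

    Adjoint dynamics:  da/dt = -a * df/dx = -a * theta
    Gradient integral: dL/dtheta = integral over t of a(t) * df/dtheta
    The state x is re-integrated backward from x(T) rather than stored.
    """
    h = T / n
    x = forward(theta, x0, T, n)    # only the final state is kept
    a = x - target                  # a(T) = dL/dx(T)
    g = 0.0
    for _ in range(n):
        g += h * a * x              # accumulate the gradient integral
        a = a + h * theta * a       # step the adjoint backward in time
        x = x - h * theta * x       # re-integrate the state backward
    return g

theta = 0.5
g_adj = adjoint_grad(theta)
eps = 1e-5
g_fd = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)  # reference
print(abs(g_adj - g_fd) / abs(g_fd))  # small discretization-level gap
```

The backward re-integration introduces only an O(h) discretization error relative to exact backpropagation through the solver, which is why libraries such as torchdiffeq expose this as their default memory-efficient training mode.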