Purpose: This paper presents a method for real-time 2D-3D non-rigid registration using a single fluoroscopic image. Such a method can find applications in surgery, interventional radiology, and radiotherapy. By estimating a three-dimensional displacement field from a single 2D X-ray image, anatomical structures segmented in the preoperative scan can be projected onto the 2D image, providing a mixed-reality view.
Methods: A dataset composed of displacement fields and 2D projections of the anatomy is generated from the preoperative scan. From this dataset, a neural network is trained to recover the unknown 3D displacement field from a single projection image.
Results: Our method is validated on lung 4D CT data at different stages of lung deformation. Training is performed on a 3D CT using random (non-domain-specific) diffeomorphic deformations, to which perturbations mimicking pose uncertainty are added. The model achieves a mean target registration error (TRE), computed over a set of landmarks, of 2.3 to 5.5 mm depending on the amplitude of deformation.
Conclusion: We present a CNN-based method for real-time 2D-3D non-rigid registration. The method is able to cope with pose estimation uncertainties, making it applicable to actual clinical scenarios, such as lung surgery, where the C-arm pose is planned before the intervention.
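The abstract does not give implementation details, but the core Methods idea (a network that maps a single 2D projection to a 3D displacement field, trained on simulated projections of the deformed preoperative CT) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the architecture, the names ProjectionToDisplacementNet, IMG_SIZE, and GRID, and the tensor sizes are hypothetical and do not reproduce the authors' actual network; the generation of training projections (e.g., as digitally reconstructed radiographs of the deformed CT) is replaced by random stand-in tensors.

```python
# Minimal sketch of the 2D-projection -> 3D-displacement-field regression
# described in the abstract. All names, sizes, and layers are illustrative
# assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

IMG_SIZE = 128   # assumed size of the 2D fluoroscopic projection
GRID = 16        # assumed resolution of the coarse 3D displacement grid

class ProjectionToDisplacementNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 2D encoder: compress the single projection image into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        # Map the image features to a coarse 3D displacement grid with
        # 3 channels (one per displacement component x, y, z).
        self.to_volume = nn.Linear(64 * 16 * 16, 3 * GRID ** 3)

    def forward(self, projection):
        # projection: (batch, 1, IMG_SIZE, IMG_SIZE) X-ray-like image
        feats = self.encoder(projection)
        field = self.to_volume(feats)
        # (batch, 3, GRID, GRID, GRID): coarse displacement field that could be
        # upsampled and applied to the preoperative CT / segmented structures.
        return field.view(-1, 3, GRID, GRID, GRID)

if __name__ == "__main__":
    net = ProjectionToDisplacementNet()
    # Stand-ins for simulated projections and their known displacement fields.
    drr = torch.randn(2, 1, IMG_SIZE, IMG_SIZE)
    gt_field = torch.randn(2, 3, GRID, GRID, GRID)
    loss = nn.functional.mse_loss(net(drr), gt_field)  # supervised field regression
    loss.backward()
    print(loss.item())
```

In a full pipeline along the lines of the abstract, the training pairs would come from applying random diffeomorphic deformations (with added pose perturbations) to the preoperative CT and projecting the deformed volumes to 2D; the sketch above only shows the supervised regression step.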