Deep learning-based 2D/3D registration enables fast, robust, and accurate X-ray to CT image fusion when large annotated paired datasets are available for training. However, the need for paired CT volumes and X-ray images with ground-truth registration limits its applicability in interventional scenarios. An alternative is to use simulated X-ray projections rendered from CT volumes, thus removing the need for paired annotated datasets. Deep neural networks trained exclusively on simulated X-ray projections can perform significantly worse on real X-ray images due to the domain gap. We propose a self-supervised 2D/3D registration framework that combines simulated training with unsupervised feature- and pixel-space domain adaptation to overcome the domain gap and eliminate the need for paired annotated datasets. Our framework achieves a registration accuracy of 1.83$\pm$1.16 mm with a high success ratio of 90.1% on real X-ray images, a 23.9% increase in success ratio over reference annotation-free algorithms.
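As a minimal sketch of the simulated-projection idea referenced above (not the paper's actual rendering pipeline), the snippet below forms a toy digitally reconstructed radiograph (DRR) from a CT volume by integrating attenuation along parallel rays via the Beer-Lambert law. The parallel-beam geometry, the helper names `hu_to_attenuation` and `simulate_drr`, and the attenuation constant for water are illustrative assumptions; practical DRR renderers typically use cone-beam ray casting matched to the C-arm geometry.

```python
# Minimal sketch: simulating an X-ray projection (DRR) from a CT volume.
# Assumptions (not from the paper): parallel-beam geometry, numpy only,
# hypothetical helper names, nominal mu_water = 0.2 / cm.
import numpy as np


def hu_to_attenuation(ct_hu: np.ndarray, mu_water: float = 0.2) -> np.ndarray:
    """Convert Hounsfield units to linear attenuation coefficients (1/cm)."""
    return np.clip(mu_water * (1.0 + ct_hu / 1000.0), 0.0, None)


def simulate_drr(ct_hu: np.ndarray, voxel_size_cm: float = 0.1) -> np.ndarray:
    """Integrate attenuation along axis 0 to form a 2D projection image."""
    mu = hu_to_attenuation(ct_hu)
    line_integral = mu.sum(axis=0) * voxel_size_cm  # integral of mu per ray
    return np.exp(-line_integral)                   # transmitted intensity in (0, 1]


if __name__ == "__main__":
    # Toy CT volume: air background (-1000 HU) with a denser cube inside.
    ct = np.full((64, 64, 64), -1000.0)
    ct[20:44, 20:44, 20:44] = 300.0
    drr = simulate_drr(ct)
    print(drr.shape, float(drr.min()), float(drr.max()))  # (64, 64) projection
```

Training a registration network on such rendered projections provides ground-truth poses for free, which is what makes the annotation-free setup possible; the domain adaptation components then bridge the residual appearance gap to real X-ray images.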