The purpose of this work is to contribute to the state of the art of deep-learning methods for diffeomorphic registration. We propose an adversarial learning LDDMM method for pairs of 3D mono-modal images based on Generative Adversarial Networks. The method is inspired by the recent literature on deformable image registration with adversarial learning. We combine the best-performing generative, discriminative, and adversarial ingredients from the state of the art within the LDDMM paradigm. We have successfully implemented two models, with the stationary and the EPDiff-constrained non-stationary parameterizations of diffeomorphisms. Our unsupervised and data-hungry approach has shown competitive performance with respect to a benchmark supervised and rich-data approach. In addition, our method has achieved results similar to those of model-based methods, with a computation time of under one second.
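To make the overall scheme concrete, the following is a minimal sketch, not the authors' implementation, of one adversarial training step for the stationary parameterization: a generator G predicts a stationary velocity field from a (moving, fixed) pair, the field is exponentiated into a diffeomorphic transform by scaling and squaring, and a discriminator D scores warped pairs against well-aligned pairs. All network architectures, the choice of (fixed, fixed) as the "real" pair, and the loss weight lam are illustrative assumptions; the EPDiff-constrained non-stationary case is not covered.

```python
# Hedged sketch (assumed PyTorch components, not the paper's code) of an
# adversarial step for diffeomorphic registration with a stationary velocity field.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(image, disp):
    """Warp `image` [B, C, D, H, W] with displacement `disp` [B, 3, D, H, W]
    given in voxel units, channel order (x, y, z)."""
    B, _, D, H, W = disp.shape
    zs, ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, D, device=disp.device),
        torch.linspace(-1, 1, H, device=disp.device),
        torch.linspace(-1, 1, W, device=disp.device),
        indexing="ij")
    identity = torch.stack((xs, ys, zs), dim=-1)               # [D, H, W, 3]
    # convert voxel displacements to grid_sample's normalized [-1, 1] coordinates
    scale = torch.tensor([2.0 / max(W - 1, 1), 2.0 / max(H - 1, 1), 2.0 / max(D - 1, 1)],
                         device=disp.device)
    flow = disp.permute(0, 2, 3, 4, 1) * scale                 # [B, D, H, W, 3]
    return F.grid_sample(image, identity.unsqueeze(0) + flow, align_corners=True)


def integrate_svf(v, steps=6):
    """Scaling and squaring: exponentiate a stationary velocity field `v`
    into a displacement field by repeated self-composition."""
    disp = v / (2 ** steps)
    for _ in range(steps):
        disp = disp + warp(disp, disp)
    return disp


def adversarial_step(G, D, opt_G, opt_D, moving, fixed, lam=0.1):
    """One GAN training step: G maps (moving, fixed) to a velocity field;
    D distinguishes (warped, fixed) pairs from well-aligned reference pairs."""
    v = G(torch.cat([moving, fixed], dim=1))                   # [B, 3, D, H, W]
    warped = warp(moving, integrate_svf(v))

    # discriminator update: (fixed, fixed) used here as a stand-in "aligned" pair
    real = D(torch.cat([fixed, fixed], dim=1))
    fake = D(torch.cat([warped.detach(), fixed], dim=1))
    loss_D = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
              F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # generator update: fool D, plus a simple smoothness penalty on v
    fake = D(torch.cat([warped, fixed], dim=1))
    adv = F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake))
    smooth = sum((v.diff(dim=d) ** 2).mean() for d in (2, 3, 4))
    loss_G = adv + lam * smooth
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

In this reading, the sub-second registration time claimed above corresponds to a single forward pass of G followed by the scaling-and-squaring integration, with no iterative optimization at test time.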