Explainability of deep neural networks is one of the most challenging and interesting problems in the field. In this study, we investigate the topic with a focus on the interpretability of deep learning-based registration methods. In particular, with an appropriate model architecture and a simple linear projection, we decompose the encoding space into a new basis, and we empirically show that this basis captures a variety of decomposed, anatomically aware geometric transformations. We perform experiments on two different datasets, focusing on lung and hippocampus MRI. We show that such an approach can decompose the highly convoluted latent spaces of registration pipelines into an orthogonal space with several interesting properties. We hope that this work contributes to a better understanding of deep learning-based registration methods.
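To make the core idea concrete, the following is a minimal sketch of how a simple linear projection can yield an orthogonal basis for an encoder's latent space, here via PCA computed with an SVD. All names, shapes, and the placeholder data are illustrative assumptions, not the paper's actual pipeline or API.

```python
import numpy as np

# Hypothetical sketch: `latent_codes` stands in for encodings z = E(moving, fixed)
# collected over a registration dataset; the data below is a random placeholder.
rng = np.random.default_rng(0)
latent_codes = rng.normal(size=(500, 64))   # (n_samples, latent_dim), assumed shape

# Center the codes and take the SVD; the rows of Vt form an orthonormal basis
# of the encoding space, ordered by explained variance.
mean = latent_codes.mean(axis=0)
U, S, Vt = np.linalg.svd(latent_codes - mean, full_matrices=False)

# Project a code onto the first k basis directions and reconstruct it.
# Traversing a single direction would correspond to varying one (ideally
# anatomically meaningful) factor of the learned transformation.
k = 8
z = latent_codes[0]
coords = (z - mean) @ Vt[:k].T          # coordinates in the orthogonal basis
z_approx = coords @ Vt[:k] + mean       # rank-k reconstruction of the code
print(np.linalg.norm(z - z_approx))     # residual outside the kept subspace
```

In this kind of setup, decoding `z_approx` (or a code shifted along one row of `Vt`) back through the registration decoder is what would let one inspect the geometric transformation each basis direction captures.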