Deep learning (DL)-based methods for magnetic resonance (MR) image reconstruction have demonstrated superior performance in recent years. However, these methods either leverage only under-sampled data or require a paired fully-sampled auxiliary modality to perform multi-modal reconstruction. Consequently, existing approaches neglect attention mechanisms that could transfer textures from fully-sampled reference data to under-sampled data within a single modality, which limits them in challenging cases. In this paper, we propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction, in which the under-sampled data and the reference data are formulated as queries and keys in a transformer. The TTM facilitates joint feature learning across under-sampled and reference data, so that feature correspondences can be discovered by attention and accurate texture features can be leveraged during reconstruction. Notably, the proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance. Extensive experiments show that TTM significantly improves the performance of several popular DL-based MRI reconstruction methods.
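The query/key formulation described above can be illustrated with a minimal cross-attention sketch. This is not the paper's TTM implementation; the function name, shapes, and NumPy realization are assumptions for illustration only. Features of the under-sampled data serve as queries, while the reference data supplies the keys and values, so each under-sampled position attends over reference positions and aggregates their texture features.

```python
import numpy as np

def cross_attention(under_feat, ref_feat):
    """Illustrative scaled dot-product cross-attention (not the paper's TTM).

    under_feat: (n_q, d) features of under-sampled data -> queries Q
    ref_feat:   (n_k, d) features of reference data     -> keys K and values V
    Returns:    (n_q, d) texture features transferred from the reference.
    """
    Q, K, V = under_feat, ref_feat, ref_feat
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) similarity map
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over reference positions
    return attn @ V                                # weighted texture transfer

# Toy example with hypothetical shapes.
rng = np.random.default_rng(0)
under = rng.standard_normal((4, 8))  # 4 query positions, 8-dim features
ref = rng.standard_normal((6, 8))    # 6 reference positions
out = cross_attention(under, ref)
print(out.shape)  # (4, 8)
```

Each row of the attention map sums to one, so the output at every under-sampled position is a convex combination of reference features, which is the sense in which textures are "transferred" from reference to under-sampled data.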