Deep learning (DL) is gaining popularity as a parameter estimation method for quantitative MRI. A range of competing implementations have been proposed, relying on either supervised or self-supervised learning. Self-supervised approaches, sometimes referred to as unsupervised, have been loosely based on auto-encoders, whereas supervised methods have, to date, been trained on ground-truth labels. These two learning paradigms have been shown to have distinct strengths. Notably, self-supervised approaches have offered lower-bias parameter estimates than their supervised alternatives. This result is counterintuitive: incorporating prior knowledge through supervised labels should, in theory, lead to improved accuracy. In this work, we show that this apparent limitation of supervised approaches stems from the naive choice of ground-truth training labels. By training on labels which are deliberately not ground truth, we show that the low-bias parameter estimation previously associated with self-supervised methods can be replicated, and improved on, within a supervised learning framework. This approach sets the stage for a single, unifying, deep learning parameter estimation framework, based on supervised learning, in which trade-offs between bias and variance are made by careful adjustment of the training labels.
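To make the label-choice idea concrete, the following is a minimal sketch (not the paper's actual pipeline) using a hypothetical mono-exponential T2-mapping example. The "deliberately not ground truth" labels are produced here by a conventional log-linear least-squares fit applied to the noisy signals themselves; a supervised regressor trained on such estimator-derived labels would inherit that estimator's bias/variance behaviour rather than the bias of naive ground-truth training. All model parameters, echo times, and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mono-exponential decay model for T2 mapping:
#   S(TE) = S0 * exp(-TE / T2), sampled at several echo times (ms).
TE = np.array([10., 30., 50., 70., 90.])
n = 1000
S0 = rng.uniform(0.8, 1.2, n)
T2_true = rng.uniform(40., 120., n)          # ground-truth labels (ms)

signals = S0[:, None] * np.exp(-TE[None, :] / T2_true[:, None])
noisy = signals + rng.normal(0.0, 0.02, signals.shape)

# "Deliberately not ground truth" labels: conventional log-linear
# least-squares estimates computed from the noisy measurements.
logS = np.log(np.clip(noisy, 1e-6, None))
A = np.vstack([np.ones_like(TE), -TE]).T     # columns: [log S0, 1/T2]
coef, *_ = np.linalg.lstsq(A, logS.T, rcond=None)
T2_fit = 1.0 / coef[1]                       # estimator-derived labels

# A supervised network regressing noisy -> T2_fit (instead of T2_true)
# is trained on labels that carry the conventional estimator's
# bias/variance trade-off, which can then be tuned via the label choice.
bias = float(np.mean(T2_fit - T2_true))
```

The key point of the sketch is only that the training targets (`T2_fit`) are themselves estimates with controllable statistical properties, not the true parameters.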