Self-training based unsupervised domain adaptation (UDA) has shown great potential to address the problem of domain shift when a deep learning model trained on a source domain is applied to unlabeled target domains. However, while self-training UDA has demonstrated its effectiveness on discriminative tasks, such as classification and segmentation, via reliable pseudo-label selection based on the softmax discrete histogram, self-training UDA for generative tasks, such as image synthesis, has not been fully investigated. In this work, we propose a novel generative self-training (GST) UDA framework with continuous value prediction and a regression objective for cross-domain image synthesis. Specifically, we propose to filter the pseudo-labels with an uncertainty mask and to quantify the predictive confidence of generated images with practical variational Bayes learning. Fast test-time adaptation is achieved by a round-based alternating optimization scheme. We validated our framework on the tagged-to-cine magnetic resonance imaging (MRI) synthesis problem, where the source- and target-domain datasets were acquired from different scanners or centers. Extensive validations were carried out to compare our framework against popular adversarial training UDA methods. Results show that our GST, given tagged MRI of test subjects in new target domains, improved synthesis quality by a large margin compared with the adversarial training UDA methods.
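To make the uncertainty-masked pseudo-labeling and the round-based scheme concrete, here is a minimal PyTorch sketch. It assumes a dropout-equipped generator so that Monte Carlo dropout serves as a practical stand-in for the variational Bayes confidence estimate described above; the function names, the variance threshold `tau`, and the round/step counts are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_pseudo_label(generator, x, n_samples=10, tau=0.05):
    """Draw several stochastic syntheses with dropout active, then use the
    per-pixel mean as a continuous-valued pseudo-label and the per-pixel
    variance to mask out low-confidence regions."""
    generator.train()  # keep dropout layers stochastic at test time
    samples = torch.stack([generator(x) for _ in range(n_samples)])
    pseudo_label = samples.mean(dim=0)      # continuous pseudo-label
    uncertainty = samples.var(dim=0)        # predictive variance per pixel
    mask = (uncertainty < tau).float()      # keep confident pixels only
    return pseudo_label, mask

def masked_regression_loss(generator, x, pseudo_label, mask):
    """Regression objective on target-domain inputs, restricted to the
    confident (low-uncertainty) regions of the pseudo-label."""
    pred = generator(x)
    per_pixel = F.l1_loss(pred, pseudo_label, reduction="none")
    return (mask * per_pixel).sum() / mask.sum().clamp(min=1)

def gst_adapt(generator, target_loader, optimizer, rounds=5, steps=100):
    """Round-based alternating optimization: each round freezes a fresh set
    of pseudo-labels, then fine-tunes the generator against them."""
    for _ in range(rounds):
        cached = [(x, *mc_pseudo_label(generator, x)) for x in target_loader]
        generator.train()
        for _ in range(steps):
            for x, y_hat, m in cached:
                optimizer.zero_grad()
                masked_regression_loss(generator, x, y_hat, m).backward()
                optimizer.step()
```

The outer loop of `gst_adapt` mirrors the round-based alternation: pseudo-labels are held fixed within a round so the regression target stays stable, and are re-estimated only once the generator has been updated.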