Novel multimodal imaging methods can generate extensive, very-high-resolution datasets for preclinical research. However, the scarcity of annotations prevents the broad use of deep learning to analyze such data. Existing generative models have so far failed to mitigate this problem because of frequent labeling errors. In this paper, we introduce a novel generative method that leverages real anatomical information to generate realistic image-label pairs of tumours. We construct a dual-pathway generator, one pathway for the anatomical image and one for the label, trained in a cycle-consistent setup and constrained by an independent, pretrained segmentor. The generated images yield significant quantitative improvements over existing methods. To validate the quality of the synthesis, we train segmentation networks on a dataset augmented with the synthetic data, substantially improving segmentation performance over the baseline.
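The abstract describes the training setup only at a high level. Below is a minimal PyTorch-style sketch of how such an objective could be wired together; it is not the authors' implementation. The network bodies, loss weights, tensor shapes, and the stand-in segmentor are illustrative assumptions, and the adversarial (GAN) terms of the cycle-consistent setup are omitted for brevity.

```python
# Minimal sketch (assumptions throughout, not the paper's code) of a
# dual-pathway generator producing an (image, label) pair, trained with a
# cycle-consistency loss and constrained by a frozen, pretrained segmentor.
# Adversarial discriminator losses are omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPathwayGenerator(nn.Module):
    """Maps a source image to a synthetic (image, label) pair via two heads."""
    def __init__(self, in_ch=1, n_classes=2, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.image_head = nn.Conv2d(width, in_ch, 3, padding=1)      # anatomical image pathway
        self.label_head = nn.Conv2d(width, n_classes, 3, padding=1)  # tumour label pathway

    def forward(self, x):
        h = self.encoder(x)
        return torch.tanh(self.image_head(h)), self.label_head(h)

# Forward (A -> B) and backward (B -> A) generators for the cycle-consistent setup.
G_ab = DualPathwayGenerator()
G_ba = DualPathwayGenerator()

# Independent, pretrained segmentor; frozen so it only constrains the generators.
# A single conv layer stands in here for a real pretrained network.
segmentor = nn.Sequential(nn.Conv2d(1, 2, 3, padding=1))
for p in segmentor.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)

def training_step(real_a, label_a):
    """One step: image cycle loss plus segmentor constraint on the synthetic pair."""
    fake_b, fake_b_label = G_ab(real_a)
    rec_a, _ = G_ba(fake_b)

    cycle_loss = F.l1_loss(rec_a, real_a)  # A -> B -> A reconstruction
    # The frozen segmentor, applied to the synthetic image, must recover the
    # known input label, tying the image to real anatomical structure.
    seg_loss = F.cross_entropy(segmentor(fake_b), label_a)
    # The generated label pathway should also agree with the input label.
    label_loss = F.cross_entropy(fake_b_label, label_a)

    loss = 10.0 * cycle_loss + seg_loss + label_loss  # illustrative weights
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors, just to show the expected shapes.
imgs = torch.randn(4, 1, 64, 64)
labels = torch.randint(0, 2, (4, 64, 64))
print(training_step(imgs, labels))
```

Freezing the segmentor is the key design choice in this reading: its parameters receive no gradients, so the loss can only be lowered by making the synthetic image-label pair anatomically consistent, not by bending the segmentor toward the generator's artifacts.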