Semantic medical image segmentation using deep learning has recently achieved high accuracy, making it appealing for clinical applications such as radiation therapy. However, the scarcity of high-quality semantically labelled data remains a challenge, leaving models brittle to even small shifts in the input data. Most existing works require extra data for semi-supervised learning and offer no interpretability of the boundaries of the training data distribution during training, which is essential for model deployment in clinical practice. We propose a fully supervised generative framework that achieves generalisable segmentation from only limited labelled data by simultaneously constructing an explorable manifold during training. The proposed approach pairs medical image style diversification with a segmentation-task-driven discriminator in end-to-end adversarial training. The discriminator generalises to small domain shifts as far as the training data permit, while the generator automatically diversifies the training samples using a manifold of input features learnt during segmentation. All the while, the discriminator guides the manifold learning by separately supervising the semantic content and the fine-grained features during image diversification. After training, the manifold learnt by the generator can be visualised to interpret the model's limits. Experiments on a fully semantically labelled, publicly available pelvis dataset demonstrate that our method generalises to shifts better than other state-of-the-art methods while being more explainable through its explorable manifold.