We propose to adapt segmentation networks with a constrained formulation that embeds domain-invariant prior knowledge about the segmentation regions. Such knowledge may take the form of simple anatomical information, e.g., structure size or shape, estimated from source samples or known a priori. Our method imposes domain-invariant inequality constraints on the network outputs of unlabeled target samples. It implicitly matches prediction statistics between the target and source domains, within the permitted uncertainty of the prior knowledge. We address our constrained problem with a differentiable penalty, fully suited to standard stochastic gradient descent, removing the need for computationally expensive Lagrangian optimization with dual projections. Unlike current two-step adversarial training, our formulation is based on a single loss in a single network, which simplifies adaptation by avoiding extra adversarial steps while improving the convergence and quality of training. A comparison of our approach with state-of-the-art adversarial methods reveals substantially better performance on the challenging task of adapting spine segmentation across different MRI modalities. Our results also show robustness to imprecision of the size priors, approaching the accuracy of a fully supervised model trained directly in the target domain. Our method can be readily used for various constraints and segmentation problems.
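As a minimal illustration of the kind of differentiable penalty described above (not the authors' exact implementation; the function name, the quadratic form, and the use of a size prior are assumptions for this sketch), an inequality constraint on the predicted region size, lower ≤ size ≤ upper, can be enforced with a penalty that is zero inside the permitted interval and grows quadratically outside it:

```python
import numpy as np

def size_penalty(probs, lower, upper):
    """Differentiable penalty for the inequality constraint lower <= size <= upper.

    probs: soft per-pixel foreground probabilities from the network, so the
    predicted region size is their sum. The penalty is zero whenever the
    size lies inside the permitted interval, and quadratic outside it,
    which keeps the loss amenable to standard stochastic gradient descent.
    """
    size = probs.sum()
    return max(0.0, lower - size) ** 2 + max(0.0, size - upper) ** 2

# A uniform 4x4 map of probability 0.5 has predicted size 8.0:
p = np.full((4, 4), 0.5)
print(size_penalty(p, 5.0, 10.0))   # inside the interval: 0.0
print(size_penalty(p, 10.0, 12.0))  # below the lower bound: (10 - 8)^2 = 4.0
```

A penalty of this form is added to the segmentation loss on unlabeled target samples, so the uncertainty interval [lower, upper] directly encodes the permitted imprecision of the size prior.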