Convolutional networks (ConvNets) have achieved promising accuracy for various anatomical segmentation tasks. Despite this success, these methods can be sensitive to variations in data appearance. Given the large variability of scans caused by artifacts, pathologies, and scanning setups, robust ConvNets are vital for clinical applications, yet they have not been fully explored. In this paper, we propose to mitigate this challenge by making ConvNets aware of the underlying anatomical invariances among imaging scans. Specifically, we introduce a fully convolutional Constraint Adoption Module (CAM) that incorporates probabilistic atlas priors as explicit constraints on predictions over a locally connected Conditional Random Field (CRF), which effectively reinforces the anatomical consistency of the labeling outputs. We design the CAM to be flexible, so that it can boost various ConvNets, and compact, so that its fusion parameters can be co-optimized with the ConvNet for optimal performance. We show that the advantage of this atlas-prior fusion is two-fold on two brain parcellation tasks. First, our models achieve state-of-the-art accuracy among ConvNet-based methods on both datasets by significantly reducing structural abnormalities in the predictions. Second, we can largely boost the robustness of existing ConvNets, as demonstrated by (i) testing on scans with synthetic pathologies, and (ii) training and evaluating on scans from different scanning setups across datasets. Our method can be easily adopted by existing ConvNets: fine-tuning with the CAM plugged in yields gains in both accuracy and robustness.
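To make the fusion idea concrete, the following is a minimal sketch, not the paper's implementation, of how a CAM-style module might combine ConvNet logits with a registered probabilistic atlas prior and refine the result with a few mean-field iterations over a locally connected CRF. All names and hyperparameters (ConstraintAdoptionModule, fusion_weight, the depthwise approximation of the pairwise term) are hypothetical assumptions for illustration.

```python
# Hypothetical sketch of atlas-prior fusion with a locally connected CRF;
# not the authors' code. PyTorch is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstraintAdoptionModule(nn.Module):
    def __init__(self, num_classes: int, kernel_size: int = 3, iterations: int = 3):
        super().__init__()
        self.iterations = iterations
        # Learnable fusion weight balancing ConvNet evidence against the atlas
        # prior; co-optimized with the backbone during fine-tuning.
        self.fusion_weight = nn.Parameter(torch.tensor(0.5))
        # Locally connected pairwise message passing, approximated here by a
        # depthwise convolution over each class map (one kernel per class).
        self.pairwise = nn.Conv2d(num_classes, num_classes, kernel_size,
                                  padding=kernel_size // 2,
                                  groups=num_classes, bias=False)
        # Learned label-compatibility transform mixing messages across classes.
        self.compat = nn.Conv2d(num_classes, num_classes, kernel_size=1, bias=False)

    def forward(self, logits: torch.Tensor, atlas_prior: torch.Tensor) -> torch.Tensor:
        """logits: (B, C, H, W) ConvNet scores; atlas_prior: (B, C, H, W) probabilities."""
        # Unary term: blend ConvNet evidence with the log of the atlas prior,
        # so the prior acts as an explicit constraint on the prediction.
        unary = logits + self.fusion_weight * torch.log(atlas_prior.clamp_min(1e-6))
        q = unary
        for _ in range(self.iterations):                  # mean-field updates
            probs = F.softmax(q, dim=1)                   # current marginals
            message = self.compat(self.pairwise(probs))   # local pairwise message
            q = unary - message                           # refined scores
        return q

# Usage sketch: refined = cam(backbone(scan), registered_atlas); labels = refined.argmax(1)
```

Under this reading, plugging the module into an existing ConvNet only requires appending it after the final score map and fine-tuning end to end, which matches the adoption recipe described above.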