As information sources are usually imperfect, it is necessary to take their reliability into account in multi-source information fusion tasks. In this paper, we propose a new deep framework that merges multi-MR image segmentation results using the formalism of Dempster-Shafer theory, while accounting for the reliability of the different modalities relative to the different classes. The framework is composed of an encoder-decoder feature extraction module, an evidential segmentation module that computes a belief function at each voxel for each modality, and a multi-modality evidence fusion module, which assigns a vector of discount rates to each modality's evidence and combines the discounted evidence using Dempster's rule. The whole framework is trained by minimizing a new loss function based on a discounted Dice index to increase segmentation accuracy and reliability. The method was evaluated on the BraTS 2021 dataset of 1251 patients with brain tumors. Quantitative and qualitative results show that our method outperforms the state of the art and implements an effective new idea for fusing multi-source information within deep neural networks.
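The core fusion step described above, discounting each modality's mass function by a reliability-dependent rate and then combining with Dempster's rule, can be sketched as follows. This is a minimal illustration, not the paper's implementation: in the proposed framework the discount rates form a learned vector (one rate per class and modality), whereas here a single fixed rate per modality is used, and the focal sets, class names, and rates are invented for the example.

```python
from itertools import product

def discount(m, alpha, omega):
    """Shafer discounting: scale every mass by (1 - alpha) and transfer
    the remaining mass alpha to the whole frame omega (total ignorance).
    m is a dict mapping frozenset focal elements to masses summing to 1."""
    out = {a: (1.0 - alpha) * v for a, v in m.items()}
    out[omega] = out.get(omega, 0.0) + alpha
    return out

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination of two mass functions,
    followed by normalization of the mass assigned to the empty set."""
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb  # mass on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Toy example at one voxel: two modalities with hypothetical masses;
# the second modality is judged less reliable and discounted more.
omega = frozenset({"tumor", "background"})
m_mod1 = {frozenset({"tumor"}): 0.9, omega: 0.1}
m_mod2 = {frozenset({"background"}): 0.6, omega: 0.4}
fused = dempster(discount(m_mod1, 0.1, omega),
                 discount(m_mod2, 0.5, omega))
```

Discounting makes an unreliable source's evidence less committal before combination, so a strongly discounted modality can no longer veto the others, which is what lets class-specific reliability shape the fused segmentation.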