Multi-modal learning focuses on training models that combine multiple input data modalities, typically giving each modality equal weight during prediction. However, this equal combination can harm prediction accuracy because different modalities usually carry different levels of uncertainty. Several approaches have studied using such uncertainty to combine modalities, but with limited success: they are either designed for specific classification or segmentation problems and cannot be easily transferred to other tasks, or they suffer from numerical instabilities. In this paper, we propose a new Uncertainty-aware Multi-modal Learner that estimates uncertainty by measuring feature density via Cross-modal Random Network Prediction (CRNP). CRNP requires little adaptation to transfer between different prediction tasks, while having a stable training process. From a technical point of view, CRNP is the first approach to explore random network prediction both to estimate uncertainty and to combine multi-modal data. Experiments on two 3D multi-modal medical image segmentation tasks and three 2D multi-modal computer vision classification tasks show the effectiveness, adaptability and robustness of CRNP. We also provide an extensive discussion of different fusion functions and visualizations to validate the proposed model.
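The core mechanism the abstract names — estimating uncertainty from the prediction error of a randomly initialized, frozen network — can be sketched as follows. This is a simplified, hypothetical illustration in the spirit of random network prediction, not the paper's actual CRNP architecture; all network sizes, variable names, and hyperparameters here are assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's CRNP implementation): a frozen,
# randomly initialized target network and a trainable predictor. The
# predictor learns to match the target on in-distribution features; its
# prediction error on new inputs then acts as an uncertainty score,
# since the error stays low where feature density is high and grows on
# unfamiliar (low-density) features.
rng = np.random.default_rng(0)
feat_dim, out_dim, hidden = 64, 32, 128

# Frozen random target: a single random linear map, never updated.
W_t = rng.standard_normal((feat_dim, out_dim)) / np.sqrt(feat_dim)

# Trainable predictor: one hidden ReLU layer.
W1 = rng.standard_normal((feat_dim, hidden)) / np.sqrt(feat_dim)
W2 = rng.standard_normal((hidden, out_dim)) / np.sqrt(hidden)

def forward(X):
    H = np.maximum(X @ W1, 0.0)  # ReLU hidden activations
    return H, H @ W2

def train_step(X, lr=0.05):
    """One full-batch gradient step on the squared prediction error."""
    global W1, W2
    H, Y = forward(X)
    diff = (Y - X @ W_t) / len(X)  # scaled error gradient
    gW2 = H.T @ diff
    gH = diff @ W2.T
    gH[H <= 0.0] = 0.0             # ReLU derivative
    W1 -= lr * (X.T @ gH)
    W2 -= lr * gW2

def uncertainty(X):
    """Mean squared prediction error per sample = uncertainty score."""
    _, Y = forward(X)
    return ((Y - X @ W_t) ** 2).mean(axis=1)

# Fit the predictor on "in-distribution" features, then compare scores
# on familiar vs. clearly out-of-distribution features.
train_X = rng.standard_normal((512, feat_dim))
for _ in range(1000):
    train_step(train_X)

seen = uncertainty(train_X[:64]).mean()
novel = uncertainty(rng.standard_normal((64, feat_dim)) * 5 + 10).mean()
# `novel` should be much larger than `seen`: the shifted features lie
# in a low-density region the predictor never fitted.
```

In a multi-modal setting such as CRNP's, a score of this kind would be computed per modality (and cross-modally) and used to weight the fusion of modality features, rather than combining them with equal weight.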