Over the past decade, driven by deep learning and in particular the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively exploit the relational information between tissues or organs in medical images remains a challenging and under-explored problem. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models the implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multimodal Brain Tumor Segmentation 2017 (BraTS2017) and 2018 (BraTS2018) datasets. Second, we propose a novel hierarchical homography estimation network that achieves accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. In experiments on the UCL Fetoscopy Placenta dataset, our hierarchical homography estimation network outperforms other state-of-the-art mosaicing methods while producing robust and meaningful mosaics on unseen frames.