Wound image segmentation is a critical component of clinical wound diagnosis and timely treatment. Recently, deep learning has become the mainstream methodology for wound image segmentation. However, pre-processing of the wound image, such as illumination correction, is typically required before the training phase because it can greatly improve performance. Since the correction procedure and the training of deep models are independent of each other, a fixed illumination correction may not suit all images, which leads to sub-optimal segmentation performance. To address the aforementioned issues, this paper proposes an end-to-end dual-view segmentation approach that incorporates a learnable illumination correction module into deep segmentation models. The parameters of the module are learned and updated automatically during training, while dual-view fusion fully exploits features from both the raw images and the enhanced ones. To demonstrate the effectiveness and robustness of the proposed framework, extensive experiments were conducted on benchmark datasets. The encouraging results suggest that our framework significantly improves segmentation performance compared to state-of-the-art methods.
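The two ideas in the abstract, a learnable illumination correction producing an enhanced view and a fusion of the raw and enhanced views, can be illustrated with a minimal NumPy sketch. This is only an assumption-laden toy: the correction here is a simple gamma curve with a learnable exponent, and the fusion is a learnable weighted average; the actual module and fusion strategy in the paper may be considerably more complex.

```python
import numpy as np

class IlluminationCorrection:
    """Toy illumination correction: gamma curve with a learnable exponent.

    In an end-to-end framework, `gamma` would be a trainable parameter
    updated by backpropagation along with the segmentation network.
    """
    def __init__(self, gamma: float = 0.5):
        self.gamma = gamma  # gamma < 1 brightens under-exposed regions

    def forward(self, img: np.ndarray) -> np.ndarray:
        # img: intensities in [0, 1]; enhanced view = img ** gamma
        return np.clip(img, 1e-6, 1.0) ** self.gamma

def dual_view_fusion(raw_feat: np.ndarray,
                     enhanced_feat: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """Fuse features from the raw and enhanced views.

    `alpha` stands in for a learnable fusion weight; real dual-view
    fusion would typically combine intermediate feature maps instead.
    """
    return alpha * raw_feat + (1.0 - alpha) * enhanced_feat

# Usage: a dark 2x2 "image" is brightened, then both views are fused.
img = np.array([[0.04, 0.25],
                [0.50, 1.00]])
module = IlluminationCorrection(gamma=0.5)
enhanced = module.forward(img)          # dark pixel 0.04 -> 0.2
fused = dual_view_fusion(img, enhanced)  # keeps information from both views
```

Because both the correction exponent and the fusion weight are differentiable with respect to the output, both can be optimized jointly with the segmentation loss, which is the key difference from a fixed pre-processing step.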