Skin lesion detection in dermoscopic images is essential for accurate and early computer-aided diagnosis of skin cancer. Current skin lesion segmentation approaches perform poorly in challenging circumstances such as indistinct lesion boundaries, low contrast between the lesion and the surrounding area, or heterogeneous backgrounds, which lead to over- or under-segmentation of the skin lesion. To accurately distinguish the lesion from the neighboring regions, we propose a dilated scale-wise feature fusion network based on convolution factorization. Our network is designed to simultaneously extract features at different scales, which are systematically fused for better detection. The proposed model achieves satisfactory accuracy and efficiency. We conduct extensive lesion segmentation experiments and compare against state-of-the-art models; our proposed model consistently achieves state-of-the-art results.
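To illustrate the general idea of scale-wise feature fusion with factorized dilated convolutions, the following is a minimal PyTorch sketch. It is not the authors' exact architecture: the module names, channel counts, dilation rates, and fusion strategy (concatenation followed by a 1x1 convolution) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FactorizedDilatedBranch(nn.Module):
    """One branch: a 3x3 convolution factorized into 3x1 and 1x3
    convolutions, applied at a given dilation rate (illustrative)."""

    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(3, 1),
                      padding=(dilation, 0), dilation=(dilation, 1)),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=(1, 3),
                      padding=(0, dilation), dilation=(1, dilation)),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)


class ScaleWiseFusionBlock(nn.Module):
    """Extracts features at several scales in parallel (via different
    dilation rates) and fuses them with a 1x1 convolution."""

    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            FactorizedDilatedBranch(channels, d) for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels,
                              kernel_size=1)

    def forward(self, x):
        multi_scale = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(multi_scale, dim=1))


if __name__ == "__main__":
    # Example: fuse multi-scale features from a 64-channel feature map.
    block = ScaleWiseFusionBlock(channels=64)
    features = torch.randn(1, 64, 128, 128)
    print(block(features).shape)  # torch.Size([1, 64, 128, 128])
```

Factorizing each 3x3 kernel into a 3x1 and a 1x3 pair reduces the per-branch parameter count, while the parallel dilation rates enlarge the receptive field so that both fine lesion boundaries and wider context are captured before fusion.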