In clinical applications, neural networks must focus on and highlight the most important parts of an input image. The Soft-Attention mechanism enables a neural network to achieve this goal. This paper investigates the effectiveness of Soft-Attention in deep neural architectures. The central aim of Soft-Attention is to boost the value of important features and suppress noise-inducing features. We compare the performance of the VGG, ResNet, InceptionResNetV2, and DenseNet architectures with and without the Soft-Attention mechanism on skin-lesion classification. When coupled with Soft-Attention, the original network outperforms the baseline [16] by 4.7%, achieving a precision of 93.7% on the HAM10000 dataset [25]. Additionally, Soft-Attention coupling improves the sensitivity score by 3.8% compared to the baseline [31], reaching 91.6% on the ISIC-2017 dataset [2]. The code is publicly available on GitHub.
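To make the stated aim concrete (boosting salient features while suppressing noisy ones), the following is a minimal NumPy sketch of a generic soft-attention operation, not the paper's exact layer: a learned projection scores each spatial location, a softmax turns the scores into an attention map, and the features are rescaled by that map through a residual path. The projection `weights` and the scaling factor `gamma` are illustrative assumptions.

```python
import numpy as np

def soft_attention(features, weights, gamma=1.0):
    """Illustrative soft-attention sketch (assumed form, not the paper's exact layer).

    features: (C, H, W) feature map
    weights:  (C,) projection producing one attention score per location
    gamma:    assumed scalar controlling how strongly attention rescales features
    Returns the attention-scaled features and the (H, W) attention map.
    """
    C, H, W = features.shape
    # 1x1-convolution-style projection: one score per spatial location
    scores = np.tensordot(weights, features, axes=([0], [0]))  # (H, W)
    # softmax over all spatial locations: attention map is non-negative, sums to 1
    flat = scores.ravel()
    flat = np.exp(flat - flat.max())
    att = (flat / flat.sum()).reshape(H, W)
    # residual rescaling: salient locations are boosted, others stay near identity,
    # so no feature is zeroed out entirely
    out = features * (1.0 + gamma * H * W * att)
    return out, att
```

In a trained network the projection weights would be learned jointly with the backbone; here they are fixed only to show the mechanics of the score-softmax-rescale pipeline.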