Despite the remarkable results of deep learning in breast cancer image classification, challenges such as data imbalance and limited interpretability remain and require cross-domain knowledge and collaboration with medical experts. In this study, we propose a breast cancer classification method based on a dual-activated lightweight attention ResNet50 module that effectively addresses these challenges. Our model fuses a pre-trained ResNet50 backbone with a lightweight attention mechanism, embedding an attention module in layer 4 of ResNet50 and adding two fully connected layers to perform classification. In the fully connected network, we employ both Leaky ReLU and ReLU activation functions. On medical histopathology datasets, our model outperforms conventional models, vision transformers, and large models in precision, accuracy, recall, F1 score, and G-mean. In particular, it demonstrates strong robustness and broad applicability on the imbalanced breast cancer dataset. Tested on 40X, 100X, 200X, and 400X images, the model achieves accuracies of 98.5%, 98.7%, 97.9%, and 94.3%, respectively. Through an in-depth analysis of loss and accuracy curves, together with Grad-CAM analysis, we comprehensively assess the model's performance and gain insight into its training process. In the later stages of training, the validation loss and accuracy change only minimally, indicating that the model avoids overfitting and generalizes well. Overall, this study provides an effective solution for breast cancer image classification with practical applicability.
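The abstract describes the architecture only at a high level, so the following is a minimal PyTorch sketch of the overall idea, not the authors' exact implementation: a pre-trained ResNet50 backbone with a lightweight attention module applied to the layer-4 output and a head with two added fully connected layers. The attention design (a simple squeeze-and-excitation-style channel attention), the layer widths (512 hidden units), and the exact placement of the Leaky ReLU and ReLU activations are assumptions made for illustration.

```python
# Hedged sketch of a dual-activated lightweight-attention ResNet50 classifier.
# Assumptions: SE-style channel attention, 512-unit hidden layer, Leaky ReLU
# between the two added fully connected layers, ReLU inside the attention block.
import torch
import torch.nn as nn
from torchvision import models


class ChannelAttention(nn.Module):
    """Lightweight channel attention (SE-style); a stand-in for the paper's module."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight channels of the feature map


class AttentionResNet50(nn.Module):
    """Pre-trained ResNet50 with attention on the layer-4 output and a two-layer FC head."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # Keep conv stem and layer1-layer4; drop the original avgpool and fc.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = ChannelAttention(2048)
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        # Two added fully connected layers with Leaky ReLU activation in between.
        self.classifier = nn.Sequential(
            nn.Linear(2048, 512),
            nn.LeakyReLU(0.01, inplace=True),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        x = self.features(x)            # (B, 2048, H/32, W/32)
        x = self.attention(x)           # attention applied to layer-4 features
        x = self.avgpool(x).flatten(1)  # (B, 2048)
        return self.classifier(x)


if __name__ == "__main__":
    model = AttentionResNet50(num_classes=2)
    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 2])
```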