Convolutional Neural Networks (CNNs) have played a significant role in various medical imaging tasks such as classification and segmentation, providing state-of-the-art performance compared to classical image processing algorithms. However, the major downsides of these methods are their high computational complexity, their reliance on high-performance hardware such as GPUs, and the inherent black-box nature of the models. In this paper, we propose quantised stand-alone self-attention-based models as an alternative to traditional CNNs. In the proposed class of networks, convolutional layers are replaced with stand-alone self-attention layers, and the network parameters are quantised after training. We experimentally validate the performance of our method on classification and segmentation tasks. We observe a 50--80\% reduction in model size, 60--80\% fewer parameters, 40--85\% fewer FLOPs, and 65--80\% higher energy efficiency during inference on CPUs. The code will be available at \href{https://github.com/Rakshith2597/Quantised-Self-Attentive-Deep-Neural-Network}{https://github.com/Rakshith2597/Quantised-Self-Attentive-Deep-Neural-Network}.
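As a rough illustration of the two ingredients above (convolution replaced by local stand-alone self-attention, followed by post-training quantisation), the PyTorch sketch below shows a minimal single-head local self-attention layer and dynamic int8 quantisation of its linear projections. It is a toy under stated assumptions, not the paper's implementation: the class name \texttt{LocalSelfAttention2d}, its parameters, and the choice of dynamic quantisation are illustrative, and relative positional embeddings and multi-head attention are omitted.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalSelfAttention2d(nn.Module):
    """Single-head local self-attention over a k x k neighbourhood.
    The 1x1 projections are nn.Linear layers so that post-training
    dynamic quantisation (which targets Linear modules) applies to them."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.k, self.out_ch = kernel_size, out_ch
        self.q = nn.Linear(in_ch, out_ch)
        self.key = nn.Linear(in_ch, out_ch)
        self.val = nn.Linear(in_ch, out_ch)

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.permute(0, 2, 3, 1).reshape(b, h * w, c)
        q = self.q(flat).permute(0, 2, 1)                      # (b, c_out, hw)
        k = self.key(flat).permute(0, 2, 1).reshape(b, self.out_ch, h, w)
        v = self.val(flat).permute(0, 2, 1).reshape(b, self.out_ch, h, w)
        k = F.unfold(k, self.k, padding=self.k // 2)           # (b, c_out*k*k, hw)
        v = F.unfold(v, self.k, padding=self.k // 2)
        k = k.view(b, self.out_ch, self.k * self.k, h * w)
        v = v.view(b, self.out_ch, self.k * self.k, h * w)
        scores = (q.unsqueeze(2) * k).sum(1, keepdim=True) / self.out_ch ** 0.5
        attn = torch.softmax(scores, dim=2)                    # over the k*k window
        return (attn * v).sum(2).view(b, self.out_ch, h, w)

net = nn.Sequential(LocalSelfAttention2d(3, 16), nn.ReLU(),
                    LocalSelfAttention2d(16, 8))
net.eval()
# Post-training dynamic quantisation of the linear projections to int8.
qnet = torch.quantization.quantize_dynamic(net, {nn.Linear}, dtype=torch.qint8)
print(qnet(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 8, 32, 32])
\end{verbatim}

In practice the proposed networks would also encode relative positional information and use multiple heads, and the quantisation scheme (dynamic versus static, bit width) would follow the paper's configuration rather than this sketch.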