As Convolutional Neural Networks (CNNs) become increasingly prevalent in deep learning applications, numerous algorithms, such as the Winograd algorithm, have been proposed to enhance their efficiency. However, existing implementations of Winograd Convolution based on General Matrix Multiplication (GEMM) exhibit certain limitations: the transformation stages account for a significant portion of the total runtime, computational efficiency is suboptimal, and a single fixed parallel strategy yields reduced parallel efficiency for certain layers. In this article, we present a novel fused Winograd Convolution algorithm that optimizes all three stages of Winograd Convolution (input and filter transformation, computation, and output transformation), carefully tailored for ARMv8 manycore CPUs. Our method preserves consecutive memory access as far as possible during the transformation stages and integrates data packing into a customized Z-shaped data layout, which is conducive to our meticulously optimized GEMM micro-kernel that uses a ping-pong technique. Moreover, we introduce a three-mode parallel strategy that switches adaptively with the scale of the convolutional layer, addressing the shortcomings of current methodologies. By manually optimizing each kernel at the assembly level and thoroughly analyzing the blocking parameters, we significantly reduce transformation time and enhance computational efficiency compared to state-of-the-art libraries. Experimental results on the Kunpeng 920 demonstrate that our method achieves speedups of up to 2.35x and 2.39x in single-thread execution, and geometric-mean speedups of 1.66x and 2.06x in multi-thread execution, over NCNN and NNPACK, respectively.
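For background, the three stages named above correspond to the standard Winograd minimal filtering algorithm of Lavin and Gray; the specific tile size below, F(2x2, 3x3), is an illustrative choice and not necessarily the configuration used in the paper. A 4x4 input tile \(d\) and a 3x3 filter \(g\) produce a 2x2 output tile \(Y\) via

\[
Y = A^{\top}\!\left[(G\,g\,G^{\top}) \odot (B^{\top} d\, B)\right] A,
\]

\[
B^{\top} = \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & -1 & 1 & 0 \\ 0 & 1 & 0 & -1 \end{bmatrix},\quad
G = \begin{bmatrix} 1 & 0 & 0 \\ \tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{2} \\ 0 & 0 & 1 \end{bmatrix},\quad
A^{\top} = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 0 & 1 & -1 & -1 \end{bmatrix}.
\]

Here \(B^{\top} d\, B\) and \(G\,g\,G^{\top}\) are the input and filter transformations, \(\odot\) denotes the element-wise (Hadamard) product, and the outer \(A^{\top}(\cdot)A\) is the output transformation. When the element-wise products are accumulated over input channels and batched over tiles, each of the 16 transform-domain positions becomes an independent GEMM, which is the computation stage that the paper's packed data layout and micro-kernel target.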