Convolutional Neural Networks (CNNs), one of the most representative algorithms of deep learning, are widely used in various artificial intelligence applications. Convolution operations often account for most of the computational overhead of CNNs. FFT-based algorithms can improve the efficiency of convolution by reducing its arithmetic complexity, and there has been considerable work on high-performance implementations of FFT-based convolution on many-core CPUs. However, none of this work optimizes for the non-uniform memory access (NUMA) characteristics of many-core CPUs. In this paper, we present a NUMA-aware FFT-based convolution implementation for ARMv8 many-core CPUs with NUMA architectures. The implementation reduces the number of remote memory accesses through data reordering in the FFT transformations and a three-level parallelization of the complex matrix multiplications. Experimental results on an ARMv8 many-core CPU with NUMA architectures demonstrate that our NUMA-aware implementation significantly outperforms the state-of-the-art work in most cases.
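To make the complexity-reduction claim concrete, the sketch below illustrates the convolution theorem that underlies all FFT-based convolution: a spatial convolution becomes an elementwise product in the frequency domain, replacing the O(n^2 k^2) sliding-window sum with O(n^2 log n) transforms. This is only a minimal single-channel NumPy illustration, not the paper's NUMA-aware ARMv8 implementation; the function name fft_conv2d and the shapes used are hypothetical.

import numpy as np

def fft_conv2d(image, kernel):
    # Full linear convolution via the convolution theorem: zero-pad both
    # operands to the output size, multiply their FFTs elementwise, and
    # inverse-transform. (Illustrative sketch, not the paper's kernels.)
    out_h = image.shape[0] + kernel.shape[0] - 1
    out_w = image.shape[1] + kernel.shape[1] - 1
    F_image = np.fft.rfft2(image, s=(out_h, out_w))
    F_kernel = np.fft.rfft2(kernel, s=(out_h, out_w))
    return np.fft.irfft2(F_image * F_kernel, s=(out_h, out_w))

# Check against a direct spatial convolution on a small example.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3))
ref = np.zeros((10, 10))
for i in range(8):
    for j in range(8):
        ref[i:i+3, j:j+3] += x[i, j] * w  # scatter form of convolution
assert np.allclose(fft_conv2d(x, w), ref)

In a real CNN workload, the per-tile frequency-domain products across channels are batched into complex matrix multiplications, which is the stage the abstract's three-level parallelization targets.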