The Discrete Fourier Transform (DFT) is essential for a wide range of applications, from signal processing to convolution and polynomial multiplication. The groundbreaking Fast Fourier Transform (FFT) algorithm reduces the time complexity of the DFT from the naive O(n^2) to O(n log n), and recent works have sought further acceleration through parallel architectures such as GPUs. Unfortunately, accelerators such as GPUs cannot exploit their full computing capability because memory access becomes the bottleneck. Therefore, this paper accelerates the FFT algorithm using digital Processing-in-Memory (PIM) architectures that shift computation into memory by exploiting physical devices capable of both storage and logic (e.g., memristors). We propose an O(log n) in-memory FFT algorithm that can also be performed in parallel across multiple arrays for high-throughput batched execution, supporting both fixed-point and floating-point numbers. Through the convolution theorem, we extend this algorithm to O(log n) polynomial multiplication, a fundamental task for applications such as cryptography. We evaluate FourierPIM on a publicly available cycle-accurate simulator that verifies both correctness and performance, and demonstrate 5-15x throughput and 4-13x energy improvements over the NVIDIA cuFFT library on state-of-the-art GPUs for FFT and polynomial multiplication.
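To make the convolution-theorem step concrete, here is a minimal NumPy sketch of FFT-based polynomial multiplication: the coefficient vectors are transformed, multiplied pointwise in the frequency domain, and transformed back. This is only an illustration of the underlying identity on a CPU; it is not the paper's PIM algorithm, and the function name `poly_mul_fft` is a hypothetical helper introduced here.

```python
# Minimal sketch of the convolution theorem (not the paper's PIM implementation):
# multiplying polynomials by pointwise-multiplying their DFTs.
import numpy as np

def poly_mul_fft(a, b):
    """Multiply two integer coefficient vectors a and b via the FFT."""
    n = len(a) + len(b) - 1            # length of the full (linear) convolution
    size = 1 << (n - 1).bit_length()   # pad to a power of two, as radix-2 FFTs expect
    fa = np.fft.rfft(a, size)          # O(n log n) transforms here; the paper's
    fb = np.fft.rfft(b, size)          # in-memory variant targets O(log n) latency
    prod = np.fft.irfft(fa * fb, size)[:n]
    return np.round(prod).astype(int)  # round back to integer coefficients

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_mul_fft([1, 2], [3, 4]))    # -> [ 3 10  8]
```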