Today's graphics processing unit (GPU) applications produce vast volumes of data, which are challenging to store and transfer efficiently. Data compression is therefore becoming a critical technique for mitigating the storage burden and communication cost. LZSS is the core algorithm in many widely used compressors, such as Deflate. However, existing GPU-based LZSS compressors suffer from low throughput due to the sequential nature of the LZSS algorithm. Moreover, many GPU applications produce multi-byte data (e.g., int16/int32 indices and floating-point numbers), while current LZSS compressors accept only single-byte data as input. To this end, in this work, we propose GPULZ, a highly efficient LZSS compressor for multi-byte data on modern GPUs. The contribution of our work is fourfold. First, we perform an in-depth analysis of existing LZ compressors for GPUs and identify their main issues. Second, we propose two main algorithm-level optimizations: we (1) change the prefix sum from one pass to two passes and fuse multiple kernels to reduce data movement between shared memory and global memory, and (2) optimize the existing pattern-matching approach for multi-byte symbols to reduce computational complexity and explore longer repeated patterns. Third, we perform architectural performance optimizations, such as maximizing shared memory utilization by adapting data partitions to different GPU architectures. Finally, we evaluate GPULZ on six datasets of various types with NVIDIA A100 and A4000 GPUs. Results show that GPULZ achieves up to 272.1X speedup on A4000 and up to 1.4X higher compression ratio compared to state-of-the-art solutions.
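For readers unfamiliar with the baseline algorithm, the following is a minimal sequential CPU sketch of LZSS over fixed-width multi-byte symbols. It is illustrative only: the function names and token format are ours, not the parallel GPULZ kernels described in the paper.

```python
# Minimal sequential LZSS sketch over fixed-width symbols (e.g. each
# int16/int32 value is one symbol). Illustrative only -- not the
# parallel GPULZ kernels; names and token format are assumptions.

def lzss_compress(symbols, window=4096, min_match=2):
    """Greedy LZSS: emit ('lit', symbol) or ('ref', distance, length)
    tokens, searching a sliding window of prior symbols."""
    out, i, n = [], 0, len(symbols)
    while i < n:
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            # Overlapping matches are allowed, as in classic LZSS.
            while i + length < n and symbols[j + length] == symbols[i + length]:
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        if best_len >= min_match:
            out.append(('ref', best_dist, best_len))
            i += best_len
        else:
            out.append(('lit', symbols[i]))
            i += 1
    return out

def lzss_decompress(tokens):
    """Invert lzss_compress: replay literals and back-references."""
    out = []
    for tok in tokens:
        if tok[0] == 'lit':
            out.append(tok[1])
        else:
            _, dist, length = tok
            start = len(out) - dist
            for k in range(length):         # symbol-by-symbol copy so
                out.append(out[start + k])  # overlapping refs work
    return out
```

Treating each int16/int32 value as one symbol, as here, is what lets a match of the same token length cover a longer span of bytes than a byte-alphabet LZSS.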
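The prefix-sum step mentioned above arises because each GPU thread block emits a variable-size compressed chunk, so an exclusive scan over the per-chunk sizes is needed to give every block its write offset in the contiguous output. A hedged CPU-side sketch of that compaction step (the helper name is ours):

```python
from itertools import accumulate

# Each "chunk" stands in for one thread block's variable-size output.
# An exclusive prefix sum over the sizes yields each chunk's write
# offset in the packed buffer. CPU-side sketch, not a GPULZ kernel.

def compact_chunks(chunks):
    sizes = [len(c) for c in chunks]
    # Exclusive scan: offset of chunk k = sum of sizes[:k].
    offsets = [0] + list(accumulate(sizes))[:-1]
    out = bytearray(sum(sizes))
    for off, c in zip(offsets, chunks):
        out[off:off + len(c)] = c  # "scatter" each chunk to its slot
    return offsets, bytes(out)
```

On a GPU this scan is itself parallel, which is why reorganizing it (and fusing it with neighboring kernels) can cut round trips between shared and global memory.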