The way developers implement their algorithms, and how these implementations behave on modern CPUs, is governed by the design and organization of these CPUs. The vectorization units (SIMD) are among the few CPU components that can and must be explicitly controlled. In the HPC community, x86 CPUs and their vectorization instruction sets were the de facto standard for decades. Each new release of an instruction set usually doubled the vector length and introduced new operations, and each generation required adapting and improving previous implementations. The release of the ARM scalable vector extension (SVE) changed things radically for several reasons. First, we expect ARM processors to equip many supercomputers in the coming years. Second, SVE's interface differs from the x86 extensions in several aspects: it provides different instructions, uses a predicate to control most operations, and has a vector size that is only known at execution time. Therefore, using SVE raises new challenges in adapting algorithms, including ones that are already well optimized on x86. In this paper, we port a hybrid sort based on the well-known Quicksort and Bitonic-sort algorithms. We use a Bitonic sort to process small partitions/arrays and a vectorized partitioning implementation to divide the partitions. We explain how we use the predicates, how we manage the non-static vector size, and how we efficiently implement the sorting kernels. Our approach only needs an array of size O(log N) for the recursive calls in the partitioning phase, both in the sequential and in the parallel case. We test the performance of our approach on a modern ARMv8.2 CPU and assess the different layers of our implementation by sorting/partitioning integers, double floating-point numbers, and key/value pairs of integers. Our approach is faster than the GNU C++ sort algorithm by a speedup factor of 4 on average.
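To make the structure concrete, the following is a minimal scalar C++ sketch of the hybrid scheme the abstract describes: Quicksort-style partitioning drives the iteration, small partitions fall back to a dedicated small-array sort, and an explicit stack replaces recursion. This is an illustration only, not the paper's SVE implementation: the threshold `kSmallThreshold` is an assumed cutoff, `std::sort` stands in for the vectorized Bitonic kernel, and `std::partition` stands in for the SVE-predicated partitioning kernel. Pushing the larger side and continuing with the smaller side is what bounds the stack at O(log N) entries.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

template <typename T>
void hybrid_sort(std::vector<T>& data) {
    constexpr std::size_t kSmallThreshold = 16;  // assumed cutoff, not from the paper
    // Explicit stack of [first, last) ranges; replaces recursive calls.
    std::vector<std::pair<std::size_t, std::size_t>> stack;
    stack.emplace_back(0, data.size());
    while (!stack.empty()) {
        auto [first, last] = stack.back();
        stack.pop_back();
        while (last - first > kSmallThreshold) {
            // Median-of-three pivot selection.
            T a = data[first], b = data[first + (last - first) / 2], c = data[last - 1];
            T pivot = std::max(std::min(a, b), std::min(std::max(a, b), c));
            auto begin = data.begin();
            // Scalar partition; the paper replaces this step with an
            // SVE-vectorized partitioning kernel. Elements equal to the
            // pivot are grouped in [mid, mid2) to guarantee progress.
            std::size_t mid = std::partition(begin + first, begin + last,
                                  [&](const T& v) { return v < pivot; }) - begin;
            std::size_t mid2 = std::partition(begin + mid, begin + last,
                                  [&](const T& v) { return !(pivot < v); }) - begin;
            // Push the larger side, keep iterating on the smaller side:
            // the i-th stack entry then holds at most N / 2^(i-1) elements,
            // so the stack never exceeds O(log N) entries.
            if (mid - first > last - mid2) {
                stack.emplace_back(first, mid);
                first = mid2;
            } else {
                stack.emplace_back(mid2, last);
                last = mid;
            }
        }
        // Small leaf partition: std::sort stands in for the Bitonic kernel.
        std::sort(data.begin() + first, data.begin() + last);
    }
}
```

In the actual SVE version, both the partitioning kernel and the Bitonic kernel would operate on whole vectors, using predicates to handle the lanes of a final, partially filled vector instead of a scalar remainder loop.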