Stencil computation is one of the most widely used kernels in a broad range of scientific applications, from large-scale weather prediction to solving partial differential equations. Stencil computations are characterized by three key properties: (1) low arithmetic intensity, (2) limited temporal data reuse, and (3) regular and predictable data access patterns. As a result, stencil computations are typically bandwidth-bound workloads that benefit only marginally from the deep cache hierarchy of modern CPUs. In this work, we propose Casper, a near-cache accelerator consisting of specialized stencil compute units connected to the last-level cache (LLC) of a traditional CPU. Casper is based on two key ideas: (1) avoiding the cost of moving rarely reused data through the cache hierarchy, and (2) exploiting the regularity of the data accesses and the inherent parallelism of stencil computations to improve overall performance. With minimal changes to the LLC's address-decoding logic and data placement, Casper performs stencil computations at the peak bandwidth of the LLC. We show that, by tightly coupling lightweight stencil compute units to the LLC, Casper improves the performance of stencil kernels by 1.65x on average and reduces energy consumption by 35% compared to a commercial high-performance multi-core processor. Moreover, Casper provides a 37x improvement in performance per area compared to a state-of-the-art GPU.
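To make the three properties concrete, the following is a minimal sketch of a standard 2D 5-point Jacobi stencil in C. It is an illustrative kernel, not Casper's specific workload; the grid size N and the coefficient are assumptions chosen for the example.

```c
#include <stddef.h>

#define N 1024  /* illustrative grid dimension */

/* One Jacobi sweep: each output point averages its four neighbors and
 * itself. Per point: 5 loads, 1 store, 4 adds, 1 multiply, i.e. roughly
 * 0.1 flop per byte moved (absent reuse), which illustrates the low
 * arithmetic intensity. The i/j loop order also yields the regular,
 * predictable access pattern noted above. */
void jacobi_5pt(const double in[N][N], double out[N][N]) {
    for (size_t i = 1; i < N - 1; i++) {
        for (size_t j = 1; j < N - 1; j++) {
            out[i][j] = 0.2 * (in[i][j] +
                               in[i - 1][j] + in[i + 1][j] +
                               in[i][j - 1] + in[i][j + 1]);
        }
    }
}
```

Because each input value is reused by only a handful of neighboring output points, such kernels stream through memory and are limited by available bandwidth rather than compute throughput, which is the behavior Casper targets.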