PIM architectures aim to reduce data transfer costs between processors and memory by integrating processing units within memory layers. Prior PIM architectures have shown potential to improve energy efficiency and performance, but these advantages rely on data being close to the processing units performing the computation: when data must be moved between a processing unit and a distant memory location, the resulting data movement overhead degrades PIM's performance and energy efficiency. In this paper, we demonstrate that a large fraction of PIM's latency per memory request is attributed to data transfers and queuing delays caused by remote memory accesses. To improve PIM's data locality, we propose DL-PIM, a novel architecture that dynamically detects data movement overhead and proactively moves data to a reserved area in the local memory of the requesting processing unit. DL-PIM uses a distributed address-indirection hardware lookup table to redirect traffic to the data's current location. We propose DL-PIM implementations on two 3D-stacked memories: HMC and HBM. While some workloads benefit from DL-PIM, others are hurt by the additional latency of indirection accesses. We therefore propose an adaptive mechanism that assesses the cost and benefit of indirection and dynamically enables or disables it, preventing degradation for workloads that suffer from indirection. Overall, DL-PIM reduces the average memory latency per request by 54% in HMC and 50% in HBM, which translates into a performance improvement of 15% in HMC and 5% in HBM for workloads with substantial data reuse. Across all representative workloads, DL-PIM achieves a 6% speedup in HMC and a 3% speedup in HBM, showing that it enhances data locality and overall system performance.
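To make the indirection and adaptation ideas concrete, the following is a minimal C++ sketch of how a per-vault address-indirection table and the adaptive enable/disable decision could be modeled; it is an illustrative assumption, not the paper's actual hardware design, and all names (IndirectionTable, resolve, relocate, adapt) are hypothetical.

```cpp
// Hypothetical software model of DL-PIM's indirection idea (not the paper's design):
// each vault keeps a small table mapping block addresses to their current vault,
// and a simple cost/benefit counter decides whether indirection stays enabled.
#include <cstdint>
#include <unordered_map>

struct IndirectionTable {
    std::unordered_map<uint64_t, uint32_t> relocated;  // block address -> current vault
    bool enabled = true;
    int64_t benefit = 0;   // lookups that found a locally relocated copy
    int64_t cost = 0;      // lookups that missed and only added latency

    // Resolve where a request should be sent; home_vault is the address's static mapping.
    uint32_t resolve(uint64_t block_addr, uint32_t home_vault) {
        if (!enabled) return home_vault;
        auto it = relocated.find(block_addr);
        if (it != relocated.end()) { ++benefit; return it->second; }
        ++cost;
        return home_vault;
    }

    // Record that a block was proactively copied into the requester's reserved local area.
    void relocate(uint64_t block_addr, uint32_t new_vault) {
        if (enabled) relocated[block_addr] = new_vault;
    }

    // Periodically re-evaluate: disable indirection when its overhead outweighs its savings.
    void adapt() {
        enabled = (benefit >= cost);
        benefit = 0;
        cost = 0;
    }
};
```

In this sketch the adapt() step plays the role of the adaptive mechanism described above: workloads with little reuse accumulate misses, so indirection is switched off and requests fall back to the static home mapping.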