We explore an online reinforcement learning (RL) paradigm to dynamically optimize parallel particle tracing performance in distributed-memory systems. Our method combines three novel components: (1) a work donation algorithm, (2) a high-order workload estimation model, and (3) a communication cost model. First, we design an RL-based work donation algorithm. Our algorithm monitors the workloads of processes and creates RL agents that donate data blocks and particles from high-workload processes to low-workload processes to minimize program execution time. The agents learn the donation strategy on the fly based on reward and cost functions designed to account for processes' workload changes and the data transfer costs of donation actions. Second, we propose a high-order workload estimation model that helps RL agents estimate the workload distribution of processes in future computations. Third, we design a communication cost model that considers both block and particle data exchange costs, helping RL agents make effective donation decisions with minimized communication costs. We demonstrate that our algorithm adapts to different flow behaviors in large-scale fluid dynamics, ocean, and weather simulation data. Our algorithm improves parallel particle tracing performance in terms of parallel efficiency, load balance, and I/O and communication costs in evaluations with up to 16,384 processors.
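To make the work-donation idea concrete, the sketch below shows one possible way an online RL agent could choose a receiving process for a donation and be rewarded for reducing the straggler's workload while penalizing transfer cost. This is a minimal, hypothetical illustration: the class and function names (DonationAgent, reward), the tabular Q-learning formulation, the workload discretization, and all hyperparameters are assumptions for exposition and are not taken from the paper.

```python
# Hypothetical sketch of an RL work-donation agent (not the paper's implementation).
import random
from collections import defaultdict

class DonationAgent:
    """Tabular Q-learning agent that picks which low-workload process (if any)
    should receive a donation of blocks/particles from its high-workload owner."""

    def __init__(self, n_procs, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.n_procs = n_procs
        self.epsilon = epsilon  # exploration rate
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        # Q[state][action]: action = index of receiving process, or n_procs = "no donation"
        self.q = defaultdict(lambda: [0.0] * (n_procs + 1))

    def _state(self, workloads):
        # Coarsely discretize the workload vector so the Q-table stays small.
        mean = sum(workloads) / len(workloads)
        return tuple(int(w > 1.2 * mean) for w in workloads)

    def choose(self, workloads):
        s = self._state(workloads)
        if random.random() < self.epsilon:
            return random.randrange(self.n_procs + 1)
        qs = self.q[s]
        return qs.index(max(qs))

    def update(self, workloads, action, reward_value, next_workloads):
        s, s_next = self._state(workloads), self._state(next_workloads)
        best_next = max(self.q[s_next])
        self.q[s][action] += self.alpha * (
            reward_value + self.gamma * best_next - self.q[s][action]
        )

def reward(workloads_before, workloads_after, transfer_cost, penalty=0.1):
    """Reward reductions in the maximum (straggler) workload, minus a penalty
    proportional to an assumed block/particle transfer cost."""
    return (max(workloads_before) - max(workloads_after)) - penalty * transfer_cost

# Example: one donation decision for four processes with hypothetical workloads.
agent = DonationAgent(n_procs=4)
w = [10.0, 2.0, 3.0, 1.0]
action = agent.choose(w)  # index of the receiving rank, or 4 for "no donation"
```

The reward term mirrors the two concerns stated in the abstract, workload change and data transfer cost; the tabular Q-learning update merely stands in for whatever online RL method the paper actually employs.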