The Worldwide LHC Computing Grid (WLCG) provides the robust computing infrastructure essential for the LHC experiments by integrating global computing resources into a cohesive entity. Simulations of different compute models offer a feasible approach for evaluating future adaptations that can cope with increasing demands. However, running these simulations involves a trade-off between accuracy and scalability. For example, while the simulator DCSim can provide accurate results, it falls short in scaling with the size of the simulated platform. Using Generative Machine Learning as a surrogate is a candidate for overcoming this challenge. In this work, we evaluate three different Machine Learning models for the simulation of distributed computing systems and assess their ability to generalize to unseen situations. We show that these models can predict central observables derived from execution traces of compute jobs with approximate accuracy, but with orders of magnitude faster execution times. Furthermore, we identify potential for improving the predictions towards better accuracy and generalizability.