The unknown parameters of simulation models often need to be calibrated using observed data. When simulation models are expensive, calibration is usually carried out with an emulator. The effectiveness of the calibration process can be significantly improved by sequentially selecting the parameters used to build the emulator. The expansion of parallel computing environments, from multicore personal computers to many-node servers to large-scale cloud computing platforms, can lead to further gains in calibration efficiency by allowing the simulation model to be evaluated at a batch of parameters in parallel within a sequential design. However, understanding the performance implications of different sequential approaches in parallel computing environments introduces new complexities, since the achievable speed-up is affected by many factors, such as the run time of the simulation model and the variability of that run time. This work proposes a new performance model for understanding and benchmarking the performance of different sequential procedures for the calibration of simulation models in parallel environments. We provide metrics and a suite of techniques for visualizing the results of numerical experiments and demonstrate these with a novel sequential procedure. The proposed performance model, the new sequential procedure, and other state-of-the-art techniques are implemented in the open-source Python software package Parallel Uncertainty Quantification (PUQ), which allows users to run simulation models in parallel.
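To illustrate the batch-parallel idea described above, the following is a minimal sketch of a batch-sequential loop in which a placeholder `simulation_model` is evaluated at a batch of parameters in parallel at each stage. This is an illustrative assumption, not the PUQ API or the paper's acquisition procedure; the batch-selection rule here is a random stand-in, whereas a real sequential procedure would rank candidates with an emulator.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def simulation_model(theta):
    # Cheap placeholder standing in for an expensive simulation run.
    return float(np.sum(np.sin(theta)))


def acquire_batch(candidates, batch_size, rng):
    # Stand-in acquisition rule: pick a random subset of candidates.
    # A genuine sequential procedure would score candidates via an emulator.
    idx = rng.choice(len(candidates), size=batch_size, replace=False)
    return candidates[idx]


def batch_sequential_calibration(n_stages=5, batch_size=4, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    thetas, outputs = [], []
    with ProcessPoolExecutor(max_workers=batch_size) as pool:
        for _ in range(n_stages):
            # Candidate parameters for this stage of the sequential design.
            candidates = rng.uniform(0.0, 1.0, size=(100, dim))
            batch = acquire_batch(candidates, batch_size, rng)
            # Evaluate the whole batch in parallel; the wall-clock time per
            # stage is set by the slowest run, so run-time variability matters.
            results = list(pool.map(simulation_model, batch))
            thetas.extend(batch)
            outputs.extend(results)
    return np.asarray(thetas), np.asarray(outputs)


if __name__ == "__main__":
    thetas, outputs = batch_sequential_calibration()
    print(thetas.shape, outputs.shape)
```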