Projected kernel calibration is a recently proposed frequentist calibration method that is asymptotically normal and semiparametric. Its loss function is usually referred to as the PK loss function. In this work, we prove the uniform convergence of the PK loss function and show that (1) when the sample size is large, any local minimum point and any local maximum point of the $L_2$ loss between the true process and the computer model is a local minimum point of the PK loss function; and (2) all local minima of the PK loss function converge to the same value. These theoretical results imply that it is extremely hard for projected kernel calibration to identify the global minimum of the $L_2$ loss, i.e., the optimal value of the calibration parameters. To solve this problem, we propose and analyze in detail a frequentist method, which we term penalized projected kernel calibration. We prove that the proposed method is as efficient as the projected kernel calibration method. Through an extensive set of numerical simulations and a real-world case study, we show that the proposed calibration method can accurately estimate the calibration parameters. We also show that its performance compares favorably to that of other calibration methods regardless of the sample size.
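For concreteness, the optimal calibration parameter referred to above is the minimizer of the $L_2$ loss between the true process and the computer model. A sketch of this standard criterion, writing $\zeta$ for the true process, $f(\cdot,\theta)$ for the computer model, and $\Omega$ for the input domain (these symbols are our assumptions, not fixed by the text above):
\[
\theta^{*} \;=\; \operatorname*{arg\,min}_{\theta \in \Theta} \, \bigl\| \zeta - f(\cdot,\theta) \bigr\|_{L_2(\Omega)}^{2}
\;=\; \operatorname*{arg\,min}_{\theta \in \Theta} \int_{\Omega} \bigl( \zeta(x) - f(x,\theta) \bigr)^{2} \, \mathrm{d}x .
\]
The results stated above say that, asymptotically, the PK loss function has local minima not only near $\theta^{*}$ but also near every other local extremum of this $L_2$ loss, all attaining the same limiting value, which is what makes the global minimizer hard to identify.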