Projected kernel (PK) calibration is known to be theoretically superior; we refer to its loss function as the PK loss function. In this work, we prove the uniform convergence of the PK loss function and show that (1) when the sample size is large, every local minimum point and every local maximum point of the $L_2$ loss between the true process and the computer model is a local minimum point of the PK loss function, and (2) all local minimum values of the PK loss function converge to the same value. These theoretical results imply that it is extremely hard for projected kernel calibration to identify the global minimum point of the $L_2$ loss, which is defined as the optimal value of the calibration parameters. To address this problem, we propose a frequentist method called penalized projected kernel calibration. As a frequentist method, it is proved to be semiparametric efficient. Moreover, it has a natural Bayesian version, which allows users to compute credible regions for the calibration parameters without relying on a large-sample approximation. Through extensive simulation studies and a real-world case study, we show that the proposed method estimates the calibration parameters accurately and compares favorably with alternative calibration methods regardless of the sample size.
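For concreteness, the "optimal value of the calibration parameters" referenced above is typically defined as the $L_2$ projection of the true process onto the computer model class; a sketch in the usual notation (the symbols $\zeta$, $f$, $\Omega$, and $\Theta$ are assumptions of this illustration, not taken from the abstract) is
$$
\theta^{*} \;=\; \operatorname*{argmin}_{\theta \in \Theta} \,\bigl\| \zeta(\cdot) - f(\cdot,\theta) \bigr\|_{L_2(\Omega)}^{2} \;=\; \operatorname*{argmin}_{\theta \in \Theta} \int_{\Omega} \bigl( \zeta(x) - f(x,\theta) \bigr)^{2}\, dx,
$$
where $\zeta$ denotes the true physical process, $f(\cdot,\theta)$ the computer model at calibration parameter $\theta$, $\Omega$ the input domain, and $\Theta$ the parameter space. The results above concern how well minimizers of the PK loss function track this $\theta^{*}$.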