Zero-variance control variates (ZV-CV) are a post-processing method for reducing the variance of Monte Carlo estimators of expectations using the derivatives of the log target. Once the derivatives are available, the only additional computational effort lies in solving a linear regression problem. Significant variance reductions have been achieved with this method in low-dimensional examples, but the number of covariates in the regression grows rapidly with the dimension of the target. In this paper, we present compelling empirical evidence that the use of penalized regression techniques in the selection of high-dimensional control variates provides performance gains over the classical least squares method. Another type of regularization based on using subsets of derivatives, or a priori regularization as we refer to it in this paper, is also proposed to reduce computational and storage requirements. Several examples showing the utility and limitations of regularized ZV-CV for Bayesian inference are given. The methods proposed in this paper are accessible through the R package ZVCV.
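To make the regression viewpoint concrete, the following is a minimal sketch in R, not the implementation inside the ZVCV package: the integrand evaluations are regressed on the gradients of the log target, and the intercept of the fit is the post-processed estimate of the expectation. The penalized variant here uses the lasso via glmnet as an assumed stand-in for the penalized regression techniques discussed in the paper; the function names zv_cv_ols and zv_cv_lasso are illustrative.

```r
# Minimal sketch of first-order ZV-CV (illustrative; not the internals of
# the ZVCV package). Inputs: fx, a length-n vector of integrand evaluations;
# u, an n x d matrix of gradients of the log target at the same samples.

zv_cv_ols <- function(fx, u) {
  # Regress the integrand on the score-based control variates; the OLS
  # intercept equals mean(fx - u %*% beta_hat), i.e. the ZV-CV estimate.
  fit <- lm(fx ~ u)
  unname(coef(fit)[1])
}

zv_cv_lasso <- function(fx, u) {
  # Penalized (lasso) alternative for high-dimensional control variates,
  # using glmnet's cross-validated regularization path (glmnet assumed installed).
  fit <- glmnet::cv.glmnet(as.matrix(u), fx, alpha = 1)
  as.numeric(coef(fit, s = "lambda.min"))[1]
}

# Toy usage: estimate E[X1] under a standard normal target, for which the
# gradient of the log density is simply -x.
set.seed(1)
x  <- matrix(rnorm(1000 * 5), ncol = 5)   # i.i.d. draws standing in for MCMC samples
u  <- -x                                  # grad log pi(x) for N(0, I)
fx <- x[, 1]                              # integrand f(x) = x1, true value 0
c(vanilla = mean(fx), zv_ols = zv_cv_ols(fx, u), zv_lasso = zv_cv_lasso(fx, u))
```

The ZVCV package referred to in the abstract provides this functionality directly, including higher-order polynomial control variates and the a priori subsetting of derivatives; the sketch above only mirrors the first-order regression formulation under the stated assumptions.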