Evaluating the Expected Information Gain (EIG) is a critical task in many areas of computational science and statistics and requires approximating nested integrals. Existing techniques for this problem based on Quasi-Monte Carlo (QMC) methods have primarily focused on improving the efficiency of the inner-integral approximation. In this work, we introduce a novel approach that extends these efforts to address the inner and outer expectations simultaneously. Leveraging the principles of Owen's scrambling, we develop a randomized quasi-Monte Carlo (RQMC) method that improves the approximation of nested integrals. We also show how to combine this methodology with importance sampling to address the measure concentration arising in the inner integral. Our RQMC method exploits the particular structure of nested expectations to offer a more efficient approximation mechanism. By incorporating Owen's scrambling techniques, we can handle integrands exhibiting infinite variation in the Hardy-Krause (HK) sense, paving the way for theoretically sound error estimates. We derive asymptotic bounds on the bias and variance of our estimator. In addition, we provide nearly optimal sample sizes for the inner and outer RQMC approximations, which guide practical numerical implementation. We verify the quality of our estimator through numerical experiments in the context of Bayesian optimal experimental design. Specifically, we compare the computational efficiency of our RQMC method against standard nested Monte Carlo integration across two case studies: one in thermo-mechanics and the other in pharmacokinetics. These examples demonstrate the computational savings and broader applicability of our approach to estimating the EIG.
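For context, a minimal sketch of the nested-integral structure referred to above, in notation of our own choosing (the paper's symbols may differ): with prior density \(\pi(\theta)\) and likelihood \(p(y \mid \theta)\), the EIG is the nested expectation

\[
\mathrm{EIG} = \int \int \log\!\left( \frac{p(y \mid \theta)}{p(y)} \right) p(y \mid \theta)\, \pi(\theta)\, \mathrm{d}y\, \mathrm{d}\theta,
\qquad
p(y) = \int p(y \mid \theta')\, \pi(\theta')\, \mathrm{d}\theta',
\]

where the evidence \(p(y)\) is itself an integral, making the problem nested. The standard nested estimator replaces both integrals with sample averages,

\[
\widehat{\mathrm{EIG}}_{N,M} = \frac{1}{N} \sum_{n=1}^{N} \log\!\left( \frac{p(y_n \mid \theta_n)}{\tfrac{1}{M} \sum_{m=1}^{M} p(y_n \mid \theta_{n,m})} \right),
\]

with outer samples \((\theta_n, y_n)\) drawn from the joint density \(p(y \mid \theta)\,\pi(\theta)\) and inner samples \(\theta_{n,m}\) drawn from the prior. An RQMC variant of this estimator would typically generate both sample sets by mapping scrambled low-discrepancy points through the relevant inverse transforms rather than using i.i.d. draws; the specific construction and its error analysis are the subject of the paper.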