With the aim of analysing the performance of Markov chain Monte Carlo (MCMC) methods, in our recent work we derive a large deviation principle (LDP) for the empirical measures of Metropolis-Hastings (MH) chains on a continuous state space. One of the (sufficient) assumptions for the LDP involves the existence of a particular type of Lyapunov function, and it was left as an open question whether such a function exists for specific choices of MH samplers. In this paper we analyse the properties of such Lyapunov functions and investigate their existence for some of the most popular choices of MCMC samplers built on MH dynamics: Independent Metropolis-Hastings, Random Walk Metropolis, and the Metropolis-adjusted Langevin algorithm. We establish under what conditions such a Lyapunov function exists, and from this obtain LDPs for some instances of the MCMC algorithms under consideration. To the best of our knowledge, these are the first large deviation results for empirical measures associated with Metropolis-Hastings chains for specific choices of proposal and target distributions.
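As a purely illustrative aid (not taken from the paper), the following minimal Python sketch shows one of the samplers under consideration, Random Walk Metropolis, together with the empirical measure whose large deviations are the object of study; the standard Gaussian target, step size, and test function are assumptions made only for this example.

```python
import numpy as np

def random_walk_metropolis(log_target, x0, n_steps, step_size=0.5, rng=None):
    """Random Walk Metropolis chain targeting the density proportional to exp(log_target).

    Proposals are symmetric Gaussian perturbations: Y = X_k + step_size * N(0, I).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    chain = np.empty((n_steps, x.size))
    log_p_x = log_target(x)
    for k in range(n_steps):
        y = x + step_size * rng.standard_normal(x.size)
        log_p_y = log_target(y)
        # MH accept/reject step; the proposal is symmetric, so the
        # acceptance ratio reduces to pi(y) / pi(x).
        if np.log(rng.uniform()) < log_p_y - log_p_x:
            x, log_p_x = y, log_p_y
        chain[k] = x.copy()
    return chain

# Assumed target for illustration: standard Gaussian.
log_target = lambda x: -0.5 * np.sum(x ** 2)

chain = random_walk_metropolis(log_target, x0=np.zeros(1), n_steps=10_000)

# Empirical measure L_n = (1/n) * sum_k delta_{X_k}, evaluated against a
# test function f via (1/n) * sum_k f(X_k).
f = lambda x: x ** 2
print("L_n(f) =", np.mean(f(chain)))
```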