Markov chain Monte Carlo (MCMC) algorithms are based on the construction of a Markov chain whose transition probabilities leave invariant a probability distribution of interest. In this work, we view these transition probabilities as functions of their invariant distributions, and we develop a notion of derivative in the invariant distribution of an MCMC kernel. Around this concept we build a set of tools that we refer to as Markov chain Monte Carlo Calculus. This allows us to compare Markov chains with different invariant distributions within a suitable class via what we refer to as mean value inequalities. We explain how MCMC Calculus provides a natural framework to study algorithms that use an approximation of their invariant distribution, and we illustrate this by using the tools developed to prove convergence of interacting and sequential MCMC algorithms. Finally, we discuss how similar ideas can be applied in other frameworks.
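The invariance property underlying the abstract can be illustrated with a minimal sketch: a random-walk Metropolis–Hastings kernel whose acceptance rule guarantees the target distribution is left invariant. This is a generic textbook construction, not the paper's method; all function names and parameters below are illustrative assumptions.

```python
import math
import random

def mh_kernel(x, target_logpdf, step=1.0, rng=random):
    """One Metropolis-Hastings transition with a Gaussian random-walk
    proposal. The accept/reject rule makes the chain leave the target
    distribution invariant (illustrative sketch, not the paper's kernel)."""
    y = x + rng.gauss(0.0, step)                       # propose a move
    log_alpha = target_logpdf(y) - target_logpdf(x)    # log acceptance ratio
    if math.log(rng.random()) < log_alpha:             # accept with prob min(1, alpha)
        return y
    return x                                           # otherwise stay put

def std_normal_logpdf(x):
    # Log-density of N(0, 1) up to an additive constant.
    return -0.5 * x * x

def run_chain(n, burn=1000, seed=0):
    """Run the kernel repeatedly; after burn-in, samples are approximately
    distributed according to the invariant (standard normal) target."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for i in range(n + burn):
        x = mh_kernel(x, std_normal_logpdf, rng=rng)
        if i >= burn:
            samples.append(x)
    return samples
```

As a rough empirical check of invariance, the post-burn-in samples should have mean near 0 and variance near 1, matching the standard normal target.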