The technique of modifying the geometry of a problem from the Euclidean to a Hessian metric has proved to be quite effective in optimization, and has also been studied for sampling. The Mirror Langevin Diffusion (MLD) is a sampling analogue of mirror flow in continuous time, and it has nice convergence properties under log-Sobolev or Poincaré inequalities relative to the Hessian metric, as shown by Chewi et al. (2020). In discrete time, a simple discretization of MLD is the Mirror Langevin Algorithm (MLA) studied by Zhang et al. (2020), who showed a biased convergence bound with a non-vanishing bias term (one that does not go to zero as the step size goes to zero). This raised the question of whether we need a better analysis or a better discretization to achieve a vanishing bias. Here we study the basic Mirror Langevin Algorithm and show it indeed has a vanishing bias. We apply mean-square analysis, based on Li et al. (2019) and Li et al. (2021), to show a mixing time bound for MLA under the modified self-concordance condition introduced by Zhang et al. (2020).
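To make the discretization concrete, below is a minimal NumPy sketch of one MLA step in the standard form x_{k+1} = ∇φ*(∇φ(x_k) − h ∇f(x_k) + √(2h) [∇²φ(x_k)]^{1/2} ξ_k), where φ is the mirror map and φ* its convex conjugate. The helper names (grad_phi, grad_phi_star, hess_phi), the Gamma target, and the step size are illustrative assumptions for this sketch, not the paper's notation or experiments.

```python
import numpy as np

def mirror_langevin_step(x, h, grad_f, grad_phi, grad_phi_star, hess_phi, rng):
    """One MLA step: Langevin update in the dual space, then map back."""
    xi = rng.standard_normal(x.shape)
    # Noise with covariance hess_phi(x); any matrix square root works,
    # Cholesky is a cheap choice for an SPD Hessian.
    noise = np.linalg.cholesky(hess_phi(x)) @ xi
    y_next = grad_phi(x) - h * grad_f(x) + np.sqrt(2.0 * h) * noise
    # Map back to the primal space via the conjugate mirror map grad_phi_star.
    return grad_phi_star(y_next)

# Illustrative run (assumed example): Gamma(3, 1) target on the positive
# orthant with the log-barrier mirror map phi(x) = -sum(log x), so that
# grad_phi(x) = -1/x, grad_phi_star(y) = -1/y, hess_phi(x) = diag(1/x^2).
a = 3.0
grad_f = lambda x: 1.0 - (a - 1.0) / x   # f(x) = sum(x) - (a-1) * sum(log x)
grad_phi = lambda x: -1.0 / x
grad_phi_star = lambda y: -1.0 / y
hess_phi = lambda x: np.diag(1.0 / x**2)

rng = np.random.default_rng(0)
x = np.ones(2)
for _ in range(5000):
    # For small h the dual iterate stays in the domain of grad_phi_star
    # with high probability; a robust implementation would guard this.
    x = mirror_langevin_step(x, 1e-3, grad_f, grad_phi, grad_phi_star,
                             hess_phi, rng)
```

Note that with the quadratic mirror map φ(x) = ||x||²/2 the update reduces to the unadjusted Langevin algorithm, so the sketch also makes clear how MLA generalizes it.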