We study the uniform-in-time propagation of chaos for mean field Langevin dynamics with a convex mean field potential. Convergence in both the Wasserstein-$2$ distance and relative entropy is established. We do not require the mean field potential functional to exhibit either a small mean field interaction or displacement convexity, two common assumptions in the literature. In particular, this allows us to study the efficiency of the noisy gradient descent algorithm for training two-layer neural networks.
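For orientation, a minimal sketch of the setting, under assumed notation not fixed in the abstract itself ($F$ the mean field potential functional, $\frac{\delta F}{\delta m}$ its flat derivative, $\sigma > 0$ the noise intensity): the mean field Langevin dynamics is the McKean--Vlasov SDE
\[
dX_t = -\nabla \frac{\delta F}{\delta m}(m_t, X_t)\,dt + \sigma\,dW_t, \qquad m_t = \mathrm{Law}(X_t),
\]
and the noisy gradient descent algorithm corresponds to its $N$-particle approximation
\[
dX_t^i = -\nabla \frac{\delta F}{\delta m}(\mu_t^N, X_t^i)\,dt + \sigma\,dW_t^i, \qquad \mu_t^N = \frac{1}{N}\sum_{j=1}^N \delta_{X_t^j}, \quad 1 \le i \le N.
\]
Uniform-in-time propagation of chaos means that $\mu_t^N$ approximates $m_t$ with error bounds independent of $t$. In the two-layer network application, the particles $X_t^i$ play the role of the neurons' parameters.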