We propose deep learning methods for the classical Monge optimal mass transportation problem, where the distribution constraint is treated as a penalty term defined by the maximum mean discrepancy from the theory of Hilbert space embeddings of probability measures. We prove that the transport maps given by the proposed methods converge to optimal transport maps in the problem with $L^2$ cost. Several numerical experiments validate our methods. In particular, we show that our methods are applicable to large-scale Monge problems.
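To illustrate the kind of penalized formulation the abstract describes, the following is a minimal sketch (not the authors' implementation): a neural transport map is trained to minimize the quadratic transport cost plus a maximum mean discrepancy penalty enforcing the push-forward constraint. The network architecture, Gaussian kernel bandwidth, penalty weight, and toy source/target samples are all assumptions for illustration.

\begin{verbatim}
import torch

def gaussian_kernel(x, y, bandwidth=1.0):
    # Pairwise Gaussian kernel values between rows of x and y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    # Biased (V-statistic) estimator of the squared maximum mean discrepancy.
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy

# Transport map T parametrized by a small neural network (assumed architecture).
T = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)
opt = torch.optim.Adam(T.parameters(), lr=1e-3)
penalty = 10.0  # penalty weight (assumed value)

for step in range(1000):
    x = torch.randn(256, 2)        # samples from the source measure (toy data)
    y = torch.randn(256, 2) + 3.0  # samples from the target measure (toy data)
    Tx = T(x)
    transport_cost = ((Tx - x) ** 2).sum(dim=1).mean()    # L^2 (quadratic) cost
    loss = transport_cost + penalty * mmd_squared(Tx, y)  # MMD penalty on push-forward
    opt.zero_grad()
    loss.backward()
    opt.step()
\end{verbatim}

In this sketch the MMD term replaces the hard constraint that the push-forward of the source measure under $T$ equals the target measure; a larger penalty weight enforces the constraint more strictly.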