Gradient flows are a powerful tool for optimizing functionals in general metric spaces, including the space of probabilities endowed with the Wasserstein metric. A typical approach to solving this optimization problem relies on its connection to the dynamic formulation of optimal transport and the celebrated Jordan-Kinderlehrer-Otto (JKO) scheme. However, this formulation involves optimization over convex functions, which is challenging, especially in high dimensions. In this work, we propose an approach that relies on the recently introduced input-convex neural networks (ICNN) to parameterize the space of convex functions, both to approximate the JKO scheme and to design functionals over measures that enjoy convergence guarantees. We derive a computationally efficient implementation of this JKO-ICNN framework and demonstrate its feasibility and validity through experiments that approximate solutions of low-dimensional partial differential equations with known solutions. We also explore the use of JKO-ICNN in high dimensions with an experiment in controlled generation for molecular discovery.
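For context, the JKO scheme referenced above discretizes the Wasserstein gradient flow of a functional $F$ over probability measures as a sequence of regularized minimization problems. A standard statement from the gradient-flow literature (not a formula quoted from this abstract) is

$$\rho_{k+1} \in \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)} \; F(\rho) \;+\; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k),$$

where $\tau > 0$ is the step size and $W_2$ is the 2-Wasserstein distance. By Brenier's theorem, the optimal update can be written as a pushforward $\rho_{k+1} = (\nabla \psi)_\# \rho_k$ for a convex potential $\psi$, which is what recasts each JKO step as an optimization over convex functions and motivates the ICNN parameterization.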
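The abstract does not include an implementation; as background, the following is a minimal PyTorch sketch of an input-convex neural network in the style of Amos et al. (2017), the architecture class the JKO-ICNN framework builds on. All names and hyperparameters (`hidden`, `n_layers`) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNN(nn.Module):
    """Input-convex neural network (Amos et al., 2017) - illustrative sketch.

    The output is convex in the input x because each hidden state is
    z_{k+1} = softplus(W_z^k z_k + W_x^k x + b_k), where the W_z^k weights
    are kept nonnegative and softplus is convex and nondecreasing.
    """

    def __init__(self, dim, hidden=64, n_layers=3):
        super().__init__()
        self.first = nn.Linear(dim, hidden)
        # Hidden-to-hidden weights (constrained nonnegative in forward)
        # plus skip connections from the input x at every layer.
        self.wz = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(n_layers - 1)]
        )
        self.wx = nn.ModuleList(
            [nn.Linear(dim, hidden) for _ in range(n_layers - 1)]
        )
        self.out_z = nn.Linear(hidden, 1, bias=False)
        self.out_x = nn.Linear(dim, 1)

    def forward(self, x):
        z = F.softplus(self.first(x))
        for wz, wx in zip(self.wz, self.wx):
            # Clamping makes the effective W_z nonnegative, preserving
            # convexity of the output with respect to x.
            z = F.softplus(F.linear(z, wz.weight.clamp(min=0)) + wx(x))
        return F.linear(z, self.out_z.weight.clamp(min=0)) + self.out_x(x)


# Usage: the gradient map x -> grad psi(x) is the candidate transport map
# pushing samples of rho_k toward rho_{k+1} in a JKO step.
psi = ICNN(dim=2)
x = torch.randn(128, 2, requires_grad=True)
grad = torch.autograd.grad(psi(x).sum(), x, create_graph=True)[0]
```

Clamping the weights inside `forward` is one of several ways to keep them nonnegative; projecting after each optimizer step or reparameterizing through a softplus are common alternatives with the same convexity guarantee.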