Representing a manifold of very high-dimensional data with generative models has been shown to be computationally efficient in practice. However, this requires that the data manifold admits a global parameterization. In order to represent manifolds of arbitrary topology, we propose to learn a mixture model of variational autoencoders, in which every encoder-decoder pair represents one chart of the manifold. We propose a loss function for maximum likelihood estimation of the model weights and choose an architecture that provides analytical expressions of the charts and their inverses. Once the manifold is learned, we use it for solving inverse problems by minimizing a data-fidelity term restricted to the learned manifold. To solve the resulting minimization problem, we propose a Riemannian gradient descent algorithm on the learned manifold. We demonstrate the performance of our method on low-dimensional toy examples as well as on deblurring and electrical impedance tomography for certain image manifolds.
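The idea of minimizing a data-fidelity term restricted to a learned chart can be sketched in a few lines. The following toy example is a minimal illustration, not the paper's method: it replaces the decoder of one chart with a simple linear map `phi(z) = A z` (so its Jacobian is the constant matrix `A`), uses a quadratic fidelity term, and performs gradient descent in the chart coordinates with the pullback metric `G = J^T J`. All names (`A`, `phi`, `grad_f`) are hypothetical placeholders for the learned decoder and problem-specific fidelity gradient.

```python
import numpy as np

# Hypothetical toy setup: a linear "decoder" phi(z) = A z stands in for one
# chart of the learned manifold, and f(x) = 0.5 * ||x - y||^2 is the
# data-fidelity term for an observation y.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))   # decoder Jacobian (constant for a linear map)
y = rng.standard_normal(5)        # observed data

def phi(z):
    """Chart parameterization: latent coordinates -> data space."""
    return A @ z

def grad_f(x):
    """Euclidean gradient of the fidelity term f(x) = 0.5 * ||x - y||^2."""
    return x - y

# Riemannian gradient descent in chart coordinates: pull the ambient gradient
# back through the Jacobian and precondition with the induced metric G = J^T J.
G = A.T @ A
z = np.zeros(2)
step = 0.5
for _ in range(200):
    riem_grad = np.linalg.solve(G, A.T @ grad_f(phi(z)))
    z = z - step * riem_grad
```

For this linear chart the iteration converges to the latent coordinates whose image `phi(z)` is the orthogonal projection of `y` onto the range of `A`; with a nonlinear decoder, `A` would be replaced by the Jacobian of the decoder at the current iterate.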