Motivated by the computational difficulties incurred by popular deep learning algorithms for the generative modeling of temporal densities, we propose a cheap alternative that requires minimal hyperparameter tuning and scales favorably to high-dimensional problems. In particular, we use a projection-based optimal transport solver [Meng et al., 2019] to join successive samples and then use transport splines [Chewi et al., 2020] to interpolate the evolving density. When the sampling frequency is sufficiently high, the optimal maps are close to the identity and are thus cheap to compute. Moreover, the training process is highly parallelizable, since all optimal maps are independent and can therefore be learned simultaneously. Finally, the approach is based solely on numerical linear algebra rather than on minimizing a nonconvex objective function, which makes the algorithm easy to analyze and control. We present several numerical experiments on both synthetic and real-world datasets to demonstrate the efficiency of our method. In particular, these experiments show that the proposed approach is highly competitive with state-of-the-art normalizing flows conditioned on time across a wide range of dimensionalities.
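To make the pipeline concrete, the following is a minimal sketch on empirical samples, assuming equal sample sizes at every snapshot. The exact Hungarian matching from scipy stands in for the projection-based solver of Meng et al. [2019], and the spline step follows the transport-spline construction of Chewi et al. [2020]: couple consecutive snapshots, chain the couplings into particle trajectories, and fit a cubic spline per particle. All function and variable names here are illustrative, not from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment
from scipy.interpolate import CubicSpline

def ot_permutation(x, y):
    """Match x[i] -> y[perm[i]] by solving the discrete OT problem with
    squared Euclidean cost (a stand-in for a projection-based solver)."""
    cost = cdist(x, y, metric="sqeuclidean")
    _, perm = linear_sum_assignment(cost)  # row indices come back sorted
    return perm

def transport_spline(times, snapshots):
    """Couple consecutive snapshots (each pair is independent, so this
    step parallelizes), chain the couplings into per-particle knots, and
    fit a cubic spline through each trajectory."""
    perms = [ot_permutation(snapshots[k], snapshots[k + 1])
             for k in range(len(snapshots) - 1)]
    idx = np.arange(len(snapshots[0]))
    knots = [snapshots[0]]
    for k, perm in enumerate(perms):
        idx = perm[idx]                      # compose successive couplings
        knots.append(snapshots[k + 1][idx])
    return CubicSpline(times, np.stack(knots), axis=0)

# Usage: interpolate a drifting 2-D Gaussian observed at four times.
rng = np.random.default_rng(0)
times = np.linspace(0.0, 1.0, 4)
snapshots = [rng.normal(loc=t, size=(200, 2)) for t in times]
spline = transport_spline(times, snapshots)
samples_mid = spline(0.4)                    # approximate samples at t = 0.4
```

Note that the loop over `perms` only composes index permutations; the expensive couplings themselves are mutually independent, which is what makes the training step parallelizable in the manner described above.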