We present a deep learning model for data-driven simulations of random dynamical systems without distributional assumptions. The model consists of a recurrent neural network, which aims to learn the time-marching structure, and a generative adversarial network that learns and samples from the probability distribution of the random dynamical system. Although generative adversarial networks provide a powerful tool for modeling complex probability distributions, training often fails without proper regularization. Here, we propose a regularization strategy for generative adversarial networks based on consistency conditions for sequential inference problems. First, the maximum mean discrepancy (MMD) is used to enforce consistency between the conditional and marginal distributions of a stochastic process. Then, the marginal distributions of the multiple-step predictions are regularized by using MMD or multiple discriminators. The behavior of the proposed model is studied on three stochastic processes with complex noise structures.
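For reference, a standard kernel-based form of the squared maximum mean discrepancy between two distributions $P$ and $Q$ is sketched below; the kernel $k$ and the notation are generic and not taken from this paper's specific formulation:
\[
\mathrm{MMD}^2(P,Q) = \mathbb{E}_{x,x'\sim P}\!\left[k(x,x')\right] - 2\,\mathbb{E}_{x\sim P,\,y\sim Q}\!\left[k(x,y)\right] + \mathbb{E}_{y,y'\sim Q}\!\left[k(y,y')\right],
\]
which vanishes exactly when $P = Q$ for a characteristic kernel such as the Gaussian kernel, making it a natural penalty for matching conditional and marginal distributions from samples.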