(Conditional) Generative Adversarial Networks (GANs) have found great success in recent years, owing to their ability to approximate (conditional) distributions over extremely high-dimensional spaces. However, they are highly unstable and computationally expensive to train, especially in the time series setting. Recently, the use of a key object from rough path theory, the signature of a path, has been proposed as a way to convert the min-max formulation of the (conditional) GAN framework into a classical minimization problem. However, this method is extremely expensive in terms of memory, sometimes prohibitively so. To overcome this, we propose the use of \textit{Conditional Neural Stochastic Differential Equations}, whose memory cost is constant as a function of depth, making them more memory-efficient than traditional deep learning architectures. We show empirically that the proposed model is more efficient than other classical approaches, in terms of both memory cost and computational time, and that it usually outperforms them as well.
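As a rough illustration of the kind of generator this abstract describes, the sketch below builds a conditional Neural SDE with the \texttt{torchsde} library and samples paths from it; the class name \texttt{ConditionalNeuralSDE}, the network sizes, and the conditioning scheme are illustrative assumptions, not the architecture from the paper.

\begin{verbatim}
# A minimal sketch (not the authors' code) of a conditional neural SDE
# generator, assuming the torchsde library.
import torch
import torchsde


class ConditionalNeuralSDE(torch.nn.Module):
    # Diagonal noise: the diffusion g returns one Brownian channel
    # per state dimension.
    noise_type = "diagonal"
    sde_type = "ito"

    def __init__(self, hidden_dim, cond_dim):
        super().__init__()
        # Drift and diffusion nets take the state concatenated with
        # the conditioning vector, so paths depend on the condition.
        self.drift = torch.nn.Sequential(
            torch.nn.Linear(hidden_dim + cond_dim, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, hidden_dim))
        self.diffusion = torch.nn.Sequential(
            torch.nn.Linear(hidden_dim + cond_dim, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, hidden_dim))
        self.cond = None  # conditioning batch, set before sampling

    def f(self, t, y):
        # Drift term, conditioned on self.cond.
        return self.drift(torch.cat([y, self.cond], dim=-1))

    def g(self, t, y):
        # Diagonal diffusion term, same shape as the state y.
        return self.diffusion(torch.cat([y, self.cond], dim=-1))


sde = ConditionalNeuralSDE(hidden_dim=8, cond_dim=4)
sde.cond = torch.randn(32, 4)      # a batch of 32 conditioning vectors
y0 = torch.randn(32, 8)            # initial hidden states
ts = torch.linspace(0.0, 1.0, 50)  # time grid for the sampled paths
# sdeint_adjoint backpropagates through the solve with the adjoint
# method, so memory cost is constant in the number of solver steps,
# which is the property the abstract appeals to.
ys = torchsde.sdeint_adjoint(sde, y0, ts, dt=0.02)  # shape (50, 32, 8)
\end{verbatim}

In a training loop of the kind the abstract outlines, paths sampled this way would presumably be compared to real data through a signature-based loss, so that training reduces to a plain minimization rather than a min-max game.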