Deep neural networks have attracted increasing interest for dynamical system prediction, but out-of-distribution generalization and long-term stability remain challenging. In this work, we treat the domain parameters of dynamical systems as factors of variation of the data-generating process. By leveraging ideas from supervised disentanglement and causal factorization, we aim to separate the domain parameters from the dynamics in the latent space of generative models. In our experiments we model dynamics both in phase space and in video sequences and conduct rigorous out-of-distribution (OOD) evaluations. The results indicate that disentangled VAEs adapt better to domain parameter spaces that were not present in the training data. At the same time, disentanglement can improve the long-term and out-of-distribution predictions of state-of-the-art models on video sequences.
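To make the idea concrete, the following is a minimal PyTorch sketch of supervised latent disentanglement in a VAE, where the first few latent dimensions are regressed onto the known domain parameters (e.g., gravity or pendulum length) while the remaining dimensions are left free to encode the dynamics. This is not the paper's implementation; all module names, dimensions, and loss weights are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation) of a VAE whose
# first `domain_dim` latent dimensions are supervised with the known domain
# parameters, separating domain factors from dynamics in the latent space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledVAE(nn.Module):
    def __init__(self, obs_dim=4, latent_dim=8, domain_dim=2, hidden=64):
        super().__init__()
        self.domain_dim = domain_dim  # latents [0:domain_dim] hold domain parameters
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def loss_fn(model, x, domain_params, beta=1.0, gamma=10.0):
    """ELBO plus a supervised term tying designated latents to the domain parameters."""
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Supervised disentanglement: regress the first latents onto known factors of variation.
    sup = F.mse_loss(mu[:, : model.domain_dim], domain_params)
    return recon + beta * kl + gamma * sup

# Usage on toy phase-space states with two known domain parameters per sample.
model = DisentangledVAE()
x = torch.randn(32, 4)        # e.g. angles and momenta of a simple mechanical system
domain = torch.randn(32, 2)   # e.g. gravity and length, available at training time
loss = loss_fn(model, x, domain)
loss.backward()
```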