Continuous-time (CT) models offer improved sample efficiency during learning and enable ODE-based analysis methods for enhanced interpretability compared to discrete-time (DT) models. Despite numerous recent developments, the multifaceted CT state-space model identification problem remains unsolved in full when common experimental aspects such as the presence of external inputs, measurement noise, and latent states are considered. This paper presents a novel estimation method that accounts for these aspects and obtains state-of-the-art results on multiple benchmarks, where a small fully connected neural network describes the CT dynamics. The proposed method, called the subspace encoder approach, achieves these results by replacing the well-known simulation loss with a loss computed over short subsections of the data, and by using an encoder function together with a state-derivative normalization term to obtain a computationally feasible and stable optimization problem. The encoder function estimates the initial state of each considered subsection. Using established properties of ODEs, we prove that a Lipschitz continuous state derivative is a necessary condition for the existence of the encoder function.
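The core idea of the subsection-based simulation loss can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the linear "encoder" and the linear CT dynamics below are placeholder assumptions standing in for the neural networks of the actual method, and forward-Euler integration stands in for a proper ODE solver.

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(2, 6)) * 0.1      # toy linear "encoder" weights (assumption)
A = np.array([[0.0, 1.0], [-1.0, -0.2]])   # toy CT state matrix: dx/dt = A x + B u
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                 # toy output map: y = C x

def encoder(u_past, y_past):
    """Estimate a subsection's initial state from a window of past I/O samples."""
    z = np.concatenate([u_past, y_past])
    return W_enc @ z

def simulate(x0, u_sec, dt):
    """Forward-Euler integration of the toy CT dynamics over one short subsection."""
    x, ys = x0, []
    for uk in u_sec:
        ys.append(float(C @ x))
        x = x + dt * (A @ x + B @ np.array([uk]))
    return np.array(ys)

def subsection_loss(u, y, n_past=3, n_sec=5, dt=0.05):
    """Mean squared simulation error averaged over all short subsections,
    each initialized by the encoder instead of simulating the full record."""
    losses = []
    for t in range(n_past, len(u) - n_sec):
        x0 = encoder(u[t - n_past:t], y[t - n_past:t])
        y_hat = simulate(x0, u[t:t + n_sec], dt)
        losses.append(np.mean((y[t:t + n_sec] - y_hat) ** 2))
    return float(np.mean(losses))

# Synthetic signals purely to exercise the loss computation.
u = rng.normal(size=50)
y = rng.normal(size=50)
loss = subsection_loss(u, y)
```

Compared with a classical simulation loss over the whole record, each gradient here only propagates through `n_sec` integration steps, which is what keeps the optimization computationally feasible and stable.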