We consider the problem of learning stabilizable systems governed by the nonlinear state equation $h_{t+1}=\phi(h_t,u_t;\theta)+w_t$. Here $\theta$ parameterizes the unknown system dynamics, $h_t$ is the state, $u_t$ is the input, and $w_t$ is the additive noise vector. We study gradient-based algorithms for learning the system dynamics $\theta$ from samples obtained from a single finite trajectory. If the system is driven by a stabilizing input policy, we show that the temporally dependent samples can be approximated by i.i.d. samples via a truncation argument based on mixing times. We then develop new guarantees for the uniform convergence of the gradients of the empirical loss. Unlike existing work, our bounds are noise-sensitive, which allows for learning the ground-truth dynamics with high accuracy and small sample complexity. Together, our results facilitate efficient learning of general nonlinear systems under a stabilizing policy. We specialize our guarantees to entry-wise nonlinear activations and verify our theory in various numerical experiments.
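As a minimal sketch of the setup (the exact objective analyzed in the paper may differ), a natural empirical loss over a single trajectory $\{(h_t,u_t)\}_{t=0}^{N}$ is the least-squares one-step prediction error, optimized by gradient descent with an assumed step size $\eta>0$:
\[
\hat{\mathcal{L}}(\theta)=\frac{1}{2N}\sum_{t=0}^{N-1}\big\|h_{t+1}-\phi(h_t,u_t;\theta)\big\|_2^2,
\qquad
\theta^{(k+1)}=\theta^{(k)}-\eta\,\nabla\hat{\mathcal{L}}\big(\theta^{(k)}\big).
\]
In this picture, the uniform convergence guarantees concern how closely $\nabla\hat{\mathcal{L}}(\theta)$ tracks its population counterpart uniformly over $\theta$, despite the temporal dependence of the trajectory samples.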