We consider the following learning problem: Given sample pairs of input and output signals generated by an unknown nonlinear system (which is not assumed to be causal or time-invariant), we wish to find a continuous-time recurrent neural net with hyperbolic tangent activation function that approximately reproduces the underlying i/o behavior with high confidence. Leveraging earlier work concerned with matching output derivatives up to a given finite order, we reformulate the learning problem in familiar system-theoretic language and derive quantitative guarantees on the sup-norm risk of the learned model in terms of the number of neurons, the sample size, the number of derivatives being matched, and the regularity properties of the inputs, the outputs, and the unknown i/o map.
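For concreteness, a minimal sketch of the objects named above, under the assumption that the continuous-time recurrent net takes the standard form with a componentwise hyperbolic tangent nonlinearity and a linear readout (the matrices $A$, $B$, $C$, the bias $b$, and the horizon $T$ are illustrative choices, not specified in this abstract):
\[
\dot{x}(t) = \tanh\bigl(A x(t) + B u(t) + b\bigr), \qquad \hat{y}(t) = C x(t), \qquad t \in [0, T],
\]
with the sup-norm risk measured as $\sup_{t \in [0,T]} \lVert y(t) - \hat{y}(t) \rVert$, where $y$ denotes the output of the unknown system driven by the same input $u$.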