In [1] it is shown that recurrent neural networks (RNNs) can learn discrete-time, linear time-invariant (LTI) systems in a metric-entropy-optimal manner. This is established by comparing the number of bits needed to encode the approximating RNN to the metric entropy of the class of LTI systems under consideration [2, 3]. The purpose of this note is to provide an elementary, self-contained proof of the metric entropy results in [2, 3], in the process of which minor mathematical issues appearing in [2, 3] are cleaned up. These corrections also entail adjusting a constant in a result in [1] (see Remark 2.5).
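As a reminder of the underlying notion (the standard Kolmogorov definition, not notation specific to [1]-[3]), the metric entropy of a compact class quantifies the minimal number of bits needed to describe its elements to a prescribed accuracy. A minimal statement, with $\mathcal{C}$, $(X,d)$, and $\varepsilon$ as generic placeholders:
\[
  N_{\varepsilon}(\mathcal{C}) \;=\; \min\Bigl\{ n \in \mathbb{N} : \exists\, x_1,\dots,x_n \in X \text{ with } \mathcal{C} \subseteq \bigcup_{i=1}^{n} B_d(x_i,\varepsilon) \Bigr\},
  \qquad
  H_{\varepsilon}(\mathcal{C}) \;=\; \log_2 N_{\varepsilon}(\mathcal{C}).
\]
Any encoder describing every element of $\mathcal{C}$ to accuracy $\varepsilon$ must use at least $H_{\varepsilon}(\mathcal{C})$ bits, which is the yardstick against which the bit count of the approximating RNN is compared.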