Neural ordinary differential equations (NODEs) are an invertible neural network architecture, promising for their free-form Jacobian and the availability of a tractable Jacobian determinant estimator. Recently, the representation power of NODEs has been partially uncovered: they form an $L^p$-universal approximator for continuous maps under certain conditions. However, $L^p$-universality may fail to guarantee an approximation over the entire input domain, as it can hold even when the approximator differs greatly from the target function on a small region of the input space. To further uncover the potential of NODEs, we show a stronger approximation property, namely $\sup$-universality for approximating a large class of diffeomorphisms. The result is proved by leveraging a structure theorem of the diffeomorphism group, and it complements the existing literature by establishing a fairly large set of mappings that NODEs can approximate with a stronger guarantee.
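To make the gap between the two guarantees concrete, the notions can be stated as follows (a sketch under standard assumptions: $K \subset \mathbb{R}^d$ a compact input domain, $f$ the target map, and $\mathcal{M}$ the set of maps realizable by NODEs; this notation is illustrative, not taken from the paper):

$$\text{$L^p$-universality:}\quad \forall \varepsilon > 0\ \exists g \in \mathcal{M}:\ \|f - g\|_{L^p(K)} = \Big(\int_K \|f(x) - g(x)\|^p \, dx\Big)^{1/p} < \varepsilon,$$

$$\text{$\sup$-universality:}\quad \forall \varepsilon > 0\ \exists g \in \mathcal{M}:\ \sup_{x \in K} \|f(x) - g(x)\| < \varepsilon.$$

Since the $\sup$ norm dominates the $L^p$ norm on a compact domain (up to a constant depending on the volume of $K$), $\sup$-universality is the stronger property: it bounds the error at every input point, whereas a small $L^p$ error still permits large deviations on a set of small measure.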
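For reference, the basic NODE mechanism can be sketched as follows (a minimal PyTorch illustration, not the construction used in the paper): the state follows $dz/dt = f_\theta(z, t)$ with a free-form vector field, and the log-determinant of the flow's Jacobian is accumulated via the instantaneous change-of-variables formula, $\frac{d}{dt}\log|\det J| = \mathrm{tr}(\partial f/\partial z)$, estimated with Hutchinson's trace estimator. The names `ODEFunc`, `node_flow`, and `hutchinson_trace`, the fixed-step Euler integrator, and all hyperparameters are illustrative assumptions.

```python
# Minimal neural-ODE flow sketch (illustrative; fixed-step Euler, not an
# adaptive solver as used in practice).
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Free-form vector field f_theta(z, t); no architectural constraint on its Jacobian."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, z: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        t_col = t.expand(z.shape[0], 1)  # broadcast scalar time across the batch
        return self.net(torch.cat([z, t_col], dim=1))


def hutchinson_trace(f_out, z, eps):
    # tr(df/dz) ~= E[eps^T (df/dz) eps] for a probe eps with identity covariance,
    # computed with one vector-Jacobian product instead of the full Jacobian.
    vjp = torch.autograd.grad(f_out, z, grad_outputs=eps, create_graph=True)[0]
    return (vjp * eps).sum(dim=1)


def node_flow(func, z0, n_steps=20, t0=0.0, t1=1.0):
    """Euler-integrate the state and the log|det Jacobian| estimate jointly."""
    z = z0.requires_grad_(True)
    logdet = torch.zeros(z0.shape[0])
    dt = (t1 - t0) / n_steps
    eps = torch.randn_like(z0)  # one probe vector reused along the trajectory
    for i in range(n_steps):
        t = torch.tensor([[t0 + i * dt]])
        f_out = func(z, t)
        logdet = logdet + dt * hutchinson_trace(f_out, z, eps)
        z = z + dt * f_out
    return z, logdet


if __name__ == "__main__":
    func = ODEFunc(dim=2)
    z0 = torch.randn(8, 2)
    z1, logdet = node_flow(func, z0)
    print(z1.shape, logdet.shape)  # torch.Size([8, 2]) torch.Size([8])
```

The inverse map is obtained by integrating the same field backward in time, which is what makes the architecture invertible without constraining the Jacobian's structure.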