In this paper, we introduce an efficient backpropagation scheme for unconstrained implicit functions. These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them well suited as learnable layers in a neural network. We demonstrate our scheme on two applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.
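To make the setting concrete, the following is a minimal sketch (not the paper's actual scheme) of an implicit layer z*(θ, x) defined as the root of a residual f(θ, x, z) = 0, written in JAX. The residual here is a hypothetical implicit Euler step, and the backward pass uses the implicit function theorem so that gradients bypass the Newton iterations of the forward solve; all function names and the step size H are illustrative assumptions, not taken from the paper.

```python
import jax
import jax.numpy as jnp

H = 0.1  # hypothetical implicit-Euler step size (illustrative)

def f(theta, x, z):
    # Residual of one implicit Euler step z = x + H * tanh(theta @ z)
    # for dz/dt = tanh(theta @ z), with x the previous state.
    # The layer output z is defined implicitly by f(theta, x, z) = 0.
    return z - x - H * jnp.tanh(theta @ z)

def _newton_solve(theta, x, iters=20):
    # Forward pass: find the root of f in z by Newton's method.
    z = x
    for _ in range(iters):
        J = jax.jacobian(f, argnums=2)(theta, x, z)
        z = z - jnp.linalg.solve(J, f(theta, x, z))
    return z

@jax.custom_vjp
def implicit_layer(theta, x):
    return _newton_solve(theta, x)

def _fwd(theta, x):
    z = _newton_solve(theta, x)
    return z, (theta, x, z)

def _bwd(res, z_bar):
    theta, x, z = res
    # Implicit function theorem: with J = df/dz at the root,
    # solve u = J^{-T} z_bar, then grad = -u^T (df/dtheta, df/dx).
    J = jax.jacobian(f, argnums=2)(theta, x, z)
    u = jnp.linalg.solve(J.T, z_bar)
    _, f_vjp = jax.vjp(lambda th, xx: f(th, xx, z), theta, x)
    return f_vjp(-u)

implicit_layer.defvjp(_fwd, _bwd)

# Usage: differentiate a loss through the implicit layer.
theta = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (4, 4))
x = jnp.ones(4)
grad_theta = jax.grad(lambda th: jnp.sum(implicit_layer(th, x) ** 2))(theta)
```

Under these assumptions, the backward pass costs a single linear solve with the transposed Jacobian at the root, independently of how many Newton iterations the forward solve required.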