Real-world systems are often formulated as constrained optimization problems. Techniques exist to incorporate constraints into Neural Networks (NN), including Neural Ordinary Differential Equations (Neural ODEs), but they typically introduce hyperparameters that must be tuned manually by trial and error, casting doubt on whether the constraints are actually satisfied by the resulting model. This paper describes in detail the two-stage training method for Neural ODEs, a simple, effective, and penalty-parameter-free approach to modeling constrained systems. In this approach, the constrained optimization problem is rewritten as two unconstrained sub-problems that are solved in two stages. The first stage finds feasible NN parameters by minimizing a measure of constraint violation. The second stage finds optimal NN parameters by minimizing the loss function while staying inside the feasible region. We experimentally demonstrate that our method produces models that satisfy the constraints and also improves their predictive performance, thereby ensuring compliance with critical system properties and contributing to reduced data requirements. Furthermore, we show that the proposed method improves both convergence to an optimal solution and the explainability of Neural ODE models. Our proposed two-stage training method can be used with any NN architecture.
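Since the abstract only outlines the method, the following is a minimal, self-contained sketch of the two-stage idea on a toy problem, not the paper's implementation. The model, the constraint (norm conservation under a planar rotation), the optimizer, the tolerance `tol`, and the feasibility-restoration loop in stage 2 are all illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task (illustrative only): fit a one-step dynamics model f(x) to
# data generated by a planar rotation, while enforcing the conservation
# constraint ||f(x)|| = ||x||, which the true dynamics satisfy exactly.
model = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
x = torch.randn(256, 2)
c, s = math.cos(0.3), math.sin(0.3)
R = torch.tensor([[c, -s], [s, c]])
y = x @ R.T

def violation(xb):
    # Measure of constraint violation (the stage-1 objective).
    return ((model(xb).norm(dim=1) - xb.norm(dim=1)) ** 2).mean()

def data_loss(xb, yb):
    # Ordinary fitting loss (the stage-2 objective).
    return ((model(xb) - yb) ** 2).mean()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tol = 1e-4  # feasibility tolerance (an assumption of this sketch)

def minimize_violation(max_steps=10_000):
    # Gradient steps on the violation measure until (approximate)
    # feasibility is reached.
    for _ in range(max_steps):
        if violation(x).item() <= tol:
            break
        opt.zero_grad()
        violation(x).backward()
        opt.step()

# Stage 1: find feasible parameters by minimizing violation only.
minimize_violation()

# Stage 2: minimize the data loss, restoring feasibility whenever a
# step leaves the tolerance band (one possible way to "stay inside
# the feasible region"; the paper's mechanism may differ).
for step in range(2000):
    opt.zero_grad()
    data_loss(x, y).backward()
    opt.step()
    minimize_violation()

print(f"data_loss={data_loss(x, y).item():.4f}  "
      f"violation={violation(x).item():.2e}")
```

The point the sketch captures is the penalty-parameter-free structure: the constraint-violation measure and the loss are never combined into a single weighted objective, so no penalty coefficient has to be tuned.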