Evolutionary partial differential equations play a crucial role in many areas of science and engineering. Spatial discretization of these equations leads to a system of ordinary differential equations that can then be solved by numerical time integration. Such a system is often of very high dimension, which makes the simulation very time-consuming. One way to reduce the computational cost is to approximate the large system by a low-dimensional model using a model reduction approach. This master's thesis deals with structure-preserving model reduction of Hamiltonian systems using machine learning techniques. We discuss a nonlinear approach based on the construction of an encoder-decoder pair that minimizes the approximation error and satisfies symplectic constraints, thereby guaranteeing the preservation of the structure inherent in Hamiltonian systems. More specifically, we study an autoencoder network that learns a symplectic encoder-decoder pair. Symplecticity poses additional difficulties, as this structure must be enforced in every network layer. Since the symplectic constraints are described by the (symplectic) Stiefel manifold, we use manifold optimization techniques to ensure the symplecticity of the encoder and decoder. A particular challenge is adapting the ADAM optimizer to the manifold structure. We present a modified ADAM optimizer that works directly on the Stiefel manifold and compare it to the existing approach based on homogeneous spaces. In addition, we propose several modifications to the network and training setup that significantly improve the performance and accuracy of the autoencoder. Finally, we numerically validate the modified optimizer and the different learning configurations on two Hamiltonian systems, the 1D wave equation and the sine-Gordon equation, and demonstrate the improved accuracy and computational efficiency of the presented learning algorithms.