Deep neural network (DNN) architectures are constructed that are exact equivalents of explicit Runge-Kutta schemes for numerical time integration. The network weights and biases are prescribed, i.e., no training is needed. In this way, the only task left for physics-based integrators is the DNN approximation of the right-hand side. This makes it possible to clearly delineate the approximation estimates for right-hand-side errors from those for time-integration errors. The architecture required to integrate a simple mass-damper-stiffness case is included as an example.
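As an illustration only (not taken from the paper), the following Python sketch shows the idea for a linear mass-damper-stiffness system m x'' + c x' + k x = 0: the right-hand side plays the role of the DNN approximation, the classical RK4 update is a composition of fixed-weight layers, and, because the system is linear, the whole step collapses to a single linear layer with prescribed weights. The parameter values m, c, k, h and the function names are assumptions for the sketch.

```python
import numpy as np

# Minimal sketch (assumed example, not the paper's code): a fixed-weight "network"
# whose layers reproduce one classical RK4 step for the linear system
#   m*x'' + c*x' + k*x = 0,  rewritten as  dz/dt = A z  with  z = [x, x'].
m, c, k = 1.0, 0.1, 4.0           # assumed example parameters
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])   # exact right-hand side (stand-in for the DNN RHS)
h = 0.01                           # time-step size

def rhs_layer(z):
    """Plays the role of the DNN right-hand-side approximation; here it is exact."""
    return A @ z

def rk4_step_layer(z):
    """Fixed-weight composition of RHS evaluations: one explicit RK4 step."""
    k1 = rhs_layer(z)
    k2 = rhs_layer(z + 0.5 * h * k1)
    k3 = rhs_layer(z + 0.5 * h * k2)
    k4 = rhs_layer(z + h * k3)
    return z + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# For a linear RHS the step is a single linear layer with prescribed weights
#   W = I + hA + (hA)^2/2! + (hA)^3/3! + (hA)^4/4!   (no bias, no training).
hA = h * A
W = np.eye(2) + hA + hA @ hA / 2 + hA @ hA @ hA / 6 + hA @ hA @ hA @ hA / 24

z = np.array([1.0, 0.0])           # initial displacement and velocity
assert np.allclose(rk4_step_layer(z), W @ z)   # both formulations agree

# Rolling the fixed-weight step layer forward integrates the trajectory.
for _ in range(1000):
    z = rk4_step_layer(z)
print(z)
```

The point of the sketch is the delineation the abstract mentions: any error in `rhs_layer` (the DNN approximation of the right-hand side) is separate from the time-integration error of the RK4 composition, whose weights are given in closed form.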