A classic approach to solving differential equations with neural networks builds upon neural forms, in which a cost function can be constructed directly from the differential equation together with a discretisation of the solution domain. Making use of neural forms for time-dependent differential equations, one can apply the recently developed method of domain fragmentation. That is, the domain may be split into several subdomains, on each of which the optimisation problem is solved. In classic adaptive numerical methods for solving differential equations, the mesh may be refined or the domain decomposed in order to improve accuracy, and the degree of approximation accuracy may be adapted as well. It would be desirable to transfer such important and successful strategies to the field of neural network based solutions. In the present work, we propose a novel adaptive neural approach to meet this aim for solving time-dependent problems. To this end, each subdomain is reduced in size until the optimisation is resolved up to a predefined training accuracy. In addition, while the neural networks employed are by default small, the number of neurons may also be adjusted in an adaptive way. We introduce conditions to automatically confirm the solution reliability and optimise computational parameters whenever necessary. We provide results for three carefully chosen example initial value problems and illustrate important properties of the method along the way.
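The adaptive procedure outlined above can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a simplified stand-in, assuming a neural form trial solution u(t) = u0 + (t - t0) N(t) that satisfies the initial condition by construction, and replacing full network training with a random-feature (extreme learning machine) fit: the hidden weights are fixed at random, so for the linear example IVP u'(t) = cos(5t), u(0) = 0, the collocation residual is linear in the output weights and the "training" reduces to one least-squares solve per subdomain. The domain-fragmentation logic halves a subdomain until the residual loss falls below a predefined tolerance, then restarts from the subdomain's endpoint, mirroring the adaptive refinement described in the abstract. All function names, tolerances, and sizes are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 4                                  # deliberately small hidden layer
W = rng.uniform(-3.0, 3.0, H)          # fixed random input weights
B = rng.uniform(-3.0, 3.0, H)          # fixed random biases

def features(t):
    """Hidden activations phi_j(t) and derivatives phi_j'(t)."""
    z = np.tanh(np.outer(t, W) + B)    # shape (len(t), H)
    return z, (1.0 - z**2) * W         # d/dt tanh(w t + b) = w (1 - tanh^2)

def fit_segment(t0, t1, u0, g, npts=30):
    """Neural form on [t0, t1] for u' = g(t), u(t0) = u0.

    Trial solution: u(t) = u0 + (t - t0) * (c0 + sum_j c_j phi_j(t)),
    which satisfies u(t0) = u0 exactly. Since g does not depend on u,
    the collocation residual u'(t) - g(t) is linear in the coefficients,
    so training reduces to a single linear least-squares solve.
    """
    t = np.linspace(t0, t1, npts)
    phi, dphi = features(t)
    s = (t - t0)[:, None]
    # u'(t) = N(t) + (t - t0) N'(t); the constant feature contributes c0
    A = np.hstack([np.ones((npts, 1)), phi + s * dphi])
    c, *_ = np.linalg.lstsq(A, g(t), rcond=None)
    resid = A @ c - g(t)
    u_end = u0 + (t1 - t0) * (c[0] + phi[-1] @ c[1:])
    return float(u_end), float(np.mean(resid**2))

def solve_adaptive(g, t0, T, u0, tol=1e-6, dt_min=0.05):
    """Domain fragmentation: halve the subdomain until the loss < tol."""
    segments = []
    while t0 < T - 1e-12:
        dt = T - t0
        while True:
            u_end, loss = fit_segment(t0, t0 + dt, u0, g)
            if loss < tol or dt <= dt_min:
                break
            dt /= 2.0                  # refine: shrink the subdomain
        segments.append((t0, t0 + dt, loss))
        u0, t0 = u_end, t0 + dt        # restart from the subdomain endpoint
    return segments, u0

# Example IVP: u'(t) = cos(5 t), u(0) = 0, with exact solution sin(5 t) / 5
g = lambda t: np.cos(5.0 * t)
segs, uT = solve_adaptive(g, 0.0, 2.0, 0.0)
print(len(segs), abs(uT - np.sin(10.0) / 5.0))
```

With the small fixed network, the fit over the whole interval [0, 2] cannot resolve the oscillatory right-hand side to the tolerance, so the solver fragments the domain into shorter subdomains on which the optimisation succeeds; this mimics, in a heavily simplified setting, the refinement-until-accuracy strategy of the paper. A nonlinear right-hand side f(t, u) would require genuine iterative training in place of the least-squares shortcut.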