This study analyzes the derivative-free loss method for solving a certain class of elliptic PDEs and fluid problems with neural networks. The approach leverages the Feynman-Kac formulation, incorporating stochastic walkers and their averaged values. We investigate how the time interval associated with the Feynman-Kac representation and the walker size influence computational efficiency, trainability, and sampling errors. Our analysis shows that the training loss bias scales proportionally with the time interval and the spatial gradient of the neural network, while being inversely proportional to the walker size. Moreover, we demonstrate that the time interval must be sufficiently long to enable effective training. These results indicate that the walker size can be kept as small as possible, as long as it satisfies the lower bound determined by the time interval. Finally, we present numerical experiments that support our theoretical findings.
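To illustrate the kind of derivative-free, Feynman-Kac-based loss referred to above, the following is a minimal sketch for the Poisson problem (1/2)Δu = f on the unit square with Dirichlet data. The network architecture, the manufactured solution, the step size `dt` (the "time interval"), the walker count `n_walkers` (the "walker size"), the clamping of walkers at the boundary, and the one-point quadrature of the source term are all illustrative assumptions, not the exact construction analyzed in the paper.

```python
# Hedged sketch: derivative-free (Feynman-Kac) training of a neural network
# for (1/2) * Laplacian(u) = f on [0,1]^2 with Dirichlet boundary data.
import torch

torch.manual_seed(0)

# Hypothetical manufactured solution u(x, y) = sin(pi x) sin(pi y),
# so that (1/2) * Laplacian(u) = f with f = -pi^2 sin(pi x) sin(pi y).
def f(x):
    return -torch.pi**2 * torch.sin(torch.pi * x[:, :1]) * torch.sin(torch.pi * x[:, 1:])

def g(x):  # Dirichlet boundary data (zero for this manufactured solution)
    return torch.zeros(x.shape[0], 1)

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

dt, n_walkers = 1e-3, 16  # time interval and walker size (illustrative values)

for step in range(2000):
    x = torch.rand(256, 2)                      # interior collocation points
    xb = torch.rand(128, 2)                     # crude boundary sample (left/right edges only)
    xb[:, 0] = (xb[:, 0] > 0.5).float()

    # One Euler-Maruyama step of Brownian motion for each of the n_walkers walkers.
    noise = torch.randn(n_walkers, x.shape[0], 2) * dt**0.5
    x_next = (x.unsqueeze(0) + noise).clamp(0.0, 1.0)  # clamp at the boundary (simplification)

    # Feynman-Kac target: walker-averaged network value at the moved points minus
    # a one-point quadrature of the source term; detached so only u_theta(x) is trained
    # against it.
    with torch.no_grad():
        target = net(x_next.reshape(-1, 2)).reshape(n_walkers, -1, 1).mean(0) - f(x) * dt

    loss_interior = ((net(x) - target) ** 2).mean()   # derivative-free interior loss
    loss_boundary = ((net(xb) - g(xb)) ** 2).mean()   # boundary condition loss
    loss = loss_interior + loss_boundary

    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the roles of the two quantities studied in the paper are visible directly: `dt` controls how far the walkers move (and hence the bias of the Monte Carlo target), while `n_walkers` controls the sampling error of the walker average.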