Sparse regression on a library of candidate features has emerged as the leading method for discovering the partial differential equation underlying a spatio-temporal dataset. These features typically include higher-order derivatives, limiting model discovery to densely sampled datasets with low noise. Neural-network-based approaches circumvent this limitation by constructing a surrogate model of the data, but to date they have ignored advances in sparse regression algorithms. In this paper we present a modular framework that dynamically determines the sparsity pattern of a deep-learning-based surrogate using any sparse regression technique. Using this approach, we introduce a new constraint on the neural network and show how a different network architecture and sparsity estimator improve model discovery accuracy and convergence on several benchmark examples. Our framework is available at \url{https://github.com/PhIMaL/DeePyMoD}.
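To make the core idea concrete, the following is a minimal, self-contained sketch (not the DeePyMoD implementation) of sparse regression on a library of candidate features: the time derivative of a field is regressed onto candidate spatial-derivative terms with a sparse estimator, and the PDE is read off from the surviving coefficients. The synthetic advection-diffusion data, the choice of library terms, and the Lasso penalty below are illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: advected Gaussian pulse satisfying u_t = -c u_x + nu u_xx.
c, nu = 1.0, 0.5
x = np.linspace(-8.0, 8.0, 256)
t = np.linspace(0.5, 2.5, 100)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-(X - c * T) ** 2 / (4 * nu * T)) / np.sqrt(4 * np.pi * nu * T)

# Numerical derivatives via finite differences; this step is where sparse
# sampling and noise hurt, motivating a neural-network surrogate instead.
u_t = np.gradient(u, t, axis=1)
u_x = np.gradient(u, x, axis=0)
u_xx = np.gradient(u_x, x, axis=0)

# Library of candidate terms Theta and sparse regression u_t ~ Theta @ xi.
library = {"u": u, "u_x": u_x, "u_xx": u_xx, "u*u_x": u * u_x}
Theta = np.stack([v.ravel() for v in library.values()], axis=1)
xi = Lasso(alpha=1e-4, fit_intercept=False, max_iter=50_000)
xi.fit(Theta, u_t.ravel())

for name, coef in zip(library, xi.coef_):
    print(f"{name:6s} {coef:+.3f}")  # expect roughly -1.0 u_x and +0.5 u_xx
\end{verbatim}
In the framework described above, the finite-difference derivatives are replaced by derivatives of a neural-network surrogate, and the Lasso can be swapped for any other sparsity estimator.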