In this paper, we propose a random gradient-free method for optimization in infinite-dimensional Hilbert spaces, applicable to functional optimization in diverse settings. Though such problems are often solved through finite-dimensional gradient descent over a parametrization of the functions, such as neural networks, an interesting alternative is to perform gradient descent directly in the function space by leveraging its Hilbert space structure, thus enabling provable guarantees and fast convergence. However, infinite-dimensional gradients are often hard to compute in practice, hindering the applicability of such methods. To overcome this limitation, our framework requires only the computation of directional derivatives and a pre-basis for the Hilbert space domain, i.e., a linearly independent set whose span is dense in the Hilbert space. This fully resolves the tractability issue, as pre-bases are much more easily obtained than full orthonormal bases or reproducing kernels (which may not even exist), and individual directional derivatives can be easily computed using forward-mode scalar automatic differentiation. We showcase the use of our method to solve partial differential equations à la physics-informed neural networks (PINNs), where it effectively enables provable convergence.
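To make the ingredients concrete, below is a minimal sketch (not the paper's exact algorithm) of a descent step driven only by directional derivatives along randomly chosen pre-basis elements, computed with forward-mode automatic differentiation in JAX. The monomial pre-basis, collocation grid, step size, and the quadratic target-fitting functional are all illustrative assumptions rather than details taken from the paper.

```python
# Hedged sketch: random directional descent over a pre-basis, using forward-mode AD.
import jax
import jax.numpy as jnp

# Hypothetical pre-basis: monomials x^k on [0, 1] (linearly independent, dense span in L^2).
n_basis = 8
xs = jnp.linspace(0.0, 1.0, 64)                                # collocation points (assumption)
Phi = jnp.stack([xs ** k for k in range(n_basis)], axis=1)     # (64, n_basis) design matrix

def u(coeffs):
    # Candidate function at the collocation points: u = sum_k c_k * phi_k.
    return Phi @ coeffs

def loss(coeffs):
    # Illustrative quadratic functional: fit a target function in L^2 (assumption).
    target = jnp.sin(jnp.pi * xs)
    return jnp.mean((u(coeffs) - target) ** 2)

def step(coeffs, key, lr=0.5):
    # Draw a random pre-basis direction e_k; the directional derivative of the loss
    # along phi_k is a single forward-mode JVP with a scalar output tangent.
    k = jax.random.randint(key, (), 0, n_basis)
    direction = jnp.zeros(n_basis).at[k].set(1.0)
    _, dloss = jax.jvp(loss, (coeffs,), (direction,))
    return coeffs - lr * dloss * direction

coeffs = jnp.zeros(n_basis)
key = jax.random.PRNGKey(0)
for _ in range(2000):
    key, sub = jax.random.split(key)
    coeffs = step(coeffs, sub)
print("final loss:", float(loss(coeffs)))
```

In this sketch the function space is truncated to a finite span of pre-basis elements purely so the example runs; the point it illustrates is that each update needs only one scalar directional derivative, never a full (infinite-dimensional) gradient.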