We propose an extension of Thompson sampling to optimization problems over function spaces where the objective is a known functional of an unknown operator's output. We assume that queries to the operator (such as running a high-fidelity simulator or physical experiment) are costly, while evaluations of the functional on the operator's output are inexpensive. Our algorithm employs a sample-then-optimize approach using neural operator surrogates. This strategy avoids explicit uncertainty quantification by treating trained neural operators as approximate samples from a Gaussian process (GP) posterior. We derive regret bounds and establish theoretical results connecting neural operators with GPs in infinite-dimensional settings. Experiments benchmark our method against standard Bayesian optimization baselines on functional optimization tasks involving partial differential equations governing physical systems, demonstrating improved sample efficiency and substantial performance gains.
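To make the sample-then-optimize loop concrete, the following is a minimal sketch, not the paper's implementation: the operator, functional, surrogate architecture, candidate pool, and training setup (toy_operator, functional_F, SurrogateNet, and the hyperparameters) are all illustrative assumptions, with a small MLP on a discretized grid standing in for a true neural operator. At each round, a freshly initialized surrogate trained on the observed queries plays the role of an approximate posterior sample; the known functional is then maximized over the surrogate's output to select the next costly operator query.

```python
# Sketch of sample-then-optimize Thompson sampling with a neural surrogate
# for a costly operator. All components here are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_GRID = 32          # discretization of input/output functions
N_CANDIDATES = 256   # finite pool standing in for the function search space
N_ROUNDS = 10        # number of costly operator queries


def toy_operator(a: torch.Tensor) -> torch.Tensor:
    """Stand-in for the costly operator (e.g., a PDE solver): a cheap
    nonlinear map applied after a smoothing convolution."""
    smoothed = torch.nn.functional.avg_pool1d(
        a.unsqueeze(1), kernel_size=3, stride=1, padding=1
    ).squeeze(1)
    return torch.sin(3.0 * smoothed) + 0.1 * a


def functional_F(u: torch.Tensor) -> torch.Tensor:
    """Known, cheap-to-evaluate functional of the operator output
    (here: a discretized integral of u^2 over the grid)."""
    return (u ** 2).mean(dim=-1)


class SurrogateNet(nn.Module):
    """Tiny MLP on the discretized function; a placeholder for a neural operator."""
    def __init__(self, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_GRID, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, N_GRID),
        )

    def forward(self, a):
        return self.net(a)


def fit_surrogate(inputs, outputs, epochs=200):
    """Sample-then-optimize: a fresh random initialization trained on the
    observed data acts as one approximate draw from the surrogate posterior."""
    model = SurrogateNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((model(inputs) - outputs) ** 2).mean()
        loss.backward()
        opt.step()
    return model


# Fixed candidate pool of smooth random input functions (illustrative only).
candidates = torch.cumsum(0.3 * torch.randn(N_CANDIDATES, N_GRID), dim=-1)

# Seed the dataset with one costly query.
queried = candidates[:1]
observed = toy_operator(queried)

for t in range(N_ROUNDS):
    surrogate = fit_surrogate(queried, observed)       # approximate posterior sample
    with torch.no_grad():
        scores = functional_F(surrogate(candidates))   # cheap functional on surrogate output
    a_next = candidates[scores.argmax()].unsqueeze(0)  # Thompson-sampling acquisition
    u_next = toy_operator(a_next)                      # one costly operator query
    queried = torch.cat([queried, a_next])
    observed = torch.cat([observed, u_next])
    print(f"round {t}: best observed F = {functional_F(observed).max().item():.4f}")
```

In this sketch the randomness of the posterior "sample" comes solely from the surrogate's random initialization, and the search space is a finite candidate pool; the actual method optimizes over function space with neural operator architectures.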