Bayesian optimization is a form of sequential design: idealize input-output relationships with a suitably flexible nonlinear regression model; fit to data from an initial experimental campaign; devise and optimize a criterion for selecting the next experimental condition(s) under the fitted model (e.g., via predictive equations) to target outcomes of interest (say minima); repeat after acquiring output under those conditions and updating the fit. In many situations this "inner optimization" over the new-data acquisition criterion is cumbersome because it is non-convex/highly multi-modal, may be non-differentiable, or may otherwise thwart numerical optimizers, especially when inference requires Monte Carlo. In such cases it is not uncommon to replace continuous search with a discrete one over random candidates. Here we propose using candidates based on a Delaunay triangulation of the existing input design. In addition to detailing construction of these "tricands", based on a simple wrapper around a conventional convex hull library, we promote several advantages based on properties of the geometric criterion involved. We then demonstrate empirically how tricands can lead to better Bayesian optimization performance compared to both numerically optimized acquisitions and random candidate-based alternatives on benchmark problems.
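The candidate construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact recipe: it assumes candidates are taken as the centroids of the Delaunay simplices of the current design, using `scipy.spatial.Delaunay` (which wraps the Qhull convex hull library, a "conventional convex hull library" in the abstract's sense). The full tricands construction may add further points, e.g., along the convex hull boundary.

```python
import numpy as np
from scipy.spatial import Delaunay

def tricands_sketch(X):
    """Candidate points from a Delaunay triangulation of design X.

    A minimal sketch of the idea: each candidate is the centroid
    of one simplex in the triangulation of the existing inputs.
    X is an (n, d) array of design points; returns an (m, d) array
    of candidates, one per simplex.
    """
    tri = Delaunay(X)  # wraps the Qhull convex hull library
    # X[tri.simplices] has shape (m, d+1, d); average the d+1
    # vertices of each simplex to get its centroid.
    return X[tri.simplices].mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(20, 2))  # toy 2d design on the unit square
    cands = tricands_sketch(X)
    print(cands.shape)  # one candidate per Delaunay triangle
```

A discrete acquisition search would then evaluate the fitted model's criterion (e.g., expected improvement) at `cands` and pick the maximizer, in place of a numerical inner optimization.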