Parallel black box optimization consists in estimating the optimum of a function $f$ using $\lambda$ parallel evaluations of $f$. Averaging the $\mu$ best of the $\lambda$ evaluated points is known to provide a better estimate of the optimum than simply picking the best one. In continuous domains, this averaging is typically a (possibly weighted) arithmetic mean. Previous theoretical results were restricted to quadratic objective functions. In this paper, we extend these results to a wide class of functions containing three times continuously differentiable functions with a unique optimum. We prove formal convergence rates and show that they are asymptotically better in $\lambda$ than those of pure random search. We validate our theoretical findings with experiments on some standard black box functions.
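As a minimal illustration of the estimator discussed above, the sketch below draws $\lambda$ candidate points, evaluates them (in practice, in parallel), and compares the single best point with the arithmetic mean of the $\mu$ best. The Gaussian sampling around a search point, the sphere objective, and the values of $\lambda$, $\mu$, and the sampling width are illustrative assumptions, not the paper's exact experimental protocol.

```python
# Sketch of mu-best averaging for parallel black box optimization.
# The sampling scheme, test function, and parameter values are
# illustrative assumptions, not the paper's exact setup.
import numpy as np

def f_sphere(x):
    """Illustrative objective with a unique optimum at the origin."""
    return float(np.sum(x ** 2))

def estimate_optimum(f, center, sigma, lam, mu, rng):
    """Evaluate lam points (conceptually in parallel) and return two
    estimates of the optimum: the single best point and the
    unweighted arithmetic mean of the mu best points."""
    candidates = center + sigma * rng.standard_normal((lam, center.size))
    values = np.array([f(x) for x in candidates])      # lambda evaluations of f
    order = np.argsort(values)
    best_point = candidates[order[0]]                  # just pick the best
    mu_best_mean = candidates[order[:mu]].mean(axis=0) # average the mu best
    return best_point, mu_best_mean

rng = np.random.default_rng(0)
center = np.full(5, 1.0)   # search point, away from the optimum at 0
best, averaged = estimate_optimum(f_sphere, center, sigma=0.3,
                                  lam=400, mu=100, rng=rng)
print("f(best single point):", f_sphere(best))
print("f(mu-best average):  ", f_sphere(averaged))
```

A weighted variant would replace the plain mean with a convex combination of the $\mu$ best points, with larger weights on better-ranked candidates; the comparison printed at the end lets one check, on this toy example, how the averaged estimate compares with the single best point.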