Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search, as they achieve good predictive performance with little to no manual tuning, naturally handle discrete feature spaces, and are relatively insensitive to outliers in the training data. Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piecewise-constant acquisition function. To address both points simultaneously, we propose using the kernel interpretation of tree ensembles as a Gaussian process prior to obtain model variance estimates, and we develop a compatible optimization formulation for the acquisition function. The latter further allows us to seamlessly integrate known constraints to improve sampling efficiency by considering domain knowledge in engineering settings and modeling search-space symmetries, e.g., hierarchical relationships in neural architecture search. Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
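The core idea of the kernel interpretation can be sketched as follows: two inputs are similar in proportion to the number of trees that route them to the same leaf, and this similarity can serve as a Gaussian process covariance from which posterior variance estimates follow. The snippet below is a minimal illustrative sketch, not the paper's actual formulation; the choice of `RandomForestRegressor`, the `1e-2` noise jitter, and the toy data are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tree_kernel(model, A, B):
    """Kernel induced by a tree ensemble: k(a, b) is the fraction of
    trees in which a and b fall into the same leaf (illustrative)."""
    la = model.apply(A)  # leaf indices, shape (n_a, n_trees)
    lb = model.apply(B)  # leaf indices, shape (n_b, n_trees)
    return (la[:, None, :] == lb[None, :, :]).mean(axis=2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Query points at which to estimate model uncertainty.
Xq = np.linspace(-3, 3, 5).reshape(-1, 1)

K = tree_kernel(forest, X, X) + 1e-2 * np.eye(len(X))  # assumed noise jitter
Ks = tree_kernel(forest, Xq, X)
Kss = tree_kernel(forest, Xq, Xq)

# Standard GP posterior variance under the tree-ensemble kernel.
var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
```

Such a variance estimate can then feed an exploration-aware acquisition function; note that the actual framework additionally requires an optimization formulation over the piecewise-constant model, which this sketch does not cover.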