Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search, as they achieve good predictive performance with little or no manual tuning, naturally handle discrete feature spaces, and are relatively insensitive to outliers in the training data. Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piecewise-constant acquisition function. To address both points simultaneously, we propose using the kernel interpretation of tree ensembles as a Gaussian process prior to obtain model variance estimates, and we develop a compatible optimization formulation for the acquisition function. The latter further allows us to seamlessly integrate known constraints to improve sampling efficiency by incorporating domain knowledge in engineering settings and modeling search space symmetries, e.g., hierarchical relationships in neural architecture search. Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
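The kernel interpretation mentioned above can be sketched as follows: two inputs are considered similar in proportion to the fraction of trees in which they land in the same leaf, and this similarity matrix is used as a GP covariance to obtain posterior variance estimates. The snippet below is a minimal illustration, not the paper's actual implementation; it assumes scikit-learn's `RandomForestRegressor`, and the helper name `tree_kernel` and all hyperparameters are made up for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tree_kernel(forest, A, B):
    # Illustrative tree-agreement kernel: k(a, b) = fraction of trees
    # in which a and b fall into the same leaf node.
    leaves_A = forest.apply(A)  # shape (n_A, n_trees), leaf index per tree
    leaves_B = forest.apply(B)  # shape (n_B, n_trees)
    return (leaves_A[:, None, :] == leaves_B[None, :, :]).mean(axis=2)

# Toy training data (hypothetical black-box objective).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Candidate points at which we want mean and uncertainty estimates.
X_new = rng.uniform(-2, 2, size=(5, 2))
K = tree_kernel(forest, X, X)             # train/train covariance
K_s = tree_kernel(forest, X_new, X)       # candidate/train covariance
K_ss = tree_kernel(forest, X_new, X_new)  # candidate/candidate covariance

# Standard GP posterior with a small noise jitter for numerical stability.
noise = 1e-3
K_inv = np.linalg.inv(K + noise * np.eye(len(X)))
mean = K_s @ K_inv @ y
cov = K_ss - K_s @ K_inv @ K_s.T
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # per-candidate uncertainty
```

The `std` values could then feed an acquisition function such as upper confidence bound; because the kernel is defined through leaf membership, points far from all training data share leaves with them in fewer trees and receive larger variance.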