Bayesian optimization (BO) is a powerful approach to sample-efficient optimization of black-box objective functions. However, the application of BO to areas such as recommendation systems often requires taking the interpretability and simplicity of the configurations into consideration, a setting that has not been previously studied in the BO literature. To make BO useful for this setting, we present several regularization-based approaches that allow us to discover sparse and more interpretable configurations. We propose a novel differentiable relaxation based on homotopy continuation that makes it possible to target sparsity by working directly with $L_0$ regularization. We identify failure modes for regularized BO and develop a hyperparameter-free method, sparsity exploring Bayesian optimization (SEBO), that seeks to simultaneously maximize a target objective and sparsity. SEBO and methods based on fixed regularization are evaluated on synthetic and real-world problems, and we show that we are able to efficiently optimize for sparsity.
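The idea of a differentiable relaxation of $L_0$ regularization via homotopy continuation can be illustrated with a minimal sketch. Below, a Gaussian-style smoothing of the $L_0$ count is used purely as an illustrative surrogate; the specific relaxation, smoothing kernel, and continuation schedule used by SEBO are assumptions here and may differ from the paper's construction.

```python
import numpy as np

def smoothed_l0(x, a):
    """Differentiable surrogate for ||x||_0: sum_i (1 - exp(-x_i^2 / a)).

    Each term is smooth in x_i, so it can be used inside gradient-based
    acquisition optimization. As the homotopy parameter a -> 0, each term
    approaches 1 for x_i != 0 and stays 0 for x_i == 0, so the surrogate
    approaches the exact (non-differentiable) L0 count of nonzero entries.
    """
    x = np.asarray(x, dtype=float)
    return float(np.sum(1.0 - np.exp(-x**2 / a)))

# Homotopy continuation: start with a heavily smoothed (easy) problem and
# gradually anneal a toward 0, tightening the surrogate toward true L0.
x = np.array([0.0, 0.5, -2.0, 0.0])  # two nonzero coordinates
for a in [1.0, 0.1, 1e-3, 1e-8]:
    print(f"a={a:g}: smoothed L0 = {smoothed_l0(x, a):.4f}")
# As a shrinks, the value converges to 2, the number of nonzero entries.
```

In a regularized BO loop, a penalty of this form would be added to the acquisition objective, with the annealing of `a` making the otherwise discontinuous sparsity target amenable to gradient-based optimizers.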