With increasingly many hyperparameters involved in their training, machine learning systems demand a better understanding of hyperparameter tuning automation. This has raised interest in studies of provably efficient black-box optimization, which is made more practical by better exploration mechanisms implemented in algorithm design that manage the flux of both optimization and statistical errors. Prior efforts focus on delineating the optimization error, but this is insufficient: black-box optimization algorithms can be inefficient when they ignore the heterogeneity among reward samples. In this paper, we make the key delineation of the role of statistical uncertainty in black-box optimization, guiding a more efficient algorithm design. We introduce \textit{optimum-statistical collaboration}, a framework for managing the interaction between the optimization error flux and the statistical error flux that evolve during the optimization process. Inspired by this framework, we propose the \texttt{VHCT} algorithms for objective functions satisfying only local-smoothness assumptions. In theory, we prove that our algorithms enjoy rate-optimal regret bounds; in experiments, we show that they outperform prior methods in extensive settings.
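As a hedged illustration of the two error fluxes the framework manages (the notation $\epsilon_{\mathrm{opt}}$, $\epsilon_{\mathrm{stat}}$, and $x_n$ is assumed here for exposition, not taken from the paper), the suboptimality of a candidate point can be viewed as splitting into an optimization term and a statistical term:
\[
f(x^\ast) - f(x_n) \;\le\; \underbrace{\epsilon_{\mathrm{opt}}(n)}_{\text{optimization error flux}} \;+\; \underbrace{\epsilon_{\mathrm{stat}}(n)}_{\text{statistical error flux}},
\]
where $x^\ast$ maximizes the objective $f$ and $x_n$ is the candidate returned after $n$ noisy evaluations. Under this reading, the optimization term shrinks as the search refines its resolution, the statistical term shrinks as repeated samples reduce reward uncertainty, and the collaboration framework balances the two so that neither dominates the regret.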