The ever-growing demand and complexity of machine learning are putting pressure on hyper-parameter tuning systems: as the evaluation cost of models continues to increase, the scalability of state-of-the-art systems is becoming a crucial bottleneck. In this paper, motivated by our experience deploying hyper-parameter tuning for a real-world application in production and by the limitations of existing systems, we propose Hyper-Tune, an efficient and robust distributed hyper-parameter tuning framework. Compared with existing systems, Hyper-Tune features multiple system optimizations, including (1) automatic resource allocation, (2) asynchronous scheduling, and (3) a multi-fidelity optimizer. We conduct extensive evaluations on benchmark datasets and on a large-scale real-world dataset in production. Empirically, with the aid of these optimizations, Hyper-Tune outperforms competitive hyper-parameter tuning systems in a wide range of scenarios, including tuning XGBoost, CNNs, RNNs, and architectural hyper-parameters of neural networks. Compared with the state-of-the-art methods BOHB and A-BOHB, Hyper-Tune achieves up to 11.2x and 5.1x speedups, respectively.