Deploying machine learning models requires both high model quality and compliance with application constraints. This motivates hyperparameter optimization (HPO) to tune model configurations under deployment constraints. These constraints often incur additional computation cost to evaluate, and training ineligible configurations can waste a large amount of tuning budget. In this work, we propose an Adaptive Constraint-aware Early stopping (ACE) method to incorporate constraint evaluation into trial pruning during HPO. To minimize the overall optimization cost, ACE estimates the cost-effective constraint evaluation interval based on a theoretical analysis of the expected evaluation cost. In addition, we propose a stratum early stopping criterion in ACE, which considers both the optimization and constraint metrics in pruning and does not require regularization hyperparameters. Our experiments demonstrate the superior performance of ACE in hyperparameter tuning of classification tasks under fairness or robustness constraints.
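To make the idea concrete, the following is a minimal Python sketch of constraint-aware trial pruning in the spirit of ACE: the expensive constraint metric is only evaluated at a chosen interval, and a trial is pruned if it falls into a worse constraint stratum than its peers, or, within the same stratum, if its optimization metric lags behind. The names (`Trial`, `stratum_prune`), the two-stratum threshold rule, and the median-based comparison are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of constraint-aware early stopping; names and the
# specific pruning rule are illustrative, not the paper's exact method.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Trial:
    """A running trial with per-epoch losses and sparse constraint measurements."""
    losses: List[float] = field(default_factory=list)
    constraint_vals: List[float] = field(default_factory=list)  # e.g., fairness gap


def should_evaluate_constraint(epoch: int, interval: int) -> bool:
    """Evaluate the (expensive) constraint metric only every `interval` epochs."""
    return epoch % interval == 0


def stratum_prune(trial: Trial, peers: List[Trial], threshold: float) -> bool:
    """Prune `trial` if it sits in a strictly worse stratum than some peer
    (stratum 0 = constraint satisfied, stratum 1 = violated); within the same
    stratum, fall back to comparing the optimization metric."""
    if not trial.constraint_vals:
        return False  # no constraint measurement yet; keep training
    my_stratum = 0 if trial.constraint_vals[-1] <= threshold else 1
    peer_strata = [
        0 if p.constraint_vals and p.constraint_vals[-1] <= threshold else 1
        for p in peers
    ]
    if peer_strata and my_stratum > min(peer_strata):
        return True  # a peer already satisfies the constraint; this trial does not
    # Same stratum: prune if the latest loss is worse than the median peer loss.
    peer_losses = sorted(p.losses[-1] for p in peers if p.losses)
    if not peer_losses or not trial.losses:
        return False
    median = peer_losses[len(peer_losses) // 2]
    return trial.losses[-1] > median
```

Under this kind of rule, a configuration that violates the constraint while eligible alternatives exist is stopped early, so its remaining training epochs and constraint evaluations are saved; the evaluation interval trades off how quickly such violations are detected against the cost of computing the constraint metric itself.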