When tuning the architecture and hyperparameters of large machine learning models for on-device deployment, it is desirable to understand the optimal trade-offs between on-device latency and model accuracy. In this work, we leverage recent methodological advances in Bayesian optimization over high-dimensional search spaces and multi-objective Bayesian optimization to efficiently explore these trade-offs for a production-scale on-device natural language understanding model at Facebook.
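The trade-offs described above are captured by the Pareto frontier: the set of model configurations for which no other configuration is both faster and more accurate. As a minimal stdlib sketch of this idea (not the paper's Bayesian optimization machinery, and with entirely hypothetical latency/accuracy numbers):

```python
def pareto_frontier(points):
    """Return the non-dominated (latency_ms, accuracy) points.

    Lower latency and higher accuracy are better. A point is dominated
    if another point is at least as good on both objectives and strictly
    better on at least one.
    """
    frontier = []
    for lat, acc in points:
        dominated = any(
            (l2 <= lat and a2 >= acc) and (l2 < lat or a2 > acc)
            for l2, a2 in points
        )
        if not dominated:
            frontier.append((lat, acc))
    return sorted(frontier)

# Hypothetical candidate configurations: (on-device latency in ms, accuracy).
candidates = [
    (12.0, 0.91),  # small model: fast but less accurate
    (25.0, 0.94),
    (26.0, 0.93),  # dominated by (25.0, 0.94)
    (40.0, 0.95),
    (45.0, 0.94),  # dominated by (25.0, 0.94) and (40.0, 0.95)
]

print(pareto_frontier(candidates))  # the three non-dominated configs
```

Bayesian optimization aims to find such frontiers with far fewer model evaluations than exhaustive enumeration, which matters when each candidate requires training and on-device benchmarking.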