Hyperparameter optimization is a ubiquitous challenge in machine learning, and the performance of a trained model depends crucially on the effective selection of its hyperparameters. While a rich set of tools exists for this purpose, there are currently no practical hyperparameter selection methods under the constraint of differential privacy (DP). We study honest hyperparameter selection for differentially private machine learning, in which the process of hyperparameter tuning is accounted for in the overall privacy budget. To this end, we i) show that standard composition tools outperform more advanced techniques in many settings, ii) empirically and theoretically demonstrate an intrinsic connection between the learning rate and clipping norm hyperparameters, iii) show that adaptive optimizers like DPAdam enjoy a significant advantage in the process of honest hyperparameter tuning, and iv) draw upon novel limiting behaviour of Adam in the DP setting to design a new and more efficient optimizer.
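The connection between the learning rate and the clipping norm mentioned in point ii) can be illustrated with a minimal NumPy sketch of a DP-SGD-style update (per-example gradient clipping followed by Gaussian noise). The function name and parameters here are illustrative, not the paper's implementation: when all per-example gradients exceed the clipping norm C, each clipped gradient has magnitude C, so the update is governed by the product lr · C rather than by either hyperparameter alone.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_mult, rng):
    """One illustrative DP-SGD update: clip each per-example gradient
    to L2 norm <= clip_norm, add Gaussian noise calibrated to the
    clipping norm, average, and take a gradient step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / norm))
    n = len(per_example_grads)
    # Noise std is proportional to clip_norm (the sum's sensitivity).
    noise = rng.normal(0.0, noise_mult * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / n
    return params - lr * noisy_mean

# With noise disabled and all gradients above the clipping threshold,
# (lr=0.1, C=1.0) and (lr=0.01, C=10.0) yield the identical update,
# since only lr * C matters in the clipped regime.
rng = np.random.default_rng(0)
params = np.zeros(3)
grads = [np.array([9.0, 12.0, 0.0]),   # L2 norm 15
         np.array([0.0, 12.0, 16.0])]  # L2 norm 20
p1 = dp_sgd_step(params, grads, lr=0.1, clip_norm=1.0, noise_mult=0.0, rng=rng)
p2 = dp_sgd_step(params, grads, lr=0.01, clip_norm=10.0, noise_mult=0.0, rng=rng)
```

This degeneracy is one reason the learning rate and clipping norm cannot be tuned independently, which shrinks the effective search space during honest hyperparameter tuning.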