Deep neural architectures have a profound impact on the performance achieved in many of today's AI tasks, yet their design still relies heavily on human prior knowledge and experience. Neural architecture search (NAS) together with hyperparameter optimization (HO) helps to reduce this dependence. However, state-of-the-art NAS and HO rapidly become infeasible as growing amounts of data are stored in a distributed fashion, since centralizing such data typically violates data privacy regulations such as GDPR and CCPA. As a remedy, we introduce FEATHERS - $\textbf{FE}$derated $\textbf{A}$rchi$\textbf{T}$ecture and $\textbf{H}$yp$\textbf{ER}$parameter $\textbf{S}$earch, a method that not only jointly optimizes neural architectures and optimization-related hyperparameters in distributed data settings, but also adheres to data privacy through the use of differential privacy (DP). We show that FEATHERS efficiently optimizes architectural and optimization-related hyperparameters alike, and demonstrates convergence on classification tasks at no detriment to model performance while complying with privacy constraints.
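To make the setting concrete, the sketch below is a minimal, hypothetical illustration of the kind of loop the abstract describes, not the paper's actual algorithm or API: simulated clients produce updates for both shared model weights and DARTS-style architecture logits, each update is clipped and noised with the Gaussian mechanism before leaving the client, and the server averages the sanitized updates. All names (`local_update`, `dp_sanitize`, `CLIP_NORM`, `SIGMA`) and the toy gradient stand-ins are assumptions made for illustration.

```python
# Hypothetical sketch: federated joint search over model weights and
# architecture parameters with per-update differential privacy.
# Names and the toy local updates are illustrative, not FEATHERS' API.
import numpy as np

rng = np.random.default_rng(0)

DIM_W, DIM_A = 10, 4          # sizes of model weights / architecture logits
CLIP_NORM, SIGMA = 1.0, 0.5   # DP clipping bound and Gaussian noise multiplier
N_CLIENTS, ROUNDS = 5, 3

w = rng.normal(size=DIM_W)    # shared model weights
alpha = np.zeros(DIM_A)       # architecture logits (DARTS-style op mixture)

def local_update(w, alpha, rng):
    """Stand-in for a client's local training step; returns update vectors."""
    return (rng.normal(scale=0.1, size=w.shape),
            rng.normal(scale=0.1, size=alpha.shape))

def dp_sanitize(update, rng):
    """Clip the update to CLIP_NORM, then add Gaussian-mechanism noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, CLIP_NORM / max(norm, 1e-12))
    return clipped + rng.normal(scale=SIGMA * CLIP_NORM, size=update.shape)

for rnd in range(ROUNDS):
    w_updates, a_updates = [], []
    for _ in range(N_CLIENTS):
        dw, da = local_update(w, alpha, rng)
        w_updates.append(dp_sanitize(dw, rng))   # privacy applied client-side
        a_updates.append(dp_sanitize(da, rng))
    w += np.mean(w_updates, axis=0)       # federated averaging of weights
    alpha += np.mean(a_updates, axis=0)   # ... and of architecture logits
    mix = np.exp(alpha) / np.exp(alpha).sum()  # softmax over candidate ops
    print(f"round {rnd}: op mixture = {np.round(mix, 3)}")
```

Because both the weight and architecture updates pass through the same clip-and-noise step, the privacy guarantee covers the search variables as well as the model, which is the property the abstract claims.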