Deep neural architectures have a profound impact on the performance achieved in many of today's AI tasks, yet their design still relies heavily on human prior knowledge and experience. Neural architecture search (NAS) together with hyperparameter optimization (HO) helps to reduce this dependence. However, state-of-the-art NAS and HO rapidly become infeasible as increasing amounts of data are stored in a distributed fashion, since centralizing such data typically violates privacy regulations like GDPR and CCPA. As a remedy, we introduce FEATHERS - $\textbf{FE}$derated $\textbf{A}$rchi$\textbf{T}$ecture and $\textbf{H}$yp$\textbf{ER}$parameter $\textbf{S}$earch, a method that not only jointly optimizes neural architectures and optimization-related hyperparameters in distributed data settings, but also adheres to data privacy through the use of differential privacy (DP). We show that FEATHERS efficiently optimizes architectural and optimization-related hyperparameters alike, and that it converges on classification tasks at no detriment to model performance while complying with privacy constraints.
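The abstract does not spell out the algorithm, so the following is only a minimal sketch of the general idea it describes: clients contribute DP-sanitized updates to shared architecture parameters while optimization-related hyperparameters are searched alongside. It assumes a DARTS-style relaxed architecture weight vector `alpha`, a toy quadratic client loss, and a Gaussian-mechanism DP step; every function name and constant here is a hypothetical illustration, not the paper's implementation.

```python
# Sketch (not FEATHERS itself): one DP-protected federated round over
# shared architecture weights `alpha`, with a toy hyperparameter grid.
import numpy as np

rng = np.random.default_rng(0)

def local_alpha_grad(alpha, data):
    # Hypothetical client step: gradient of a toy validation loss
    # w.r.t. the relaxed architecture mixing weights `alpha`.
    return 2.0 * (alpha - data.mean(axis=0))

def dp_sanitize(grad, clip_norm=1.0, noise_mult=1.1):
    # Gaussian mechanism: clip the per-client update, then add noise.
    norm = np.linalg.norm(grad)
    grad = grad / max(1.0, norm / clip_norm)
    return grad + rng.normal(0.0, noise_mult * clip_norm, grad.shape)

def federated_round(alpha, client_data, lr):
    # Server aggregates DP-sanitized client gradients (FedAvg-style).
    grads = [dp_sanitize(local_alpha_grad(alpha, d)) for d in client_data]
    return alpha - lr * np.mean(grads, axis=0)

# Joint search: candidate learning rates (the HO part) are tried while
# the architecture weights are updated (the NAS part).
alpha = rng.normal(size=4)                      # relaxed op-mixing weights
clients = [rng.normal(loc=i, size=(32, 4)) for i in range(3)]
for lr in (0.5, 0.1):                           # toy hyperparameter grid
    for _ in range(10):
        alpha = federated_round(alpha, clients, lr)
print("final architecture weights:", alpha)
```

In a real system the hyperparameter candidates would be proposed by an optimizer rather than a fixed grid, and the DP noise scale would be chosen to meet a stated privacy budget.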