Since its debut in 2016, Federated Learning (FL) has been tied to the inner workings of Deep Neural Networks (DNNs). On the one hand, this coupling enabled FL's development and widespread adoption as DNNs proliferated. On the other hand, it neglected the scenarios in which DNNs are not viable or advantageous. The fact that most current FL frameworks can only train DNNs exacerbates this problem. To address the lack of FL solutions for non-DNN-based use cases, we propose MAFL (Model-Agnostic Federated Learning). MAFL marries a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel OpenFL. MAFL is the first FL system not tied to any specific type of machine learning model, allowing exploration of FL scenarios beyond DNNs and trees. We test MAFL from multiple points of view, assessing its correctness, flexibility, and scaling properties up to 64 nodes. We optimised the base software, achieving a 5.5x speedup on a standard FL scenario. MAFL is compatible with x86-64, ARM-v8, Power, and RISC-V.
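To illustrate the model-agnostic idea the abstract refers to, below is a minimal, single-process Python sketch of an AdaBoost.F-style federated round: each simulated client trains an arbitrary weak learner on its private shard, the aggregator keeps the hypothesis with the lowest cross-client weighted error, and clients re-weight their own examples locally. The shard layout, the depth-1 decision-tree weak learner, equal client weighting, and per-client weight normalisation are simplifying assumptions for illustration, not MAFL's actual OpenFL-based protocol.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Simulate three clients, each holding a private shard of the data.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
shards = [(X[i::3], y[i::3]) for i in range(3)]

# Per-client AdaBoost example weights (kept local in a real deployment).
weights = [np.full(len(yc), 1.0 / len(yc)) for _, yc in shards]

def federated_round(shards, weights):
    # 1. Each client trains a weak learner on its weighted local data.
    hypotheses = [
        DecisionTreeClassifier(max_depth=1).fit(Xc, yc, sample_weight=w)
        for (Xc, yc), w in zip(shards, weights)
    ]
    # 2. Every client evaluates every hypothesis; the aggregator sums the
    #    weighted errors and keeps the globally best hypothesis.
    errors = np.zeros(len(hypotheses))
    for j, h in enumerate(hypotheses):
        for (Xc, yc), w in zip(shards, weights):
            errors[j] += np.sum(w[h.predict(Xc) != yc])
    errors /= sum(w.sum() for w in weights)
    best = int(np.argmin(errors))
    eps = errors[best]
    alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-10))
    # 3. Each client re-weights its own examples locally (boost the
    #    misclassified ones) and renormalises.
    for (Xc, yc), w in zip(shards, weights):
        miss = hypotheses[best].predict(Xc) != yc
        w *= np.exp(alpha * np.where(miss, 1.0, -1.0))
        w /= w.sum()
    return hypotheses[best], alpha

ensemble = [federated_round(shards, weights) for _ in range(10)]

def predict(X):
    # Weighted vote over the ensemble (labels 0/1 mapped to -1/+1).
    score = sum(a * (2 * h.predict(X) - 1) for h, a in ensemble)
    return (score > 0).astype(int)

print("train accuracy:", np.mean(predict(X) == y))
```

Because the round only exchanges trained hypotheses and scalar error statistics, nothing above depends on the weak learner being a neural network; swapping `DecisionTreeClassifier` for any scikit-learn-style classifier accepting `sample_weight` leaves the protocol unchanged, which is the model-agnostic property MAFL builds on.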