Federated learning (FL) is a general principle for decentralized clients to train a server model collectively without sharing local data. FL is a promising framework with practical applications, but its standard training paradigm requires the clients to backpropagate through the model to compute gradients. Since these clients are typically edge devices and not fully trusted, executing backpropagation on them incurs computational and storage overhead as well as white-box vulnerability. In light of this, we develop backpropagation-free federated learning, dubbed BAFFLE, in which backpropagation is replaced by multiple forward processes to estimate gradients. BAFFLE is 1) memory-efficient and easily fits uploading bandwidth; 2) compatible with inference-only hardware optimization and model quantization or pruning; and 3) well-suited to trusted execution environments, because the clients in BAFFLE only execute forward propagation and return a set of scalars to the server. Empirically, we use BAFFLE to train deep models from scratch or to finetune pretrained models, achieving acceptable results. Code is available at https://github.com/FengHZ/BAFFLE.
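To make the idea of estimating gradients with forward passes only concrete, below is a minimal sketch of a standard zeroth-order estimator (two-point finite differences along random perturbation directions). This is an illustration of the general technique, not the authors' exact algorithm; the function name, perturbation count, and smoothing scale are illustrative assumptions.

```python
# Minimal sketch of forward-only gradient estimation via finite differences
# with random Gaussian perturbations. Each evaluation is a plain forward pass,
# so no autograd graph is built; a client would only return scalar loss
# differences of this kind to the server.
import torch


def forward_only_grad(model, loss_fn, inputs, targets,
                      num_perturbations=20, sigma=1e-3):
    """Estimate the loss gradient w.r.t. model parameters using only forward passes."""
    params = torch.nn.utils.parameters_to_vector(model.parameters()).detach()
    grad_estimate = torch.zeros_like(params)
    with torch.no_grad():
        base_loss = loss_fn(model(inputs), targets)           # forward pass only
        for _ in range(num_perturbations):
            delta = torch.randn_like(params)                  # random direction
            torch.nn.utils.vector_to_parameters(params + sigma * delta,
                                                model.parameters())
            perturbed_loss = loss_fn(model(inputs), targets)  # forward pass only
            # Scalar loss difference weights the perturbation direction.
            grad_estimate += (perturbed_loss - base_loss) / sigma * delta
        # Restore the original parameters before returning.
        torch.nn.utils.vector_to_parameters(params, model.parameters())
    return grad_estimate / num_perturbations
```

The returned vector can then be used by the server like an ordinary gradient, e.g. in an SGD-style update, while each client only ever runs inference-mode forward computation.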