In this work, we consider the problem of designing secure and efficient federated learning (FL) frameworks. Existing solutions either rely on a trusted aggregator or require heavyweight cryptographic primitives, which degrade performance significantly. Moreover, many existing secure FL designs work only under the restrictive assumption that no client drops out of the training protocol. To tackle these problems, we propose SEFL, a secure and efficient FL framework that (1) eliminates the need for trusted entities; (2) achieves model accuracy comparable to, and in some cases better than, existing FL designs; and (3) is resilient to client dropouts. Through extensive experiments on natural language processing (NLP) tasks, we demonstrate that SEFL achieves accuracy comparable to existing FL solutions, and that the proposed pruning technique improves runtime performance by up to 13.7x.
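The abstract attributes the runtime gain to a pruning technique applied to client updates before aggregation. As a rough illustration only, the sketch below shows generic magnitude-based pruning of a client update; the function magnitude_prune and the keep_ratio parameter are hypothetical and are not taken from the SEFL paper.

```python
import numpy as np

def magnitude_prune(update, keep_ratio=0.1):
    """Keep only the largest-magnitude entries of a client update.

    Generic magnitude-based pruning, not the exact SEFL procedure;
    keep_ratio is an illustrative parameter.
    """
    flat = update.ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Indices of the k largest-magnitude entries.
    top_idx = np.argpartition(np.abs(flat), -k)[-k:]
    pruned = np.zeros_like(flat)
    pruned[top_idx] = flat[top_idx]
    return pruned.reshape(update.shape)

# Example: a client prunes its local update before it enters the
# (cryptographic) aggregation step, shrinking the payload to be protected.
update = np.random.randn(1000)
sparse_update = magnitude_prune(update, keep_ratio=0.1)
print(np.count_nonzero(sparse_update))  # -> 100
```

The intuition is that sparsifying updates reduces the amount of data that must pass through the expensive secure-aggregation path, which is one plausible source of the reported speedup.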