The lottery ticket hypothesis (LTH) claims that a deep neural network (i.e., ground network) contains a number of subnetworks (i.e., winning tickets), each of which achieves inference accuracy on par with that of the ground network. Federated learning (FL) has recently been applied in LotteryFL to discover such winning tickets in a distributed way, achieving higher multi-task learning accuracy than vanilla FL. Nonetheless, LotteryFL relies on unicast transmission on the downlink and does not mitigate stragglers, raising scalability concerns. Motivated by this, in this article we propose a personalized and communication-efficient federated lottery ticket learning algorithm, coined CELL, which exploits downlink broadcast for communication efficiency. Furthermore, it utilizes a novel user grouping method that alternates between FL and lottery learning to mitigate stragglers. Numerical simulations validate that CELL achieves up to 3.6% higher personalized task classification accuracy with a 4.3x smaller total communication cost until convergence on the CIFAR-10 dataset.
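To make the notion of a winning ticket concrete, the following is a minimal sketch of the layer-wise magnitude pruning step commonly used in LTH-style methods to extract a subnetwork mask; it is illustrative only and is not the CELL algorithm itself, and the function names, PyTorch setup, and the prune_ratio value are assumptions.

```python
import torch
import torch.nn as nn

def magnitude_prune_mask(model: nn.Module, prune_ratio: float = 0.2) -> dict:
    """Return per-layer binary masks that zero out the smallest-magnitude weights.

    This is the layer-wise magnitude pruning step typically used to extract a
    "winning ticket" subnetwork (hypothetical helper, not from the paper).
    """
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases / normalization parameters
            continue
        k = int(prune_ratio * param.numel())
        if k == 0:
            masks[name] = torch.ones_like(param)
            continue
        # Threshold at the k-th smallest absolute weight; keep everything above it.
        threshold = param.abs().flatten().kthvalue(k).values
        masks[name] = (param.abs() > threshold).float()
    return masks

def apply_mask(model: nn.Module, masks: dict) -> None:
    """Zero out pruned weights in place so only the ticket's weights stay active."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
```

In LotteryFL-style training, each client would repeat train-prune-rewind rounds with masks of this form, so that only the masked (ticket) parameters need to be exchanged with the server.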