Federated learning has emerged as a new paradigm for collaborative machine learning; however, many prior studies rely on global aggregation over a star topology, with little consideration of communication scalability or of the diurnal property arising from the variety of clients' local times. In contrast, a ring architecture can resolve the scalability issue and even satisfy the diurnal property by iterating over the nodes without an aggregation step. Nevertheless, such ring-based algorithms inherently suffer from a high-variance problem. To this end, we propose a novel algorithm, TornadoAggregate, that improves both accuracy and scalability by exploiting the ring architecture. In particular, to improve accuracy, we reformulate loss minimization as a variance-reduction problem and establish three principles for reducing variance: Ring-Aware Grouping, Small Ring, and Ring Chaining. Experimental results show that TornadoAggregate improves test accuracy by up to 26.7% and achieves near-linear scalability.
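To make the ring setting concrete, the following is a minimal toy sketch (not the paper's TornadoAggregate, and all names are illustrative) of ring-based training on a scalar least-squares objective: a single model circulates through the clients in sequence, each applying one local update, with no central star-topology aggregation. Because each client pulls the model toward its own local data, the result drifts toward the most recently visited clients, illustrating the high-variance problem that the proposed principles aim to reduce.

```python
# Toy sketch of ring-based training without central aggregation.
# Hypothetical setup: 3 clients, each holding scalar data near a
# client-specific mean; the model minimizes (model - x)^2 locally.
import random

def local_update(model, data, lr=0.1):
    """One gradient step on the client's local least-squares loss."""
    grad = sum(2 * (model - x) for x in data) / len(data)
    return model - lr * grad

def ring_pass(model, clients, lr=0.1):
    """Circulate the model once around the ring; each client trains in turn."""
    for data in clients:
        model = local_update(model, data, lr)
    return model

random.seed(0)
# Non-IID toy data: each client's samples cluster around a different mean.
clients = [[random.gauss(mu, 0.1) for _ in range(20)] for mu in (1.0, 2.0, 3.0)]

model = 0.0
for _ in range(50):  # several laps around the ring
    model = ring_pass(model, clients)

# The converged model is biased above the global mean (2.0), toward the
# last client visited in each lap -- the ring's inherent variance issue.
print(model)
```

In a star topology, the server would instead average simultaneous client updates each round; the ring trades that synchronization point for sequential passes, which is what makes it scalable but variance-prone.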