Decentralized Learning (DL) is a peer-to-peer learning approach that allows a group of users to jointly train a machine learning model. To ensure correctness, DL should be robust, i.e., Byzantine users must not be able to tamper with the result of the collaboration. In this paper, we introduce two \textit{new} attacks against DL in which a Byzantine user can (i) make the network converge to an arbitrary model of their choice, and (ii) exclude an arbitrary user from the learning process. We demonstrate the effectiveness of our attacks against Self-Centered Clipping, the state-of-the-art robust DL protocol. Finally, we show that the capabilities decentralization grants to Byzantine users result in decentralized learning \emph{always} providing less robustness than federated learning.
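For orientation, the following is a minimal sketch of the Self-Centered Clipping aggregation step referenced above, assuming its standard formulation in which each user clips its neighbours' models towards its own before mixing; the neighbourhood $\mathcal{N}(i)$, mixing weights $w_{ij}$, and clipping radius $\tau_i$ are notation introduced here for illustration and are not defined in the abstract:
\[
x_i^{t+1} \;=\; x_i^{t} \;+\; \sum_{j \in \mathcal{N}(i)} w_{ij}\,\operatorname{clip}\!\big(x_j^{t} - x_i^{t},\, \tau_i\big),
\qquad
\operatorname{clip}(z, \tau) \;=\; \min\!\Big(1, \tfrac{\tau}{\lVert z \rVert}\Big)\, z .
\]
Since each clipped difference has norm at most $\tau_i$, any single neighbour, honest or Byzantine, can shift user $i$'s model by only a bounded amount per round.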