We study the problem of training personalized deep learning models in a decentralized peer-to-peer setting where data distributions differ between clients and clients pursue different local learning tasks. We consider both covariate and label shift, and our contribution is an algorithm that, for each client, identifies beneficial collaborations based on a similarity estimate for the local task. Our method does not rely on hyperparameters that are hard to estimate, such as the number of client clusters; instead, it continuously adapts to the network topology using soft cluster assignments produced by a novel adaptive gossip algorithm. We evaluate the proposed method in a variety of settings where data is not independent and identically distributed among the clients. The experimental evaluation shows that the proposed method outperforms previous state-of-the-art algorithms for this problem setting and handles situations where previous methods fail.
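The similarity-based soft collaboration described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual estimator: `toy_loss`, the temperature `tau`, and the softmax weighting are all assumptions introduced here, and models are represented as flat parameter vectors.

```python
import numpy as np

def toy_loss(model, data):
    # Hypothetical stand-in for a client's local validation loss:
    # mean squared distance between a flat parameter vector and the
    # client's data centroid (assumption for illustration only).
    return float(np.mean((model - data) ** 2))

def similarity_weights(local_data, peer_models, loss_fn=toy_loss, tau=1.0):
    # Soft collaboration weights: peers whose models fit the local
    # data well (low loss) receive higher weight.  The softmax over
    # negative losses acts as a soft cluster assignment, avoiding a
    # hard choice of "number of clusters".
    losses = np.array([loss_fn(m, local_data) for m in peer_models])
    scores = np.exp(-losses / tau)
    return scores / scores.sum()

def gossip_step(peer_models, weights):
    # One aggregation step: weighted average of peer parameters.
    return weights @ np.stack(peer_models)

# A client whose data centroid is near peer 0's model should weight
# peer 0 far more heavily than the dissimilar peer 1.
local_data = np.array([1.0, 1.0])
peers = [np.array([1.0, 1.0]), np.array([10.0, 10.0])]
w = similarity_weights(local_data, peers)
aggregated = gossip_step(peers, w)
```

Because the weights are a softmax rather than a hard cluster membership, a client can smoothly share updates with several partially similar peers at once, which is the sense in which the assignment is "soft".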