Skin carcinoma is among the most lethal diseases globally. Millions of people are diagnosed with this cancer every year. Still, early detection can substantially decrease medication costs and the mortality rate. Recent advances in automated cancer classification using deep learning have reached human-level performance, but they require a large amount of annotated data assembled in one location, a condition that is usually not feasible to meet. Recently, federated learning (FL) has been proposed to train decentralized models in a privacy-preserving fashion, yet it depends on labeled data at the client side, which is usually unavailable and costly to obtain. To address this, we propose FedPerl, a semi-supervised federated learning method. Our method is inspired by peer learning from educational psychology and ensemble averaging from committee machines. FedPerl builds communities based on clients' similarities. It then encourages community members to learn from each other to generate more accurate pseudo labels for the unlabeled data. We also propose a peer anonymization (PA) technique to improve privacy. As a core component of our method, PA is orthogonal to other methods, adds no extra complexity, and reduces the communication cost while enhancing performance. Finally, we propose a dynamic peer-learning policy that controls the learning stream to avoid performance degradation, especially for individual clients. Our experimental setup consists of 71,000 skin lesion images collected from 5 publicly available datasets. With little annotated data, FedPerl outperforms state-of-the-art SSFL methods and the baselines by 1.8% and 15.8%, respectively. It also generalizes better to an unseen client while being less sensitive to noisy ones.
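To make the two core ideas concrete, the sketch below illustrates, under stated assumptions, how peer anonymization and peer-based pseudo labeling could look in code. The function names (`anonymize_peers`, `pseudo_label`), the toy architecture, and the confidence threshold are illustrative and not taken from the paper; the sketch only shows the general pattern of averaging peer weights into a single anonymized model and ensemble-averaging predictions to pseudo-label unlabeled data.

```python
# Minimal sketch (assumed PyTorch setup; names and architecture are hypothetical).
import copy
import torch
import torch.nn as nn


def anonymize_peers(peer_models):
    """Average the weights of the peer models into one 'anonymized' peer,
    so no individual client's model is shared directly (PA idea)."""
    states = [m.state_dict() for m in peer_models]
    avg_state = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
                 for k in states[0]}
    anonymized = copy.deepcopy(peer_models[0])
    anonymized.load_state_dict(avg_state)
    return anonymized


@torch.no_grad()
def pseudo_label(local_model, anonymized_peer, unlabeled_x, threshold=0.9):
    """Ensemble-average local and peer predictions, then keep only
    confident pseudo labels for semi-supervised training."""
    probs = (torch.softmax(local_model(unlabeled_x), dim=1) +
             torch.softmax(anonymized_peer(unlabeled_x), dim=1)) / 2
    conf, labels = probs.max(dim=1)
    mask = conf >= threshold
    return unlabeled_x[mask], labels[mask]


if __name__ == "__main__":
    # Toy models standing in for client classifiers (8 lesion classes assumed).
    def make_model():
        return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 8))

    peers = [make_model() for _ in range(3)]   # a community of similar clients
    local = make_model()                        # the current client's model
    x_unlabeled = torch.randn(16, 3, 32, 32)    # dummy unlabeled batch

    pa_peer = anonymize_peers(peers)
    x_sel, y_pseudo = pseudo_label(local, pa_peer, x_unlabeled, threshold=0.5)
    print(x_sel.shape, y_pseudo.shape)
```

In this reading, sharing only the averaged peer keeps individual client models private and means a single model is transmitted per community rather than one per peer, which is consistent with the claimed reduction in communication cost.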