Personalized collaborative learning in federated settings faces a critical trade-off between customization and participant trust. Existing approaches typically rely on centralized coordinators or trusted peer groups, limiting their applicability in open, trustless environments. While recent decentralized methods explore anonymous knowledge sharing, they often lack global scalability and robust mechanisms against malicious peers. To bridge this gap, we propose TPFed, a \textit{Trust-free Personalized Decentralized Federated Learning} framework. TPFed replaces the central aggregator with a blockchain-based bulletin board, enabling participants to dynamically select global communication partners via Locality-Sensitive Hashing (LSH) and peer ranking. Crucially, we introduce an ``all-in-one'' knowledge distillation protocol that simultaneously handles knowledge transfer, model quality evaluation, and similarity verification over a public reference dataset. This design ensures secure, globally personalized collaboration without exposing local models or data. Extensive experiments demonstrate that TPFed significantly outperforms traditional federated baselines in both learning accuracy and robustness against adversarial attacks.
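To make the partner-selection step concrete, the following is a minimal sketch, not the paper's actual protocol: random-hyperplane (SimHash-style) LSH over a model's logits on the shared public reference set, with the resulting signature posted to the bulletin board and used to rank candidate peers. All names (`lsh_signature`, `rank_peers`), the signature length, and the ranking rule are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: random-hyperplane LSH over a model's logits on the
# public reference set. Names and hyperparameters are assumptions, not TPFed's
# actual protocol parameters.

def lsh_signature(logits: np.ndarray, planes: np.ndarray) -> np.ndarray:
    """Compact binary signature: the sign pattern of random projections."""
    return (logits.ravel() @ planes > 0).astype(np.uint8)

def hamming_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of matching signature bits (1.0 = identical)."""
    return float(np.mean(a == b))

def rank_peers(own_sig: np.ndarray, board: dict) -> list:
    """Rank bulletin-board entries (peer_id -> signature) by similarity to us."""
    return sorted(board, key=lambda pid: hamming_similarity(own_sig, board[pid]),
                  reverse=True)

rng = np.random.default_rng(0)
dim, n_bits = 100 * 10, 64                   # e.g. 100 reference samples x 10 classes
planes = rng.standard_normal((dim, n_bits))  # shared projections, e.g. from a public seed

own_logits = rng.standard_normal(dim)        # stand-in for local model outputs
own_sig = lsh_signature(own_logits, planes)
board = {f"peer_{i}": lsh_signature(rng.standard_normal(dim), planes) for i in range(8)}
print(rank_peers(own_sig, board)[:3])        # three most similar candidate partners
```

Under these assumptions, only the short binary signature is published, which is consistent with the abstract's claim that partner selection exposes neither local models nor data.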