Decentralized federated learning (DFL) enables collaborative model training across edge devices without centralized coordination, offering resilience against single points of failure. However, statistical heterogeneity arising from non-identically distributed local data creates a fundamental challenge: nodes must learn personalized models adapted to their local distributions while selectively collaborating with compatible peers. Existing approaches either enforce a single global model that fits no one well, or rely on heuristic peer selection mechanisms that cannot distinguish between peers with genuinely incompatible data distributions and those with valuable complementary knowledge. We present Murmura, a framework that leverages evidential deep learning to enable trust-aware model personalization in DFL. Our key insight is that epistemic uncertainty from Dirichlet-based evidential models directly indicates peer compatibility: high epistemic uncertainty when a peer's model evaluates local data reveals distributional mismatch, enabling nodes to exclude incompatible influence while maintaining personalized models through selective collaboration. Murmura introduces a trust-aware aggregation mechanism that computes peer compatibility scores through cross-evaluation on local validation samples and personalizes model aggregation based on evidential trust with adaptive thresholds. Evaluation on three wearable IoT datasets (UCI HAR, PAMAP2, PPG-DaLiA) demonstrates that Murmura reduces performance degradation from IID to non-IID conditions compared to the baseline (0.9% vs. 19.3%), achieves 7.4$\times$ faster convergence, and maintains stable accuracy across hyperparameter choices. These results establish evidential uncertainty as a principled foundation for compatibility-aware personalization in decentralized heterogeneous environments.
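To make the compatibility signal concrete, the sketch below illustrates how epistemic uncertainty falls out of a Dirichlet-based evidential classifier and how it could drive trust-weighted aggregation. It is a minimal sketch assuming the standard evidential formulation ($\alpha_k = e_k + 1$, $S = \sum_k \alpha_k$, uncertainty $u = K/S$); the function names (`epistemic_uncertainty`, `peer_trust`, `trust_weighted_aggregate`), the fixed exclusion threshold, and the linear trust weighting are illustrative assumptions, not Murmura's actual adaptive-threshold mechanism.

```python
import numpy as np

def epistemic_uncertainty(evidence: np.ndarray) -> np.ndarray:
    """Epistemic uncertainty of a Dirichlet evidential classifier.

    Standard evidential formulation: alpha_k = evidence_k + 1,
    S = sum_k alpha_k, u = K / S. `evidence` has shape
    (n_samples, n_classes) with non-negative entries.
    """
    alpha = evidence + 1.0
    strength = alpha.sum(axis=-1)      # Dirichlet strength S per sample
    num_classes = evidence.shape[-1]   # K
    return num_classes / strength      # u in (0, 1]

def peer_trust(peer_evidence: np.ndarray, threshold: float = 0.5) -> float:
    """Hypothetical compatibility score from cross-evaluation.

    `peer_evidence` is the evidence a peer's model produces on this
    node's local validation samples. High mean epistemic uncertainty
    signals distributional mismatch; peers above `threshold` are
    excluded (trust = 0). A fixed threshold stands in here for the
    paper's adaptive one.
    """
    u = epistemic_uncertainty(peer_evidence).mean()
    return 0.0 if u > threshold else float(1.0 - u)

def trust_weighted_aggregate(local_params, peer_params, trusts):
    """Personalized aggregation: blend local weights with trusted peers.

    `local_params` is this node's flattened parameter vector,
    `peer_params` a list of peer parameter vectors, and `trusts` the
    matching compatibility scores. The local model always keeps
    weight 1.0, so personalization survives even if all peers are
    excluded.
    """
    weights = np.array([1.0, *trusts])
    stacked = np.stack([local_params, *peer_params])
    return (weights[:, None] * stacked).sum(axis=0) / weights.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Compatible peer: strong evidence on local validation data.
    good = rng.uniform(5.0, 10.0, size=(32, 6))
    # Incompatible peer: near-zero evidence, i.e. high epistemic
    # uncertainty, as expected under distributional mismatch.
    bad = rng.uniform(0.0, 0.2, size=(32, 6))
    print(peer_trust(good), peer_trust(bad))  # high trust vs. 0.0
```

Anchoring the local model at weight 1.0 in this sketch reflects the selective-collaboration idea in the abstract: an incompatible peer contributes nothing, while a compatible peer contributes in proportion to the confidence its model shows on the node's own data.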