Federated edge learning (FEEL) provides a promising foundation for edge artificial intelligence (AI) by enabling collaborative model training while preserving data privacy. However, limited and heterogeneous local datasets, together with resource-constrained deployment, severely degrade both model generalization and resource utilization, leading to compromised learning performance. We therefore propose a parameter-efficient FEEL framework that jointly leverages model pruning and client selection to tackle these challenges. First, we derive an information-theoretic generalization bound that characterizes the discrepancy between the training and testing losses and embed it into the convergence analysis, revealing that a larger local generalization bound can undermine global convergence. We then formulate a generalization-aware minimization problem over the average squared gradient norm bound, jointly optimizing the pruning ratios, client selection, and communication-computation resources under energy and delay constraints. Although the resulting mixed-integer problem is non-convex, it is solved efficiently via an alternating optimization algorithm. Extensive experiments demonstrate that the proposed design achieves superior learning performance over state-of-the-art baselines, validating the effectiveness of coupling generalization-aware analysis with system-level optimization for efficient FEEL.
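For concreteness, a representative bound of this type is the mutual-information bound of Xu and Raginsky (2017); the bound actually derived in this work may differ in form. For a hypothesis $W$ learned from a dataset $S$ of $n$ i.i.d. samples under a $\sigma$-sub-Gaussian loss, it states
\[
\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right| \;\le\; \sqrt{\frac{2\sigma^2\, I(W;S)}{n}},
\]
where $L_\mu$ and $L_S$ denote the expected (testing) and empirical (training) losses and $I(W;S)$ is the mutual information between the learned model and the training data. A larger $I(W;S)$, or a smaller local dataset size $n$, widens the training-testing gap, which is consistent with the claim that clients with larger local generalization bounds hinder global convergence.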
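The following is a minimal, self-contained sketch of the alternating-optimization pattern on a toy surrogate objective, alternating between a continuous block (pruning ratios) and an integer block (client selection). The quadratic cost model, the closed-form pruning update, and the greedy size-$M$ selection rule are illustrative assumptions, not the subproblem solvers used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 10, 4                  # K candidate clients, select M per round
a = rng.uniform(1.0, 2.0, K)  # per-client curvature of the toy surrogate bound
b = rng.uniform(0.5, 1.5, K)  # per-client linear benefit of pruning
RHO_MAX = 0.9                 # cap on the pruning ratio

def client_cost(rho):
    """Toy surrogate for a client's contribution to the gradient-norm bound."""
    return a * rho**2 - b * rho

rho = np.zeros(K)                       # pruning ratios (continuous block)
selected = rng.choice(K, M, replace=False)  # client subset (integer block)

prev = np.inf
for it in range(50):
    # Block 1: fix the subset, minimize over pruning ratios. For this
    # quadratic toy objective the minimizer is rho_k = b_k / (2 a_k), clipped.
    rho[selected] = np.clip(b[selected] / (2 * a[selected]), 0.0, RHO_MAX)
    # Block 2: fix the ratios, pick the M clients with the smallest cost
    # (a greedy stand-in for the integer client-selection subproblem).
    selected = np.argsort(client_cost(rho))[:M]

    obj = client_cost(rho)[selected].sum()
    if prev - obj < 1e-9:               # stop when the surrogate stops improving
        break
    prev = obj

print(f"converged in {it + 1} rounds, objective = {obj:.4f}")
```

Because each block exactly minimizes the shared objective with the other block fixed, the objective value is monotonically non-increasing across rounds, so the loop is guaranteed to terminate; this is the standard argument for block-coordinate schemes of the kind the abstract describes, although the paper's actual problem additionally couples energy and delay constraints across the blocks.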