We introduce Constraint-Aware Federated Learning with Lagrangian Dual Optimization (CAFL-L), a principled extension of FedAvg that explicitly incorporates device-level resource constraints, including energy, communication, memory, and thermal budgets. CAFL-L employs Lagrangian dual optimization to dynamically adapt training hyperparameters -- freezing depth, local steps, batch size, and communication compression -- while preserving training stability by holding each round's token budget fixed through gradient accumulation. Experiments on a character-level language model demonstrate that CAFL-L achieves superior constraint satisfaction compared to standard FedAvg (reducing memory usage by 20% and communication by 95%) while maintaining competitive validation performance, making it practical for deployment on resource-constrained edge devices.
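To make the mechanism above concrete, the following is a minimal Python sketch of the two pieces the abstract names: projected dual ascent on per-resource Lagrange multipliers, and a multiplier-to-hyperparameter mapping that preserves the token budget via gradient accumulation. All names (`Budgets`, `dual_ascent`, `adapt_hyperparams`), step sizes, and the specific constraint-to-knob mappings are illustrative assumptions, not the paper's reference implementation.

```python
from dataclasses import dataclass


@dataclass
class Budgets:
    """Per-round device budgets (units are illustrative)."""
    energy: float   # joules per round
    comm: float     # MB uploaded per round
    memory: float   # peak MB
    thermal: float  # degrees C of headroom


def dual_ascent(lmbda: dict, usage: Budgets, budget: Budgets,
                eta: float = 0.1) -> dict:
    """Projected gradient ascent on the multipliers:
    lambda_k <- max(0, lambda_k + eta * (usage_k - budget_k)),
    so a multiplier grows only while its constraint is violated."""
    out = {}
    for k in ("energy", "comm", "memory", "thermal"):
        violation = getattr(usage, k) - getattr(budget, k)
        out[k] = max(0.0, lmbda[k] + eta * violation)
    return out


def adapt_hyperparams(lmbda: dict, base_steps: int = 16, base_batch: int = 32):
    """Map multipliers to training knobs while holding the per-round
    token budget (steps * batch) fixed: when the batch shrinks to relieve
    memory pressure, gradient accumulation makes up the difference."""
    # Higher energy pressure -> fewer local optimizer steps.
    local_steps = max(1, int(base_steps / (1.0 + lmbda["energy"])))
    # Higher memory pressure -> smaller per-step batch.
    batch_size = max(1, int(base_batch / (1.0 + lmbda["memory"])))
    # Gradient accumulation restores the original token budget.
    accum_steps = max(1, (base_steps * base_batch) // (local_steps * batch_size))
    # Higher communication pressure -> stronger update compression
    # (e.g., the fraction of coordinates kept under top-k sparsification).
    keep_ratio = 1.0 / (1.0 + 10.0 * lmbda["comm"])
    return local_steps, batch_size, accum_steps, keep_ratio


# Example: one server-side adaptation step for a memory-constrained client.
lmbda = {"energy": 0.0, "comm": 0.0, "memory": 0.0, "thermal": 0.0}
usage = Budgets(energy=4.0, comm=12.0, memory=900.0, thermal=2.0)
budget = Budgets(energy=5.0, comm=10.0, memory=512.0, thermal=5.0)
lmbda = dual_ascent(lmbda, usage, budget)
print(adapt_hyperparams(lmbda))
```

Under this sketch, a client that overshoots its memory budget sees its multiplier rise, its batch size fall, and its accumulation steps rise in compensation, which is one way to realize the token-budget preservation the abstract describes.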