We propose a novel training recipe for federated learning over heterogeneous networks, where each device may have a different architecture. We introduce an auxiliary (side) objective for the higher-complexity devices, enabling different architectures to be trained jointly in a federated setting. We empirically show that our approach improves the performance of the different architectures and yields substantial communication savings compared to state-of-the-art methods.