This letter studies a vertical federated edge learning (FEEL) system for collaborative object/human motion recognition by exploiting distributed integrated sensing and communication (ISAC). In this system, distributed edge devices first transmit wireless signals to sense targeted objects/humans, and then exchange intermediate computed vectors (instead of raw sensing data) for collaborative recognition while preserving data privacy. To boost the spectrum and hardware utilization efficiency for FEEL, we exploit ISAC for both target sensing and data exchange by employing dedicated frequency-modulated continuous-wave (FMCW) signals at each edge device. Under this setup, we propose a vertical FEEL framework for realizing the recognition based on the collected multi-view wireless sensing data. In this framework, each edge device owns an individual local L-model that transforms its sensing data into an intermediate vector of relatively low dimension, which is then transmitted to a coordinating edge device for final output via a common downstream S-model. For a human motion recognition task, experimental results show that our vertical FEEL based approach achieves recognition accuracy of up to 98\%, an improvement of up to 8\% over the benchmarks, including on-device training and horizontal FEEL.
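To make the vertical FEEL forward pass concrete, the following is a minimal sketch (not the authors' implementation) of the architecture described above: each edge device holds a local L-model that maps its sensing data to a low-dimensional intermediate vector, and a coordinating device fuses the received vectors with a common downstream S-model for motion classification. The number of devices, layer sizes, input shape, and number of motion classes are all illustrative assumptions.

```python
# Minimal sketch of the vertical FEEL forward pass, assuming PyTorch.
# All dimensions below are hypothetical and not taken from the letter.
import torch
import torch.nn as nn

NUM_DEVICES = 3        # assumed number of distributed sensing edge devices
INTERMEDIATE_DIM = 32  # assumed low dimension of the exchanged vectors
NUM_CLASSES = 6        # assumed number of human motion classes


class LocalLModel(nn.Module):
    """Per-device L-model: sensing data -> low-dimensional intermediate vector."""

    def __init__(self, in_features: int = 64 * 64, out_dim: int = INTERMEDIATE_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DownstreamSModel(nn.Module):
    """Common S-model at the coordinating device: fused vectors -> class logits."""

    def __init__(self, num_devices: int = NUM_DEVICES, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(num_devices * INTERMEDIATE_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, intermediate_vectors: list[torch.Tensor]) -> torch.Tensor:
        # Concatenate the vectors received from all devices (multi-view fusion).
        fused = torch.cat(intermediate_vectors, dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    # Each device observes its own view of the scene; a batch of 8 samples of a
    # 64x64 sensing map (e.g., a spectrogram) is an assumed input format.
    local_models = [LocalLModel() for _ in range(NUM_DEVICES)]
    s_model = DownstreamSModel()

    views = [torch.randn(8, 64, 64) for _ in range(NUM_DEVICES)]
    # Only these low-dimensional vectors (not raw sensing data) would be sent
    # to the coordinating device, which is what preserves data privacy.
    vectors = [m(v) for m, v in zip(local_models, views)]
    logits = s_model(vectors)
    print(logits.shape)  # torch.Size([8, 6])
```

In this sketch, only the intermediate vectors cross the device boundary, mirroring the vertical partitioning in the letter; how the L-models are instantiated and trained from the FMCW sensing data is left to the main text.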