This paper investigates task-oriented communication for multi-device cooperative edge inference, where a group of distributed low-end edge devices transmit the extracted features of local samples to a powerful edge server for inference. While cooperative edge inference can overcome the limited sensing capability of a single device, it substantially increases the communication overhead and may incur excessive latency. To enable low-latency cooperative inference, we propose a learning-based communication scheme that optimizes local feature extraction and distributed feature encoding in a task-oriented manner, i.e., to remove data redundancy and transmit only the information essential for the downstream inference task rather than reconstructing the data samples at the edge server. Specifically, we leverage the information bottleneck (IB) principle to extract the task-relevant features at each edge device and adopt a distributed information bottleneck (DIB) framework to formalize a single-letter characterization of the optimal rate-relevance tradeoff for distributed feature encoding. To enable flexible control of the communication overhead, we extend the DIB framework to a distributed deterministic information bottleneck (DDIB) objective that explicitly incorporates the representational costs of the encoded features. As the IB-based objectives are computationally prohibitive for high-dimensional data, we adopt variational approximations to make the optimization problems tractable. To compensate for the potential performance loss due to the variational approximations, we also develop a selective retransmission (SR) mechanism that identifies redundancy in the encoded features of multiple edge devices to attain additional communication overhead reduction. Extensive experiments demonstrate that the proposed task-oriented communication scheme achieves a better rate-relevance tradeoff than baseline methods.
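The variational IB objective described above trades off a rate term (how many bits the encoded feature carries) against a relevance term (how useful the feature is for the task). A minimal sketch of such a loss, assuming a Gaussian encoder with a standard normal variational prior and a classification task (the function and parameter names below are illustrative, not the paper's implementation):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # Rate term: KL( N(mu, diag(exp(log_var))) || N(0, I) ), per sample.
    # This upper-bounds the mutual information between input and encoding.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def cross_entropy(logits, labels):
    # Relevance (distortion) term: softmax cross-entropy, per sample,
    # computed with the max-subtraction trick for numerical stability.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

def vib_loss(mu, log_var, logits, labels, beta=1e-3):
    # Variational IB objective: distortion + beta * rate, batch-averaged.
    # beta controls the rate-relevance tradeoff (larger beta = fewer bits).
    rate = gaussian_kl(mu, log_var)
    distortion = cross_entropy(logits, labels)
    return float(np.mean(distortion + beta * rate))
```

In the distributed (DIB/DDIB) setting, each device would contribute its own rate term while a single relevance term is evaluated at the server over the aggregated features; the sketch above shows only the single-encoder case.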