Multi-task learning (MTL) is an efficient way to improve the performance of related tasks by sharing knowledge. However, most existing MTL networks run entirely on a single device and are therefore ill-suited to collaborative intelligence (CI) scenarios. In this work, we propose an MTL network with a deep joint source-channel coding (JSCC) framework that enables operation in CI scenarios. We first propose a feature-fusion-based MTL network (FFMNet) for joint object detection and semantic segmentation. Compared with other MTL networks, FFMNet achieves higher performance with fewer parameters. FFMNet is then split into two parts that run on a mobile device and an edge server, respectively. The intermediate feature generated on the mobile device is transmitted over a wireless channel to the edge server. To reduce the transmission overhead of this feature, a deep JSCC network is designed. By combining the two networks, the whole model achieves 512× compression of the intermediate feature with a performance loss of less than 2% on both tasks. Finally, by training with channel noise, FFMNet with JSCC is robust to varying channel conditions and outperforms a separate source and channel coding scheme.
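The split-computing pipeline described above can be illustrated with a minimal sketch. This is not the paper's network: the actual deep JSCC encoder/decoder are learned convolutional layers trained jointly with FFMNet, whereas here a fixed linear projection stands in for the encoder, an AWGN channel models the wireless link, and the pseudoinverse stands in for the decoder. The feature dimension, code dimension, and SNR value are illustrative assumptions chosen only to reproduce the 512× compression ratio.

```python
import numpy as np

# Illustrative sketch of the JSCC transmission path (NOT the paper's model):
# a 2048-dim intermediate feature is compressed to 4 channel symbols
# (2048 / 4 = 512x compression), corrupted by AWGN, and linearly decoded.
rng = np.random.default_rng(0)
feat_dim, code_dim = 2048, 4
W = rng.standard_normal((code_dim, feat_dim)) / np.sqrt(feat_dim)

def jscc_transmit(feature, snr_db=10.0):
    """Encode the feature, pass it through an AWGN channel, decode it."""
    symbols = W @ feature                              # joint source-channel encode
    power = np.mean(symbols ** 2)                      # average symbol power
    noise_std = np.sqrt(power / 10 ** (snr_db / 10))   # noise level for target SNR
    received = symbols + rng.normal(0.0, noise_std, symbols.shape)
    return np.linalg.pinv(W) @ received                # linear decode (stand-in)

feature = rng.standard_normal(feat_dim)
recon = jscc_transmit(feature, snr_db=10.0)
print("compression ratio:", feat_dim // code_dim)
```

Training the real encoder/decoder with noise injected at a range of SNRs is what gives the learned system its robustness to varying channel conditions.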