Point cloud video transmission is challenging due to high encoding/decoding complexity, high video bitrate, and low latency requirements. Consequently, conventional adaptive streaming methods fall short in three respects: 1) current algorithms reuse existing quality of experience (QoE) definitions while overlooking the unique features of point cloud video, thus failing to provide an optimal user experience; 2) most deep learning approaches require long-term data collection to learn sufficiently varied network conditions, resulting in long training periods and heavy resource consumption; 3) cloud-based training approaches pose privacy risks from leaking user-reported service usage and network conditions. To overcome these limitations, we present FRAS, to the best of our knowledge the first federated reinforcement learning framework for adaptive point cloud video streaming. We define a new QoE model that takes the unique features of point cloud video into account. Each client uses reinforcement learning (RL) to train encoding rate selection with the objective of optimizing the user's QoE under multiple constraints. A federated learning framework is then integrated with the RL algorithm to improve training performance while preserving privacy. Extensive simulations using real point cloud videos and network traces show that the proposed scheme outperforms baseline schemes. We also implement a prototype that demonstrates the performance of FRAS via real-world tests.
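The abstract describes clients training RL policies locally and a server aggregating them under a federated framework. A minimal sketch of the FedAvg-style aggregation step commonly used in such setups is shown below; the `Client` structure and `fedavg` function are illustrative assumptions, not the paper's actual implementation, and the "policy" is reduced to a plain weight vector for brevity.

```python
# Hypothetical sketch: the server averages client policy parameters,
# weighted by each client's amount of local experience, without ever
# seeing the raw usage or network data itself.
from dataclasses import dataclass
from typing import List

@dataclass
class Client:
    weights: List[float]   # local RL policy parameters (assumed flattened)
    num_samples: int       # size of the client's local experience buffer

def fedavg(clients: List[Client]) -> List[float]:
    """Sample-weighted average of client parameters (FedAvg-style)."""
    total = sum(c.num_samples for c in clients)
    agg = [0.0] * len(clients[0].weights)
    for c in clients:
        share = c.num_samples / total
        for i, w in enumerate(c.weights):
            agg[i] += share * w
    return agg

# Two clients with different data volumes: the second contributes more.
clients = [Client([1.0, 0.0], 100), Client([0.0, 1.0], 300)]
print(fedavg(clients))  # -> [0.25, 0.75]
```

In a full system the aggregated parameters would be broadcast back to the clients for the next round of local RL training, which is how federated learning shortens per-client data collection while keeping raw traces on-device.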