This paper introduces a dual-critic reinforcement learning (RL) framework to address the problem of frame-level bit allocation in HEVC/H.265. The objective is to minimize the distortion of a group of pictures (GOP) under a rate constraint. Previous RL-based methods tackle such a constrained optimization problem by maximizing a single reward function that often combines a distortion reward and a rate reward. However, the way these rewards are combined is usually ad hoc and may not generalize well to various coding conditions and video sequences. To overcome this issue, we adapt the deep deterministic policy gradient (DDPG) reinforcement learning algorithm for use with two critics, with one learning to predict the distortion reward and the other the rate reward. In particular, the distortion critic works to update the agent when the rate constraint is satisfied. By contrast, the rate critic makes the rate constraint a priority when the agent goes over the bit budget. Experimental results on commonly used datasets show that our method outperforms the bit allocation scheme in x265 and the single-critic baseline by a significant margin in terms of rate-distortion performance while offering fairly precise rate control.
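The constraint-dependent critic switching described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function names (`grad_q_distortion`, `grad_q_rate`), the linear policy parameters, and the learning rate are illustrative assumptions. The sketch only shows the core idea: the actor follows the distortion critic's policy gradient while the bit budget is respected, and switches to the rate critic's gradient once the budget is exceeded.

```python
import numpy as np

def dual_critic_actor_update(theta, state, bits_used, bit_budget,
                             grad_q_distortion, grad_q_rate, lr=1e-3):
    """One actor update step under the dual-critic rule (sketch).

    theta:             policy parameters (illustrative; real DDPG uses a network)
    bits_used:         bits consumed so far in the GOP
    bit_budget:        the GOP-level rate constraint
    grad_q_distortion: callable returning dQ_distortion/dtheta at (theta, state)
    grad_q_rate:       callable returning dQ_rate/dtheta at (theta, state)
    """
    if bits_used <= bit_budget:
        # Rate constraint satisfied: the distortion critic drives the update.
        grad = grad_q_distortion(theta, state)
    else:
        # Over the bit budget: the rate critic takes priority.
        grad = grad_q_rate(theta, state)
    # Gradient ascent on the selected critic's Q-value estimate.
    return theta + lr * grad

# Toy usage with dummy critic gradients (purely illustrative).
theta = np.zeros(3)
state = np.array([0.5, 0.2, 0.1])
gd = lambda th, s: np.ones_like(th)    # stand-in distortion-critic gradient
gr = lambda th, s: -np.ones_like(th)   # stand-in rate-critic gradient

within = dual_critic_actor_update(theta, state, bits_used=800,
                                  bit_budget=1000,
                                  grad_q_distortion=gd, grad_q_rate=gr)
over = dual_critic_actor_update(theta, state, bits_used=1200,
                                bit_budget=1000,
                                grad_q_distortion=gd, grad_q_rate=gr)
print(within, over)
```

In the paper's full method each critic is a learned Q-network trained from its own reward signal (distortion or rate), rather than the hand-supplied gradients used here for brevity.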