Value estimation is a key problem in Reinforcement Learning. Although Deep Reinforcement Learning (DRL) has achieved many successes in different fields, the underlying structure and learning dynamics of the value function, especially with complex function approximation, are not fully understood. In this paper, we report that a decreasing rank of the $Q$-matrix widely occurs during the learning process across a series of continuous control tasks for different popular algorithms. We hypothesize that this low-rank phenomenon reflects a common learning dynamic of the $Q$-matrix, evolving from a stochastic high-dimensional space to a smooth low-dimensional one. Moreover, we reveal a positive correlation between value-matrix rank and value-estimation uncertainty. Inspired by the above evidence, we propose a novel Uncertainty-Aware Low-rank Q-matrix Estimation (UA-LQE) algorithm as a general framework to facilitate the learning of the value function. By quantifying the uncertainty of state-action value estimates, we selectively erase the entries with highly uncertain values in the state-action value matrix and recover them through low-rank matrix reconstruction. Such a reconstruction exploits the underlying structure of the value matrix to improve value approximation, thus leading to more efficient learning of the value function. In our experiments, we evaluate the efficacy of UA-LQE on several representative OpenAI MuJoCo continuous control tasks.
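The erase-and-reconstruct idea can be illustrated with standard low-rank matrix completion. The following minimal NumPy sketch is not the paper's implementation: the uncertainty scores are a random stand-in, and the completion routine is plain iterative truncated-SVD imputation. It erases the most uncertain entries of a toy $Q$-matrix and recovers them under a low-rank assumption.

```python
import numpy as np

def low_rank_impute(Q, observed, rank=2, n_iters=50):
    """Recover erased entries of Q by iterative truncated-SVD completion.

    Q        : (S, A) value matrix with unreliable entries
    observed : boolean mask, True where the entry is kept as-is
    """
    X = np.where(observed, Q, 0.0)  # initialize erased entries at zero
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0              # hard-threshold to the top-`rank` singular values
        X_low = (U * s) @ Vt        # best rank-`rank` approximation of X
        X = np.where(observed, Q, X_low)  # keep observed entries, update erased ones
    return X

# Toy example: an exactly rank-2 "Q-matrix" over 20 states x 10 actions.
rng = np.random.default_rng(0)
Q_true = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 10))

# Stand-in for estimated value uncertainty (the paper quantifies this properly);
# erase roughly the 20% most uncertain entries and reconstruct them.
uncertainty = rng.random(Q_true.shape)
observed = uncertainty < 0.8
Q_hat = low_rank_impute(Q_true, observed, rank=2)
```

With a genuinely low-rank matrix and most entries observed, the iteration recovers the erased values closely; in the actual algorithm, the reconstructed entries would then serve as improved value targets.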