We study the Policy-extended Value Function Approximator (PeVFA) in Reinforcement Learning (RL), which extends the conventional value function approximator (VFA) to take as input not only the state (and action) but also an explicit policy representation. Such an extension enables PeVFA to preserve the values of multiple policies at the same time and brings an appealing characteristic, i.e., \emph{value generalization among policies}. We formally analyze this value generalization under Generalized Policy Iteration (GPI). Through theoretical and empirical lenses, we show that the generalized value estimates offered by PeVFA may have lower initial approximation error with respect to the true values of successive policies, which is expected to improve consecutive value approximation during GPI. Based on these insights, we introduce a new form of GPI with PeVFA which leverages value generalization along the policy improvement path. Moreover, we propose a representation learning framework for RL policies, providing several approaches to learn effective policy embeddings from policy network parameters or state-action pairs. In our experiments, we evaluate the efficacy of the value generalization offered by PeVFA and of policy representation learning in several OpenAI Gym continuous control tasks. As a representative instance of algorithm implementation, Proximal Policy Optimization (PPO) re-implemented under the paradigm of GPI with PeVFA achieves about 40\% performance improvement over its vanilla counterpart in most environments.