Sample efficiency and risk-awareness are central to the development of practical reinforcement learning (RL) for complex decision-making. The former can be addressed by transfer learning, and the latter by optimizing some utility function of the return. However, the problem of transferring skills in a risk-aware manner is not well understood. In this paper, we address the problem of risk-aware policy transfer between tasks in a common domain that differ only in their reward functions, where risk is measured by the variance of reward streams. Our approach begins by extending the idea of generalized policy improvement to maximize entropic utilities, thus lifting the policy improvement step of dynamic programming to sets of policies and levels of risk-aversion. Next, we extend the idea of successor features (SF), a value function representation that decouples the environment dynamics from the rewards, to capture the variance of returns. The resulting risk-aware successor features (RaSF) integrate seamlessly within the RL framework, inherit the superior task generalization ability of SFs, and incorporate risk-awareness into the decision-making. Experiments on a discrete navigation domain and on control of a simulated robotic arm demonstrate that RaSFs outperform alternative methods, including SFs, when the risk of the learned policies is taken into account.
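For reference, a minimal sketch of the two ingredients named in the abstract, written with standard definitions (the paper's exact formulation and sign conventions may differ): the entropic utility of a random return $G$ at risk-aversion level $\beta > 0$, and the successor-feature decomposition of the action-value function for a task with reward weights $\mathbf{w}$.
\[
U_\beta[G] \;=\; -\tfrac{1}{\beta}\,\log \mathbb{E}\!\left[e^{-\beta G}\right]
\;\approx\; \mathbb{E}[G] \;-\; \tfrac{\beta}{2}\,\mathrm{Var}[G],
\]
\[
\psi^{\pi}(s,a) \;=\; \mathbb{E}^{\pi}\!\Big[\textstyle\sum_{t\ge 0} \gamma^{t}\,\phi(s_t,a_t,s_{t+1}) \,\Big|\, s_0=s,\,a_0=a\Big],
\qquad
Q^{\pi}_{\mathbf{w}}(s,a) \;=\; \psi^{\pi}(s,a)^{\top}\mathbf{w},
\]
where $r(s,a,s') = \phi(s,a,s')^{\top}\mathbf{w}$ is the assumed linear reward decomposition underlying SFs. The second-order expansion of $U_\beta$ makes explicit how the entropic utility penalizes return variance, which is the notion of risk used above; extending $\psi^{\pi}$ to also capture that variance is the step the abstract refers to as risk-aware successor features.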