Reinforcement learning algorithms are typically geared towards optimizing the expected return of an agent. However, in many practical applications, low variance in the return is desired to ensure the reliability of an algorithm. In this paper, we propose on-policy and off-policy actor-critic algorithms that optimize a performance criterion involving both the mean and the variance of the return. Previous work estimates the variance indirectly through the second moment of the return. Instead, we use a much simpler, recently proposed direct variance estimator that updates its estimates incrementally using temporal-difference methods. Under the variance-penalized criterion, we guarantee the convergence of our algorithm to locally optimal policies for finite state-action Markov decision processes. We demonstrate the utility of our algorithm in tabular and continuous MuJoCo domains. Our approach not only performs on par with actor-critic and prior variance-penalization baselines in terms of expected return, but also generates trajectories with lower variance in the return.
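To make the ingredients of the abstract concrete, below is a minimal tabular sketch, not the paper's exact algorithm, of a variance-penalized actor-critic step. It assumes a softmax policy over discrete actions; the variance critic is updated directly with a TD-style rule whose "reward" is the squared value TD error and whose discount is gamma squared, in the spirit of the direct variance estimator mentioned above. Names such as `psi` (the variance-penalty coefficient) and the step sizes `alpha_*` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def vpac_step(s, a, r, s_next, done, V, var, theta,
              gamma=0.99, psi=0.1,
              alpha_v=0.1, alpha_var=0.1, alpha_pi=0.01):
    """One sketch update of a variance-penalized actor-critic in the
    tabular setting. V, var are 1-D value/variance tables; theta is a
    (num_states, num_actions) table of softmax policy preferences."""
    # TD error for the value (mean-return) critic.
    delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
    V[s] += alpha_v * delta

    # Direct variance critic: the squared TD error plays the role of the
    # reward, and the discount becomes gamma**2.
    delta_bar = delta ** 2 + (0.0 if done else gamma ** 2 * var[s_next]) - var[s]
    var[s] += alpha_var * delta_bar

    # Actor update on the variance-penalized criterion
    # J = E[return] - psi * Var[return], using the log-gradient of a
    # softmax policy over action preferences.
    prefs = theta[s]
    pi = np.exp(prefs - prefs.max())
    pi /= pi.sum()
    grad_log = -pi
    grad_log[a] += 1.0
    theta[s] += alpha_pi * (delta - psi * delta_bar) * grad_log
    return V, var, theta
```

Setting `psi = 0` recovers a standard one-step actor-critic update, which makes the role of the variance penalty in the actor's update explicit.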