Classical reinforcement learning (RL) aims to optimize the expected cumulative reward. In this work, we consider the RL setting in which the goal is to optimize a quantile of the cumulative reward. We parameterize the policy that controls actions by a neural network and propose a novel policy gradient algorithm, Quantile-Based Policy Optimization (QPO), together with its variant, Quantile-Based Proximal Policy Optimization (QPPO), for solving deep RL problems with quantile objectives. QPO uses two coupled iterations running on different time scales to simultaneously estimate the quantile and the policy parameters, and it is shown to converge to the globally optimal policy under certain conditions. Our numerical results demonstrate that the proposed algorithms outperform existing baseline algorithms under the quantile criterion.
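To make the two-time-scale idea concrete, the following is a minimal sketch of coupled quantile and policy updates on a toy problem. The bandit environment, softmax policy, step-size schedules, and the indicator-weighted score-function update are illustrative assumptions for this sketch only, not the paper's exact QPO estimator or experimental setup.

```python
# Minimal two-time-scale sketch of quantile-based policy optimization (illustrative only).
# Toy bandit environment and softmax policy; the policy update is a REINFORCE-style
# score-function step gated by the running quantile estimate.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 3
theta = np.zeros(n_actions)   # policy parameters (softmax logits)
q = 0.0                       # running estimate of the alpha-quantile of the return
alpha = 0.25                  # target quantile level


def policy(theta):
    """Softmax action probabilities."""
    z = np.exp(theta - theta.max())
    return z / z.sum()


def sample_return(action):
    """Toy stochastic returns: action 2 has the best 0.25-quantile despite a lower mean than action 1."""
    means = np.array([1.0, 1.5, 1.2])
    stds = np.array([2.0, 3.0, 0.3])
    return rng.normal(means[action], stds[action])


for n in range(1, 50001):
    beta = 1.0 / n**0.6    # fast time scale: quantile tracking
    gamma = 1.0 / n**0.9   # slow time scale: policy parameters (gamma/beta -> 0)

    probs = policy(theta)
    a = rng.choice(n_actions, p=probs)
    G = sample_return(a)

    # Quantile stochastic approximation: drives P(G <= q) toward alpha.
    q += beta * (alpha - float(G <= q))

    # Score-function policy update weighted by the quantile indicator
    # (ascent direction for the quantile objective up to a positive factor).
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += gamma * (alpha - float(G <= q)) * grad_log_pi

print("learned action probabilities:", np.round(policy(theta), 3))
print("estimated quantile of the return:", round(q, 3))
```

In this sketch the quantile estimate is updated on the faster time scale while the policy parameters move slowly, so the policy effectively sees a near-converged quantile estimate at each step; with the quantile criterion the sketch favors the low-variance action rather than the one with the highest mean.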