We consider the problem of learning the optimal threshold policy for control problems. Threshold policies make control decisions by evaluating whether an element of the system state exceeds a certain threshold, whose value is determined by the other elements of the system state. By leveraging the monotone property of threshold policies, we prove that their policy gradients have a surprisingly simple expression. We use this simple expression to build an off-policy actor-critic algorithm for learning the optimal threshold policy. Simulation results show that our algorithm significantly outperforms other reinforcement learning algorithms due to its ability to exploit the monotone property. In addition, we show that the Whittle index, a powerful tool for restless multi-armed bandit problems, is equivalent to the optimal threshold policy for an alternative problem. This observation leads to a simple algorithm that finds the Whittle index by learning the optimal threshold policy in the alternative problem. Simulation results show that our algorithm learns the Whittle index much faster than algorithms from several recent studies that learn the Whittle index through indirect means.
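To make the notion of a threshold policy concrete, the following is a minimal sketch, not the paper's implementation; the function names and the linear threshold rule are illustrative assumptions. The policy takes the active action exactly when one designated state element exceeds a threshold computed from the remaining elements, which is the structure described above.

```python
# Illustrative sketch of a threshold policy (hypothetical names, not from the paper).
# The policy activates (returns 1) when the designated state element exceeds a
# threshold determined by the other elements of the system state.

def threshold_policy(state, threshold_fn):
    """Return 1 if the first state element exceeds the threshold
    computed from the remaining elements, else 0."""
    designated, rest = state[0], state[1:]
    return 1 if designated > threshold_fn(rest) else 0

# Assumed example threshold: a simple linear function of the remaining elements.
def linear_threshold(rest):
    return 0.5 * sum(rest)

print(threshold_policy([3.0, 1.0, 2.0], linear_threshold))  # 3.0 > 1.5, prints 1
print(threshold_policy([1.0, 1.0, 2.0], linear_threshold))  # 1.0 < 1.5, prints 0
```

The monotone property referenced in the abstract corresponds to the fact that, for fixed remaining elements, the action is nondecreasing in the designated state element.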