We design a simple reinforcement learning (RL) agent that implements an optimistic version of $Q$-learning and establish through regret analysis that this agent can operate with some level of competence in any environment. While we leverage concepts from the literature on provably efficient RL, we consider a general agent-environment interface and provide a novel agent design and analysis. This level of generality positions our results to inform the design of future agents for operation in complex real environments. We establish that, as time progresses, our agent performs competitively relative to policies that require longer times to evaluate. The time it takes to approach asymptotic performance is polynomial in the complexity of the agent's state representation and the time required to evaluate the best policy that the agent can represent. Notably, there is no dependence on the complexity of the environment. The ultimate per-period performance loss of the agent is bounded by a constant multiple of a measure of distortion introduced by the agent's state representation. This work is the first to establish that an algorithm approaches this asymptotic condition within a tractable time frame.
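To make the high-level description more concrete, the sketch below illustrates one common form of optimistic $Q$-learning over a discrete agent-state representation, combining optimistic value initialization with a count-based exploration bonus. The class name, parameters (`discount`, `bonus_scale`), and the specific learning-rate and bonus schedules are illustrative assumptions for this sketch, not the exact agent or analysis developed in the paper.

```python
import numpy as np


class OptimisticQLearningAgent:
    """Minimal sketch of optimistic Q-learning over agent states.

    Hypothetical parameters: `num_states` and `num_actions` describe the
    agent's (possibly lossy) state representation, `bonus_scale` controls
    the optimism bonus, and `discount` is the discount factor. This is an
    illustration of the general technique, not the paper's algorithm.
    """

    def __init__(self, num_states, num_actions, discount=0.99,
                 bonus_scale=1.0, optimistic_init=None):
        self.num_actions = num_actions
        self.discount = discount
        self.bonus_scale = bonus_scale
        # Optimistic initialization: start Q-values at an upper bound on
        # achievable value (assuming rewards in [0, 1]).
        if optimistic_init is None:
            optimistic_init = 1.0 / (1.0 - discount)
        self.q = np.full((num_states, num_actions), optimistic_init)
        self.visit_counts = np.zeros((num_states, num_actions))

    def act(self, state):
        # Act greedily with respect to the optimistic Q estimates.
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        self.visit_counts[state, action] += 1
        n = self.visit_counts[state, action]
        # Learning rate and exploration bonus both shrink with visit count,
        # so optimism fades as the agent gathers experience.
        step_size = 1.0 / np.sqrt(n)
        bonus = self.bonus_scale / np.sqrt(n)
        target = reward + bonus + self.discount * np.max(self.q[next_state])
        self.q[state, action] += step_size * (target - self.q[state, action])
```

The optimistic initialization and shrinking bonus are what drive systematic exploration in this style of agent: under-visited state-action pairs retain inflated value estimates, so the greedy policy is drawn toward them until their counts grow.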