The goal of this technical note is to introduce a new finite-time convergence analysis of temporal difference (TD) learning based on stochastic linear system models. TD-learning is a fundamental reinforcement learning (RL) algorithm for evaluating a given policy by estimating the corresponding value function of a Markov decision process. While there has been a series of successful works on the theoretical analysis of TD-learning, it was not until recently that researchers obtained guarantees on its statistical efficiency by developing finite-time error bounds. In this note, we propose a simple control-theoretic finite-time analysis of TD-learning that exploits linear system models and standard notions from the linear systems community. The proposed work provides new, simple templates for RL analysis, as well as additional insights into TD-learning and RL based on ideas from control theory.
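For context, a standard way to cast TD-learning as a stochastic linear system, the kind of model such an analysis builds on, can be sketched as follows (our own illustrative notation, with feature matrix $\Phi$, stationary-distribution matrix $D$, transition matrix $P$, discount factor $\gamma$, and expected reward vector $R$; the note's exact formulation may differ). With linear function approximation $V_\theta = \Phi \theta$, the TD(0) update can be written as

$$\theta_{k+1} = \theta_k + \alpha_k \left( b_k - A_k \theta_k \right), \qquad A_k = \phi(s_k)\big(\phi(s_k) - \gamma \phi(s_{k+1})\big)^\top, \quad b_k = \phi(s_k) r_k,$$

and under i.i.d. sampling from the stationary distribution, $\mathbb{E}[A_k] = \Phi^\top D (I - \gamma P) \Phi$ and $\mathbb{E}[b_k] = \Phi^\top D R$, so the mean iterate evolves as the discrete-time linear system $\bar{\theta}_{k+1} = (I - \alpha_k A)\bar{\theta}_k + \alpha_k b$, whose stability and finite-time behavior can be studied with standard linear-systems tools.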