Slowly changing variables in a continuous state space constitute an important class of reinforcement learning problems and arise in many domains, such as modeling a climate control system in which temperature, humidity, and other variables change slowly over time. However, this subject has received little attention in recent studies. Classical methods and their variants, such as Dynamic Programming with Tile Coding, which discretizes the state space, fail to handle slowly changing variables: they cannot capture the tiny change in each transition step, because it is computationally expensive or infeasible to establish an extremely fine-grained grid. In this paper, we introduce a Hyperspace Neighbor Penetration (HNP) approach that solves this problem. HNP captures, at each transition step, the state's partial "penetration" into its neighboring hyper-tiles in the gridded hyperspace, so a transition does not need to cross a tile boundary for the change to be captured. HNP therefore allows a very coarse grid, which keeps the computation feasible. HNP assumes near-linearity of the transition function within a local region, a condition that is commonly satisfied. In summary, HNP can be orders of magnitude more efficient than classical methods in handling slowly changing variables in reinforcement learning. We have deployed an industrial implementation of HNP with great success.
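To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of how a continuous state could be spread fractionally across its hyper-tile and the neighboring tiles it partially penetrates, rather than snapped to a single tile; the function name, grid representation, and boundary handling are illustrative assumptions.

```python
import numpy as np

def hnp_tile_weights(state, grid_edges):
    """Hypothetical sketch of neighbor penetration on a coarse grid.

    Instead of assigning the continuous state to one hyper-tile, split its
    weight between the occupied tile and the neighbor it penetrates toward,
    in proportion to the fractional position inside the tile.

    state: 1-D array of continuous variable values (one per dimension).
    grid_edges: list of 1-D arrays giving the coarse tile boundaries per dimension.
    Returns a dict mapping hyper-tile index tuples to weights summing to 1.
    """
    per_dim = []
    for x, edges in zip(state, grid_edges):
        # Locate the tile containing x and its fractional position inside it.
        i = int(np.clip(np.searchsorted(edges, x) - 1, 0, len(edges) - 2))
        frac = (x - edges[i]) / (edges[i + 1] - edges[i])
        j = min(i + 1, len(edges) - 2)
        if j == i:
            # Last tile along this dimension: no upper neighbor to penetrate.
            per_dim.append([(i, 1.0)])
        else:
            # Weight split: the deeper x has moved toward the neighbor,
            # the more weight the neighbor receives.
            per_dim.append([(i, 1.0 - frac), (j, frac)])

    # Combine per-dimension splits into joint hyper-tile weights (outer product).
    weights = {(): 1.0}
    for splits in per_dim:
        weights = {key + (idx,): w * p
                   for key, w in weights.items()
                   for idx, p in splits}
    return weights

# Example: a slowly changing temperature/humidity state on a coarse 2-D grid.
edges = [np.array([15.0, 20.0, 25.0, 30.0]),   # temperature tile boundaries
         np.array([0.0, 0.25, 0.5, 0.75, 1.0])]  # humidity tile boundaries
print(hnp_tile_weights(np.array([21.0, 0.30]), edges))
print(hnp_tile_weights(np.array([21.2, 0.31]), edges))  # tiny step still shifts the weights
```

The point of the sketch is that two successive states falling inside the same coarse tile still produce different weight vectors, so a tiny per-step change remains visible to a dynamic-programming update without an extremely fine grid.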