The Markov assumption in Markov Decision Processes (MDPs) is fundamental to reinforcement learning, influencing both theoretical research and practical applications. Methods that rely on the Bellman equation benefit tremendously from this assumption for policy evaluation and inference, so testing the Markov assumption, or selecting the appropriate order, is an important precursor to further analysis. Existing tests primarily employ sequential hypothesis testing, increasing the candidate order whenever the previously tested one is rejected. However, this methodology accumulates type-I and type-II errors across the sequence of tests, which leads to inconsistent order estimation even with large sample sizes. To tackle this challenge, we develop a procedure that consistently distinguishes the true order from all others. We first propose a novel estimator that equivalently characterizes the Markov assumption of any given order. Based on this estimator, we construct a signal function and an associated signal statistic that achieve estimation consistency. Additionally, the curve pattern of the signal statistic is easy to visualize, which assists the order determination process in practice. Numerical studies validate the efficacy of our approach.
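To make the critiqued baseline concrete, the following is a minimal sketch of the sequential-testing methodology described above, not the authors' proposed signal statistic. It uses a standard G² (likelihood-ratio) test of conditional independence to compare order k against order k+1 on a simulated first-order chain, and accepts the first order that is not rejected; the chain parameters, significance level, and the choice of G² test are all illustrative assumptions.

```python
import numpy as np
from collections import Counter
from scipy.stats import chi2

def g2_order_test(seq, k, m):
    """G^2 likelihood-ratio test of H0: Markov order k vs H1: order k + 1.

    Each length-(k+2) window splits as (u, c, b): u is the state k+1 steps
    back, c the length-k context, b the current state. Under H0, b is
    conditionally independent of u given c; df = m^k * (m - 1)^2.
    """
    n_ucb = Counter()
    for t in range(len(seq) - k - 1):
        n_ucb[tuple(seq[t:t + k + 2])] += 1
    n_uc, n_cb, n_c = Counter(), Counter(), Counter()
    for w, n in n_ucb.items():
        u, c, b = w[0], w[1:1 + k], w[1 + k]
        n_uc[(u, c)] += n
        n_cb[(c, b)] += n
        n_c[c] += n
    g2 = 0.0
    for w, n in n_ucb.items():
        u, c, b = w[0], w[1:1 + k], w[1 + k]
        g2 += 2.0 * n * np.log(n * n_c[c] / (n_uc[(u, c)] * n_cb[(c, b)]))
    df = (m ** k) * (m - 1) ** 2
    return g2, chi2.sf(g2, df)

# Simulate a first-order binary chain (illustrative transition matrix).
rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1], [0.2, 0.8]])
seq = [0]
for _ in range(4999):
    seq.append(int(rng.choice(2, p=P[seq[-1]])))

# Sequential procedure: accept the first non-rejected order.
for k in range(3):
    stat, p = g2_order_test(seq, k, m=2)
    print(f"order {k}: G2 = {stat:.1f}, p = {p:.3g}")
    if p > 0.05:
        print("selected order:", k)
        break
```

Because each step conditions on the outcome of the previous tests, its type-I and type-II errors compound along the sequence, which is precisely the source of inconsistency the abstract identifies.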