We study the following learning problem with dependent data: observing a trajectory of length $n$ from a stationary Markov chain with $k$ states, one aims to predict the next state. For $3 \leq k \leq O(\sqrt{n})$, using techniques from universal compression, the optimal prediction risk in Kullback-Leibler divergence is shown to be $\Theta(\frac{k^2}{n}\log \frac{n}{k^2})$, in contrast to the optimal rate of $\Theta(\frac{\log \log n}{n})$ for $k=2$ previously shown in Falahatgar et al. (2016). These rates, slower than the parametric rate of $O(\frac{k^2}{n})$, can be attributed to the memory in the data, since the spectral gap of the Markov chain can be arbitrarily small. To quantify the memory effect, we study irreducible reversible chains with a prescribed spectral gap. In addition to characterizing the optimal prediction risk for two states, we show that, as long as the spectral gap is not excessively small, the prediction risk in the Markov model is $O(\frac{k^2}{n})$, which coincides with that of an iid model with the same number of parameters. Extensions to higher-order Markov chains are also obtained.
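For concreteness, the prediction risk can be formalized in the standard minimax form (the exact conventions below, e.g., the infimum over estimators and the supremum over stationary chains, are supplied here as the usual formulation rather than quoted from the text):
$$
\mathrm{Risk}_{k,n} \;=\; \inf_{\hat{M}} \,\sup_{M} \, \mathbb{E}\!\left[ D\!\left( M(\cdot \mid X_n) \,\middle\|\, \hat{M}(\cdot \mid X_1,\ldots,X_n) \right) \right],
$$
where the supremum ranges over stationary $k$-state transition matrices $M$, $\hat{M}(\cdot \mid X_1,\ldots,X_n)$ denotes the learner's estimated conditional distribution of $X_{n+1}$ given the observed trajectory, and $D(P\|Q) = \sum_x P(x)\log\frac{P(x)}{Q(x)}$ is the Kullback-Leibler divergence. For reversible chains, the spectral gap refers to $1-\lambda_2$, where $\lambda_2$ is the second-largest eigenvalue of $M$ (real-valued by reversibility).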