The problem of estimating an unknown discrete distribution from its samples is a fundamental tenet of statistical learning. Over the past decade it has attracted significant research effort and has been solved for a variety of divergence measures. Surprisingly, an equally important problem, estimating an unknown Markov chain from its samples, is still far from understood. We consider two problems related to the min-max risk (expected loss) of estimating an unknown $k$-state Markov chain from its $n$ sequential samples: predicting the conditional distribution of the next sample with respect to the KL-divergence, and estimating the transition matrix with respect to a natural loss induced by KL- or a more general $f$-divergence measure. For the first problem, we determine the min-max prediction risk to within a linear factor in the alphabet size, showing it is $\Omega(k\log\log n\ / n)$ and $\mathcal{O}(k^2\log\log n\ / n)$. For the second, if the transition probabilities can be arbitrarily small, then only trivial uniform risk upper bounds can be derived. We therefore consider transition probabilities that are bounded away from zero, and resolve the problem for essentially all sufficiently smooth $f$-divergences, including the KL-, $L_2$-, chi-squared, Hellinger, and alpha-divergences.
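To make the estimation setup concrete, the following is a minimal sketch (not the paper's estimator) of the second problem: drawing $n$ sequential samples from a $k$-state chain whose transition probabilities are bounded away from zero, estimating the transition matrix with a simple add-constant (Laplace-smoothed) estimator, and measuring the loss as a KL-divergence aggregated over rows. The chain `P`, the smoothing constant `alpha`, and the worst-row aggregation are all illustrative assumptions.

```python
import math
import random

def estimate_transition_matrix(samples, k, alpha=1.0):
    """Add-alpha (Laplace-smoothed) estimate of a k-state chain's transition
    matrix from one sample path; smoothing keeps every estimated probability
    strictly positive, so the KL loss below is finite."""
    counts = [[0] * k for _ in range(k)]
    for prev, nxt in zip(samples, samples[1:]):
        counts[prev][nxt] += 1
    est = []
    for row in counts:
        total = sum(row) + alpha * k
        est.append([(c + alpha) / total for c in row])
    return est

def kl_row_loss(p_row, q_row):
    """KL divergence between a true transition row p and its estimate q."""
    return sum(p * math.log(p / q) for p, q in zip(p_row, q_row) if p > 0)

random.seed(0)
k = 3
# A hypothetical true transition matrix with entries bounded away from zero.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

# Draw n sequential samples from the chain.
n = 5000
state = 0
samples = [state]
for _ in range(n - 1):
    state = random.choices(range(k), weights=P[state])[0]
    samples.append(state)

Q = estimate_transition_matrix(samples, k)
# One natural aggregate loss: the worst per-row KL divergence.
loss = max(kl_row_loss(P[i], Q[i]) for i in range(k))
```

As a sanity check, the per-row KL loss shrinks roughly like $1/n$ as the sample path grows, which is the scaling the min-max risk bounds above are quantifying (up to the $\log\log n$ and alphabet-size factors).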