We characterize Martin-L\"of randomness and Schnorr randomness in terms of the merging of opinions, along the lines of the Blackwell-Dubins Theorem. After setting up a general framework for defining notions of merging randomness, we focus on finite horizon events, that is, on weak merging in the sense of Kalai-Lehrer. In contrast to Blackwell-Dubins and Kalai-Lehrer, we consider not only the total variational distance but also the Hellinger distance and the Kullback-Leibler divergence. Our main result is a characterization of Martin-L\"of randomness and Schnorr randomness in terms of weak merging and the summable Kullback-Leibler divergence. The main proof idea is that the Kullback-Leibler divergence between $\mu$ and $\nu$, at a given stage of the learning process, is exactly the incremental growth, at that stage, of the predictable process of the Doob decomposition of the $\nu$-submartingale $L(\sigma)=-\ln \frac{\mu(\sigma)}{\nu(\sigma)}$. These characterizations of algorithmic randomness notions in terms of the Kullback-Leibler divergence can be viewed as global analogues of Vovk's theorem on what transpires locally with individual Martin-L\"of $\mu$- and $\nu$-random points and the Hellinger distance between $\mu,\nu$.