We study the problem of sequentially predicting properties of a probabilistic model and its next outcome over an infinite horizon, with the goal of ensuring that the predictions incur only finitely many errors with probability 1. We introduce a general framework that models such prediction problems, provide characterizations for the existence of successful prediction rules, and demonstrate the application of these characterizations in several concrete problem setups, including hypothesis testing, online learning, and risk domination. In particular, our characterizations allow us to recover the findings of Dembo and Peres (1994) with simple and elementary proofs, and to offer a partial resolution to an open problem posed therein.