Traditional statistical inference is static, in the sense that the estimate of the quantity of interest does not affect the future evolution of that quantity. In some sequential estimation problems, however, the future values of the quantity to be estimated depend on the estimate of its current value. This type of estimation problem has been formulated as dynamic inference. In this work, we formulate the Bayesian learning problem for dynamic inference, where the unknown quantity-generation model is assumed to be randomly drawn according to a random model parameter. We derive the optimal Bayesian learning rules, both offline and online, that minimize the inference loss. Moreover, learning for dynamic inference can serve as a meta problem: familiar machine learning problems, including supervised learning, imitation learning, and reinforcement learning, can all be cast as its special cases or variants. A good understanding of this unifying meta problem thus sheds light on a broad spectrum of machine learning problems as well.
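To make the feedback loop concrete, the following toy sketch illustrates how dynamic inference differs from static inference: the learner's own estimate steers the quantity's evolution, while an online Bayesian rule updates a posterior over the unknown model parameter and predicts with the posterior mean. This is our illustration, not the paper's formulation; the linear transition y_{t+1} = theta*y_t + 0.5*yhat_t + w_t, the two-point prior over theta, and the Gaussian noise are all assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): the quantity evolves as
#   y_{t+1} = theta * y_t + 0.5 * yhat_t + w_t,   w_t ~ N(0, sigma^2),
# so its future value depends on the learner's current estimate yhat_t.
thetas = np.array([0.2, 0.4])            # candidate values of the model parameter
posterior = np.array([0.5, 0.5])         # prior over the parameter
true_theta = rng.choice(thetas, p=posterior)  # true parameter, unknown to the learner
sigma = 0.1                              # known noise standard deviation

y, yhat, total_loss = 1.0, 1.0, 0.0
n_steps = 200
for t in range(n_steps):
    # Online Bayesian estimate of the next quantity, made before it is
    # realized: the posterior-mean prediction (optimal under squared loss).
    means = thetas * y + 0.5 * yhat      # each candidate model's prediction
    yhat_next = posterior @ means
    # The quantity then evolves under the true model; note that its future
    # value depends on the learner's previous estimate yhat.
    y_next = true_theta * y + 0.5 * yhat + sigma * rng.standard_normal()
    # Bayes update of the posterior over the parameter from the observed
    # transition, using the Gaussian likelihood of each candidate model.
    lik = np.exp(-0.5 * ((y_next - means) / sigma) ** 2)
    posterior = posterior * lik / (posterior @ lik)
    total_loss += (y_next - yhat_next) ** 2
    y, yhat = y_next, yhat_next

print(f"true theta = {true_theta}, posterior = {posterior.round(3)}, "
      f"avg loss = {total_loss / n_steps:.4f}")
```

In this sketch the posterior concentrates on the true parameter as transitions accumulate, and the posterior-mean predictor plays the role of the optimal online Bayesian learning rule for this toy model family.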