Gyenis and Redei have demonstrated that any prior p on a finite algebra, however chosen, severely restricts the set of posteriors accessible from p by Jeffrey conditioning on a nontrivial partition. Their demonstration involves showing that the set of posteriors not accessible from p in this way (which they call the Bayes blind spot of p) is large with respect to three common measures of size, namely, having cardinality c, having (normalized) Lebesgue measure 1, and being of second Baire category with respect to a natural topology. In the present paper, we establish analogous results for probability measures defined on any infinite sigma algebra of subsets of a denumerably infinite set. However, we have needed to employ distinctly different approaches to determine the cardinality and, especially, the topological and measure-theoretic sizes of the Bayes blind spot in the infinite case. Interestingly, all of the results that we establish for a single prior p continue to hold for the intersection of the Bayes blind spots of countably many priors. This leads us to conjecture that Bayesian learning itself might be just as culpable as the limitations imposed by priors in enabling the existence of large Bayes blind spots.
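For readers unfamiliar with the operation at issue, the following is a sketch of the standard definition of Jeffrey conditioning (the notation here is illustrative, not necessarily the paper's): given a prior p and a partition {E_1, ..., E_n} of the sample space with p(E_i) > 0 for each i, the accessible posteriors q are exactly those of the form

```latex
% Jeffrey conditioning of a prior p on a partition {E_1, ..., E_n},
% with new cell weights q_1, ..., q_n (standard definition; notation assumed):
q(A) \;=\; \sum_{i=1}^{n} q_i \, p(A \mid E_i),
\qquad q_i \ge 0, \quad \sum_{i=1}^{n} q_i = 1, \quad p(E_i) > 0 .
```

Ordinary Bayesian conditioning on a cell E_j is the special case q_j = 1; the Bayes blind spot of p consists of the probability measures q that cannot be written in this form for any nontrivial partition.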