We characterize conditions under which collections of distributions on $\{0,1\}^\mathbb{N}$ admit uniform estimation of their means. Prior work by Vapnik and Chervonenkis (1971) focused on uniform convergence of the empirical mean estimator, leading to the property known as $P$-Glivenko-Cantelli. We extend this framework by moving beyond the empirical mean estimator and introducing Uniform Mean Estimability, also called $UME$-learnability, which captures when a collection permits uniform mean estimation by some estimator, not necessarily the empirical mean. We work in the space formed by the mean vectors of the distributions in the collection; for each distribution, the mean vector records its expected value in every coordinate. We show that separability of the set of mean vectors is a sufficient condition for $UME$-learnability. It is not, however, a necessary condition: we construct a collection of distributions whose mean vectors are non-separable yet which is $UME$-learnable, using techniques fundamentally different from those in our separability-based analysis. Finally, we establish that countable unions of $UME$-learnable collections are $UME$-learnable, resolving a conjecture posed in Cohen et al. (2025).
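For concreteness, one natural formalization of these notions is the following sketch; the symbols $\mu(p)$ and $\hat{\mu}_n$, the choice of the $\ell_\infty$ distance, and convergence of the error in expectation are illustrative assumptions, not fixed by the abstract above. For a distribution $p$ on $\{0,1\}^{\mathbb{N}}$, the mean vector is
\[
\mu(p) \;=\; \bigl(\mathbb{E}_{x\sim p}[x_i]\bigr)_{i\in\mathbb{N}} \;\in\; [0,1]^{\mathbb{N}},
\]
and a collection $\mathcal{P}$ of such distributions would be $UME$-learnable if there exist estimators $\hat{\mu}_n\colon \bigl(\{0,1\}^{\mathbb{N}}\bigr)^n \to [0,1]^{\mathbb{N}}$ satisfying
\[
\sup_{p\in\mathcal{P}} \;\mathbb{E}_{x_1,\dots,x_n \sim p}\Bigl[\,\bigl\lVert \hat{\mu}_n(x_1,\dots,x_n) - \mu(p) \bigr\rVert_\infty \Bigr] \;\xrightarrow[n\to\infty]{}\; 0,
\]
where $x_1,\dots,x_n$ are drawn i.i.d. from $p$. The $P$-Glivenko-Cantelli setting corresponds to requiring this with $\hat{\mu}_n$ fixed to be the empirical mean.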