A fundamental problem in the analysis of complex systems is obtaining a reliable estimate of the entropy of their probability distributions over the state space. This is difficult because unsampled states can contribute substantially to the entropy, yet they do not contribute to the Maximum Likelihood estimator of entropy, which replaces probabilities with the observed frequencies. Bayesian estimators overcome this obstacle by introducing a model of the low-probability tail of the probability distribution. Which statistical features of the observed data determine the model of the tail, and hence the output of such estimators, remains unclear. Here we show that well-known entropy estimators for probability distributions on discrete state spaces model the structure of the low-probability tail based largely on a few statistics of the data: the sample size, the Maximum Likelihood estimate, the number of coincidences among the samples, and the dispersion of those coincidences. We derive approximate analytical entropy estimators for undersampled distributions based on these statistics, and we use the results to propose an intuitive understanding of how Bayesian entropy estimators work.
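To make the statistics named above concrete, the following is a minimal sketch (not the paper's own code) of how one might compute them from a sample of a discrete distribution: the sample size, the Maximum Likelihood (plug-in) entropy estimate, the number of coincidences, and a dispersion of the coincidences. The precise definition of the dispersion used in the paper is an assumption here; the sketch takes it as the variance of the multiplicities of states observed more than once.

```python
# Sketch: summary statistics of a sample that, per the abstract, largely
# determine the output of Bayesian entropy estimators. Definitions marked
# "assumed" are illustrative choices, not the paper's.
from collections import Counter
import numpy as np

def summary_statistics(samples):
    counts = Counter(samples)              # n_i: occurrences of each observed state
    n = len(samples)                        # sample size N
    freqs = np.array(list(counts.values())) / n

    # Maximum Likelihood (plug-in) entropy estimate, in nats:
    # probabilities replaced by observed frequencies
    h_ml = -np.sum(freqs * np.log(freqs))

    # Number of coincidences: samples beyond the first in each observed state,
    # i.e. N minus the number of distinct states observed
    k_distinct = len(counts)
    coincidences = n - k_distinct

    # Dispersion of the coincidences (assumed definition): variance of the
    # multiplicities of states seen at least twice
    multiplicities = np.array([c for c in counts.values() if c > 1])
    dispersion = multiplicities.var() if multiplicities.size else 0.0

    return {"N": n, "H_ML": h_ml,
            "coincidences": coincidences, "dispersion": dispersion}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Undersampled regime: heavy-tailed distribution, few samples
    samples = rng.zipf(a=2.0, size=200)
    print(summary_statistics(samples))
```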