Shannon entropy, a cornerstone of information theory, statistical physics, and inference methods, is uniquely identified by the Shannon-Khinchin or Shore-Johnson axioms. Generalizations of Shannon entropy, motivated by the study of non-extensive or non-ergodic systems, relax some of these axioms and lead to entropy families indexed by certain `entropic' parameters. In general, selecting these parameters requires prior knowledge of the system or leads to inconsistencies. Here we introduce a simple axiom for any entropy family: namely, that no entropic parameter can be inferred from a completely uninformative (uniform) probability distribution. When applied to the Uffink-Jizba-Korbel and Hanel-Thurner entropies, the axiom selects only R\'enyi entropy as viable. It also extends consistency with the Maximum Likelihood principle, which can then be generalized to estimate the entropic parameter purely from data, as we confirm numerically. Remarkably, in a generalized maximum-entropy framework the axiom implies that the maximized log-likelihood always equals minus Shannon entropy, even if the inferred probability distribution maximizes a generalized entropy rather than Shannon's, solving a series of problems encountered in previous approaches.
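The two numerical claims in the abstract, that an entropic parameter can be estimated purely from data by maximum likelihood, and that it cannot be inferred from a uniform distribution, can be illustrated with a toy sketch. The escort-type one-parameter family $p_i(q) \propto w_i^q$ used below is a hypothetical stand-in chosen only for illustration, not the construction developed in the paper; its purpose is to show that the likelihood picks out $q$ when the reference weights $w$ are non-uniform, and is completely flat in $q$ when $w$ is uniform.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def escort(w, q):
    # Escort (power) family p_i ~ w_i**q -- illustrative only,
    # not the paper's actual one-parameter entropy family.
    p = w ** q
    return p / p.sum()

def neg_log_likelihood(q, w, counts):
    # Minus log-likelihood of observed counts under the escort family.
    return -np.sum(counts * np.log(escort(w, q)))

rng = np.random.default_rng(0)

# Non-uniform reference weights: the data constrain q.
w = np.array([0.5, 0.25, 0.15, 0.10])
true_q = 1.7
counts = rng.multinomial(10_000, escort(w, true_q))
res = minimize_scalar(neg_log_likelihood, bounds=(0.1, 5.0),
                      args=(w, counts), method="bounded")
print(f"non-uniform case: q_hat = {res.x:.3f} (true q = {true_q})")

# Uniform reference weights: escort(w, q) is uniform for every q,
# so the log-likelihood is flat in q and no entropic parameter
# can be inferred -- the content of the proposed axiom.
w_u = np.full(4, 0.25)
counts_u = rng.multinomial(10_000, escort(w_u, true_q))
lls = [-neg_log_likelihood(q, w_u, counts_u) for q in (0.5, 1.0, 2.0, 3.0)]
print("uniform case, log-likelihood vs q:", np.round(lls, 3))
\end{verbatim}

Running the sketch recovers a $\hat q$ close to the generating value in the non-uniform case, while in the uniform case the log-likelihood values coincide for all tested $q$, mirroring the non-identifiability the axiom demands.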