We study a quantity called discrete layered entropy, which approximates the Shannon entropy within a logarithmic gap. Compared to the Shannon entropy, the discrete layered entropy is piecewise linear, approximates the expected length of the optimal one-to-one non-prefix-free encoding, and satisfies an elegant conditioning property. These properties make it useful for approximating the Shannon entropy in linear programming, studying the optimal length of conditional encoding, and bounding the entropy of monotonic mixture distributions. In particular, it yields a bound for the strong functional representation lemma that improves upon the best known bound whenever the mutual information is at least 2.
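As a concrete illustration (not taken from the paper), the sketch below computes the expected length of the optimal one-to-one non-prefix-free binary encoding mentioned in the abstract, using the standard fact that the i-th most probable symbol can be assigned the i-th shortest binary string (including the empty string), which has length ⌊log₂ i⌋. Comparing this length against the Shannon entropy shows the kind of logarithmic gap the abstract refers to. The function names and the example distribution are hypothetical.

```python
# Minimal sketch: optimal one-to-one (non-prefix-free) binary code length
# versus Shannon entropy. Not the paper's definition of discrete layered
# entropy; only an illustration of the quantity it approximates.

import math

def shannon_entropy(p):
    """Shannon entropy in bits of a probability mass function."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def optimal_one_to_one_length(p):
    """Expected length of the optimal one-to-one binary code.

    Sort probabilities in decreasing order and give the i-th symbol
    (1-indexed) the i-th shortest binary string: the empty string, '0',
    '1', '00', '01', ..., which has length floor(log2(i)).
    """
    sorted_p = sorted(p, reverse=True)
    return sum(q * math.floor(math.log2(i))
               for i, q in enumerate(sorted_p, start=1))

# Example: a small skewed distribution on 8 symbols (sums to 1).
pmf = [0.4, 0.2, 0.15, 0.1, 0.06, 0.04, 0.03, 0.02]
H = shannon_entropy(pmf)
L = optimal_one_to_one_length(pmf)
print(f"Shannon entropy H(X)        = {H:.4f} bits")
print(f"Optimal one-to-one length L = {L:.4f} bits")  # L <= H for one-to-one codes
```

Because one-to-one codes need not be prefix-free, the expected length L can fall below H(X), and the two are known to differ by at most a logarithmic term in H(X).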