We study a quantity called discrete layered entropy, which approximates the Shannon entropy within a logarithmic gap. Compared to the Shannon entropy, the discrete layered entropy is piecewise linear, approximates the expected length of the optimal one-to-one non-prefix code, and satisfies an elegant conditioning property. These properties make it useful for approximating the Shannon entropy in linear programming and maximum entropy problems, for studying the optimal length of conditional encoding, and for bounding the entropy of monotonic mixture distributions. In particular, it yields the bound $I(X;Y)+\log(I(X;Y)+3.4)+1$ for the strong functional representation lemma, significantly improving upon the previously best known bound.
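As a quick numerical illustration of the improvement, the minimal sketch below compares the bound $I(X;Y)+\log(I(X;Y)+3.4)+1$ stated above against the bound $I(X;Y)+\log(I(X;Y)+1)+4$, which I take to be the previously best known bound for the strong functional representation lemma (due to Li and El Gamal); that reference, the choice of base-2 logarithms, and the function names are assumptions of this sketch, not part of the abstract.

```python
import math

# Improved bound from the abstract: I(X;Y) + log(I(X;Y) + 3.4) + 1.
def sfrl_bound_new(mi: float) -> float:
    return mi + math.log2(mi + 3.4) + 1

# Bound I(X;Y) + log(I(X;Y) + 1) + 4, assumed here to be the previously
# best known bound (Li and El Gamal) -- not stated in the abstract itself.
def sfrl_bound_old(mi: float) -> float:
    return mi + math.log2(mi + 1) + 4

# Tabulate both bounds for a few values of the mutual information I(X;Y).
for mi in [0.0, 0.5, 1.0, 5.0, 20.0]:
    print(f"I(X;Y) = {mi:5.1f}   new bound = {sfrl_bound_new(mi):7.3f}   "
          f"old bound = {sfrl_bound_old(mi):7.3f}")
```

Under these assumptions, the new bound is smaller for every value of $I(X;Y)$; for example, at $I(X;Y)=0$ it evaluates to roughly $2.77$ bits versus $4$ bits for the old one.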