This work describes the principled design of a theoretical framework leading to fast and accurate algorithmic information measures on finite multisets of finite strings by means of compression. One distinctive feature of our approach is that it manipulates {\em reified}, explicit representations of the very entities and quantities of the theory itself: compressed strings, models, rate-distortion states, minimal sufficient models, joint and relative complexity. To this end, a programmable, recursive data structure called a {\em parselet} models a string as a concatenation of parameterized instantiations drawn from sets of finite strings that encode the regular part of the data. This supports another distinctive feature of this work: the native embodiment of Epicurus' Principle on top of Occam's Razor, so as to produce an explicit model of the data that is both most significant and most general. This model is then iteratively evolved, following the Principle of Minimal Change, into the so-called minimal sufficient model of the data. Parselets may also be used to assign a compression score to any hypothesis about the data. We propose a lossless, rate-distortion-oriented compressed representation whose costly computations, once stored on disk, are immediately reusable; fast merging of these representations is the core routine of our information calculus. Two information measures are deduced: one is exact because it is purely combinatorial, while the other, an approximation of the Kolmogorov complexity of the minimal sufficient model, may occasionally incur slight numerical inaccuracies. Symmetry of information is enforced at the bit level. Whenever possible, parselets are compared with off-the-shelf compressors on real data; some further applications are made possible only by parselets.
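To fix intuitions, the following minimal sketch illustrates one possible reading of such a recursive structure: a string rendered as a concatenation of parameterized instantiations from finite string sets. The types and names ({\tt Literal}, {\tt Choice}, {\tt Parselet}) are illustrative assumptions of this sketch, not the concrete representation developed in this work.
\begin{verbatim}
# Hypothetical sketch only -- not the parselet implementation of this
# work. It models a string as a concatenation of parameterized
# instantiations drawn from finite sets of strings (the regular part).
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class Literal:
    """Irregular data, carried verbatim (no model structure)."""
    value: bytes
    def render(self) -> bytes:
        return self.value

@dataclass
class Choice:
    """A finite set of strings (regular part) plus a parameter
    selecting which element instantiates this position."""
    alternatives: Tuple[bytes, ...]
    index: int
    def render(self) -> bytes:
        return self.alternatives[self.index]

@dataclass
class Parselet:
    """Recursive composition: the modeled string is the
    concatenation of the renderings of its parts."""
    parts: List[Union["Parselet", Choice, Literal]]
    def render(self) -> bytes:
        return b"".join(p.render() for p in self.parts)

# Example: a keyword drawn from a two-element set (the model's
# regular part), followed by a verbatim payload.
req = Parselet([Choice((b"GET", b"PUT"), 0), Literal(b" /a")])
assert req.render() == b"GET /a"
\end{verbatim}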