We respond to an argument attempting to show that Assembly Theory diverges from LZ compression. We have formally proved that any implementation of the concept of `copy number' underlying Assembly Theory (AT) and its assembly index (Ai) is equivalent to Shannon Entropy and not fundamentally or methodologically different from algorithms such as ZIP/PNG based on LZ compression. We show that the weak empirical correlation between Ai and LZW, which the authors offered as a defence against the proof that the assembly index calculation method is an LZ scheme, rests on an incomplete and misleading experiment. When the experiment is completed and conducted properly, the asymptotic convergence to LZ compression and Shannon Entropy is evident and aligned with the proof previously offered. This completes both the theoretical and empirical demonstrations that any variation of the copy-number concept underlying AT, which counts the number of object repetitions `to arrive at a measure for life', is equivalent to statistical compression and Shannon Entropy. We demonstrate that the authors' `we-are-better-because-we-are-worse' defence against compression does not withstand basic scrutiny, and that their empirical results separating organic from inorganic compounds have not only been previously reported (sans claims to unify physics and biology) but are also driven solely by molecular length, not by any special feature of life captured by their assembly index. Finally, we show that Ai is a special case of our Block Decomposition Method (BDM), introduced almost a decade earlier, and that arguments attributing special stochastic properties to Ai are misleading: the properties invoked are not unique to Ai but are exactly those that Shannon Entropy is already equipped with and was designed for, which we have also proven equivalent to Ai. This renders AT redundant even in practice, when applied to the authors' own experimental data.
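The equivalence claim can be illustrated with a toy experiment. The sketch below is not the authors' assembly-index algorithm or our BDM; `toy_copy_number_index` is a deliberately simple, hypothetical repetition counter, used only to show that a measure rewarding repeated blocks ranks strings the same way as empirical Shannon entropy and an off-the-shelf LZ-based compressor (zlib):

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Empirical Shannon entropy of s, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def lz_compressed_size(s: str) -> int:
    """Size in bytes of the zlib (LZ77-based) compression of s."""
    return len(zlib.compress(s.encode(), 9))

def toy_copy_number_index(s: str, block: int = 2) -> int:
    """A toy 'copy number'-style score: the number of DISTINCT blocks,
    so repeated blocks (copies) lower the score. Purely illustrative;
    NOT the assembly-index construction itself."""
    blocks = [s[i:i + block] for i in range(0, len(s) - block + 1, block)]
    return len(set(blocks))

# A string full of copies vs. a pseudo-random string of the same length.
repetitive = "AB" * 64
random.seed(0)
irregular = "".join(random.choice("ABCD") for _ in range(128))

# All three measures agree on which string is more 'assembled'/compressible:
for f in (shannon_entropy, lz_compressed_size, toy_copy_number_index):
    print(f.__name__, f(repetitive), f(irregular))
```

On such inputs the three measures induce the same ordering, which is the qualitative behaviour the convergence argument in this paper formalises asymptotically.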