Data deduplication saves storage space by identifying and removing repeats in the data stream. Compared with traditional compression methods, data deduplication schemes are more time efficient and are thus widely used in large-scale storage systems. In this paper, we provide an information-theoretic analysis of the performance of deduplication algorithms on data streams in which repeats are not exact. We introduce a source model in which probabilistic substitutions are considered. More precisely, each symbol in a repeated string is substituted with a given edit probability. Deduplication algorithms under both the fixed-length scheme and the variable-length scheme are studied. The fixed-length deduplication algorithm is shown to be unsuitable for the proposed source model, as it does not take the edit probability into account. Two modifications are proposed and shown to perform within a constant factor of optimal, given knowledge of the source model parameters. We also study the conventional variable-length deduplication algorithm and show that as the source entropy becomes smaller, the length of the compressed string becomes negligible relative to the length of the uncompressed string, leading to high compression ratios.
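To make the fixed-length scheme concrete, the sketch below illustrates exact-match fixed-length deduplication in general terms; it is not the specific algorithm analyzed in this paper, and the function name, chunk size, and use of SHA-256 fingerprints are assumptions made for illustration only. The stream is cut into fixed-size chunks, and a chunk is stored only on its first occurrence; later identical chunks are replaced by references. Because a single substituted symbol changes a chunk's fingerprint, an approximate repeat goes undetected, which is the intuition behind why this scheme is unsuitable when edits occur with nonzero probability.

```python
import hashlib

def fixed_length_dedup(data: bytes, chunk_size: int = 8):
    """Illustrative sketch: split `data` into fixed-size chunks and
    replace exact repeats with references to the first occurrence."""
    seen = {}      # chunk fingerprint -> index of first occurrence
    output = []    # compressed representation: literals and back-references
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        if key in seen:
            output.append(("ref", seen[key]))    # exact repeat: store a pointer only
        else:
            seen[key] = len(seen)
            output.append(("literal", chunk))    # first occurrence: store the chunk itself
    return output

# The repeated block "ABCDEFGH" is stored once and referenced twice;
# a single edited symbol in a repeat would instead produce a new literal.
compressed = fixed_length_dedup(b"ABCDEFGH" * 3 + b"XYZWABCD")
print(compressed)
```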