Can we use machine learning to compress graph data? The absence of ordering in graphs poses a significant challenge to conventional compression algorithms, limiting their attainable gains as well as their ability to discover relevant patterns. On the other hand, most graph compression approaches rely on domain-dependent handcrafted representations and cannot adapt to different underlying graph distributions. This work aims to establish the necessary principles a lossless graph compression method should follow to approach the entropy storage lower bound. Instead of making rigid assumptions about the graph distribution, we formulate the compressor as a probabilistic model that can be learned from data and generalises to unseen instances. Our "Partition and Code" (PnC) framework entails three steps: first, a partitioning algorithm decomposes the graph into subgraphs; then, these are mapped to the elements of a small dictionary, on which we learn a probability distribution; and finally, an entropy encoder translates the representation into bits. All the components (partitioning, dictionary, and distribution) are parametric and can be trained with gradient descent. We theoretically compare the compression quality of several graph encodings and prove, under mild conditions, that PnC achieves compression gains that grow either linearly or quadratically with the number of vertices. Empirically, PnC yields significant compression improvements on diverse real-world networks.
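To make the three-step pipeline concrete, here is a minimal, non-differentiable sketch in Python. It is an illustration only, not the paper's method: the greedy partitioner, the Weisfeiler-Lehman-hash dictionary keys, and the count-based distribution are hypothetical stand-ins for PnC's learned, parametric components, and the sketch prices only the sequence of dictionary atoms (a complete encoding must also pay for the dictionary itself and for the edges cut between blocks). In rough minimum-description-length terms, the quantity such a scheme targets is L(G) ≈ L(partition) + L(dictionary) + Σᵢ −log₂ p(aᵢ) + L(cut edges), where the aᵢ are the dictionary atoms assigned to the subgraphs.

```python
# Minimal, illustrative Partition-and-Code style sketch (toy stand-ins only;
# PnC learns each component with gradient descent instead).
import math
from collections import Counter
import networkx as nx

def greedy_partition(G, max_size=4):
    """Toy partitioner: greedily grow connected blocks of bounded size.
    (Hypothetical stand-in for PnC's learned partitioning algorithm.)"""
    remaining = set(G.nodes)
    blocks = []
    while remaining:
        seed = next(iter(remaining))
        block, frontier = {seed}, [seed]
        while frontier and len(block) < max_size:
            u = frontier.pop()
            for v in G.neighbors(u):
                if v in remaining and v not in block and len(block) < max_size:
                    block.add(v)
                    frontier.append(v)
        remaining -= block
        blocks.append(G.subgraph(block).copy())
    return blocks

def build_dictionary(subgraphs):
    """Map each subgraph to a canonical key (WL graph hash) and count how
    often each key occurs; the keys play the role of dictionary atoms."""
    keys = [nx.weisfeiler_lehman_graph_hash(H) for H in subgraphs]
    return keys, Counter(keys)

def code_length_bits(keys, counts):
    """Shannon code length of the atom sequence under the empirical atom
    distribution (PnC learns this distribution): sum of -log2 p(atom)."""
    total = sum(counts.values())
    return sum(-math.log2(counts[k] / total) for k in keys)

G = nx.karate_club_graph()
blocks = greedy_partition(G)
keys, counts = build_dictionary(blocks)
print(f"{len(blocks)} blocks, {len(counts)} dictionary atoms, "
      f"{code_length_bits(keys, counts):.1f} bits for the atom sequence")
```

The sketch shows where the gains come from: the more skewed the distribution over recurring substructures, the fewer bits the entropy coder spends per atom, which is why learning the partition, the dictionary, and the distribution jointly matters.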