The interconnectedness and interdependence of modern graphs are growing ever more complex, demanding enormous resources for processing, storing, communicating, and making decisions over these graphs. In this work, we focus on the task of graph sparsification: producing an edge-reduced graph that is structurally similar to the original while largely preserving various user-defined graph metrics. Existing graph sparsification methods are mostly sampling-based; they generally incur high computational complexity and lack flexibility across different reduction objectives. We present SparRL, the first generic and effective graph sparsification framework enabled by deep reinforcement learning. SparRL adapts easily to different reduction goals and promises complexity independent of graph size. Extensive experiments show that SparRL outperforms all prevailing sparsification methods in producing high-quality sparsified graphs with respect to a variety of objectives.
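To make the contrast concrete, the sampling-based baselines the abstract refers to can be sketched as follows. This is a minimal, hypothetical illustration of uniform random edge sampling (the simplest such baseline), not SparRL itself; the function names and graph representation are assumptions for the example.

```python
import random

def sparsify_random(edges, keep_ratio, seed=0):
    """Baseline sampling-based sparsifier: keep a uniform random
    subset of edges. `edges` is a list of (u, v) pairs and
    `keep_ratio` in (0, 1] is the fraction of edges retained.
    Illustrative only -- not the SparRL method."""
    rng = random.Random(seed)
    k = max(1, int(len(edges) * keep_ratio))
    return rng.sample(edges, k)

def degree_sequence(edges, n):
    """Degree of each of the n nodes -- one example of a graph
    metric one might want the sparsified graph to preserve."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

# A small 5-node graph: a cycle with two chords.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3)]
kept = sparsify_random(edges, keep_ratio=0.5)
```

Such a sampler treats every edge identically regardless of the reduction objective; a learned policy like SparRL's instead selects edges conditioned on the metric to be preserved.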