Graph Drawing techniques have been developed over the last few years with the purpose of producing aesthetically pleasing node-link layouts. Recently, the adoption of differentiable loss functions has paved the way for the widespread use of Gradient Descent and related optimization algorithms. In this paper, we propose a novel framework for the development of Graph Neural Drawers (GNDs), machines that rely on neural computation to construct efficient and complex maps. GNDs are Graph Neural Networks (GNNs) whose learning process can be driven by any provided loss function, such as those commonly employed in Graph Drawing. Moreover, we prove that this mechanism can be guided by loss functions computed by means of Feedforward Neural Networks, on the basis of supervision hints that express beauty properties, such as the minimization of edge crossings. In this context, we show that GNNs can readily be enriched with positional features to deal also with unlabeled vertices. We provide a proof of concept by constructing a loss function for edge crossings, and we present quantitative and qualitative comparisons among different GNN models working under the proposed framework.
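The abstract frames graph drawing as the minimization of a differentiable loss over node coordinates via Gradient Descent. As a minimal illustration of that general idea only (not of the paper's GND architecture or its neural edge-crossing loss), the sketch below optimizes the classic stress loss, which penalizes mismatches between Euclidean distances in the layout and graph-theoretic distances, with plain gradient descent; all function names here are our own.

```python
import numpy as np

def shortest_path_lengths(edges, n):
    """All-pairs graph-theoretic distances via BFS (unweighted, connected graph)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    D = np.zeros((n, n))
    for s in range(n):
        dist = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        for v, d in dist.items():
            D[s, v] = d
    return D

def stress(X, D):
    """Stress loss: sum over pairs of (||x_i - x_j|| - d_ij)^2 / d_ij^2."""
    n = len(X)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            e = np.linalg.norm(X[i] - X[j]) - D[i, j]
            total += e * e / D[i, j] ** 2
    return total

def stress_grad(X, D):
    """Analytic gradient of the stress loss w.r.t. node coordinates."""
    G = np.zeros_like(X)
    n = len(X)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = X[i] - X[j]
            dist = np.linalg.norm(diff) + 1e-9  # guard against coincident nodes
            G[i] += 2.0 * (dist - D[i, j]) / D[i, j] ** 2 * diff / dist
    return G

# Example: lay out a 6-cycle starting from random positions.
rng = np.random.default_rng(0)
edges = [(i, (i + 1) % 6) for i in range(6)]
D = shortest_path_lengths(edges, 6)
X = rng.standard_normal((6, 2))

initial = stress(X, D)
for _ in range(300):
    X -= 0.02 * stress_grad(X, D)  # plain gradient descent step
final = stress(X, D)
```

In the framework the abstract describes, the same loss-driven optimization is instead carried out by a GNN that predicts the coordinates, and the loss itself can be replaced by a learned one (e.g., a Feedforward Neural Network trained to estimate edge crossings).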