Graph Drawing techniques have been developed over the last few years with the purpose of producing aesthetically pleasing node-link layouts. Recently, the adoption of differentiable loss functions has paved the way for the widespread use of Gradient Descent and related optimization algorithms. In this paper, we propose a novel framework for the development of Graph Neural Drawers (GND), machines that rely on neural computation to construct efficient and complex maps. GNDs are Graph Neural Networks (GNNs) whose learning process can be driven by any provided loss function, such as those commonly employed in Graph Drawing. Moreover, we prove that this mechanism can be guided by loss functions computed by Feedforward Neural Networks, trained on supervision hints that express aesthetic properties, such as the minimization of edge crossings. In this context, we show that GNNs can conveniently be enriched with positional features so as to handle unlabelled vertices as well. We provide a proof of concept by constructing a loss function for edge crossings, and we report quantitative and qualitative comparisons among different GNN models working under the proposed framework.
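To make the core idea concrete, the sketch below illustrates the kind of differentiable Graph Drawing objective the abstract refers to: node coordinates are optimized by plain gradient descent on a stress loss, which penalizes discrepancies between layout distances and graph-theoretic distances. This is a minimal, hypothetical illustration (the toy graph, learning rate, and helper names are our own, not taken from the paper), not the GND architecture itself, in which a GNN would produce the coordinates instead.

```python
import numpy as np

def stress_loss(pos, D):
    """Stress of a 2-D layout: sum over pairs i<j of
    (||p_i - p_j|| - D_ij)^2 / D_ij^2, where D holds target
    graph-theoretic distances."""
    n = len(pos)
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(pos[i] - pos[j])
            loss += (dist - D[i, j]) ** 2 / D[i, j] ** 2
    return loss

def stress_grad(pos, D):
    """Analytic gradient of stress_loss w.r.t. the node positions."""
    n = len(pos)
    grad = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            dist = np.linalg.norm(diff) + 1e-9  # avoid division by zero
            grad[i] += 2.0 * (dist - D[i, j]) / D[i, j] ** 2 * diff / dist
    return grad

# Toy example: a 3-node path graph 0-1-2 with shortest-path distances.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
rng = np.random.default_rng(0)
pos = rng.normal(size=(3, 2))  # random initial layout

initial = stress_loss(pos, D)
for _ in range(500):           # vanilla gradient descent
    pos -= 0.05 * stress_grad(pos, D)
final = stress_loss(pos, D)
```

In the GND framework described above, the hand-crafted `stress_loss` could be replaced by any differentiable criterion, including one computed by a Feedforward Neural Network trained to score aesthetic properties such as edge crossings.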