This paper follows cognitive studies to investigate a graph representation for sketches, where information about individual strokes, i.e., parts of a sketch, is encoded on vertices and inter-stroke information on edges. The resulting graph representation facilitates the training of a Graph Neural Network for classification tasks, and achieves accuracy and robustness comparable to the state of the art against translation and rotation attacks, as well as against stronger attacks on graph vertices and topologies, i.e., modification and addition of strokes, all without resorting to adversarial training. Prior studies on sketches, e.g., graph transformers, encode stroke control points on vertices, which are not invariant to spatial transformations. In contrast, we encode vertices and edges using pairwise distances among control points to achieve invariance. Compared with existing generative sketch models for one-shot classification, our method does not rely on run-time statistical inference. Lastly, the proposed representation enables the generation of novel sketches that are structurally similar to, yet separable from, those in the existing dataset.
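To make the invariant encoding concrete, below is a minimal sketch (not the paper's implementation) of how pairwise distances among control points could yield translation- and rotation-invariant vertex and edge features. The resampling count `k`, the function names, and the fixed-length resampling scheme are illustrative assumptions.

```python
import numpy as np

def stroke_vertex_feature(points, k=8):
    """Vertex feature for one stroke: pairwise distances among k
    resampled control points. Distances are invariant to translation
    and rotation of the whole sketch."""
    idx = np.linspace(0, len(points) - 1, k).round().astype(int)
    p = np.asarray(points, dtype=float)[idx]               # (k, 2)
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)   # (k, k) distance matrix
    return d[np.triu_indices(k, 1)]                        # upper triangle, k*(k-1)/2 values

def inter_stroke_edge_feature(points_a, points_b, k=8):
    """Edge feature between two strokes: pairwise distances between
    their resampled control points, likewise transformation-invariant."""
    ia = np.linspace(0, len(points_a) - 1, k).round().astype(int)
    ib = np.linspace(0, len(points_b) - 1, k).round().astype(int)
    pa = np.asarray(points_a, dtype=float)[ia]
    pb = np.asarray(points_b, dtype=float)[ib]
    return np.linalg.norm(pa[:, None] - pb[None, :], axis=-1).ravel()

# Toy check: the feature is unchanged under a rigid rotation + translation.
theta = np.pi / 5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
stroke = np.random.rand(30, 2)
f1 = stroke_vertex_feature(stroke)
f2 = stroke_vertex_feature(stroke @ R.T + np.array([3.0, -1.0]))
assert np.allclose(f1, f2)
```

The resulting feature vectors can then populate the vertex and edge attributes of the sketch graph consumed by the GNN.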