Deep learning has recently demonstrated promising performance for vision-based parking-slot detection. However, very few existing methods explicitly learn the link information between marking-points, resulting in complex post-processing and erroneous detections. In this paper, we propose an attentional graph neural network based parking-slot detection method, which treats the marking-points in an around-view image as graph-structured data and utilizes a graph neural network to aggregate the neighboring information between marking-points. Without any manually designed post-processing, the proposed method is end-to-end trainable. Extensive experiments have been conducted on a public benchmark dataset, where the proposed method achieves state-of-the-art accuracy. Code is publicly available at \url{https://github.com/Jiaolong/gcn-parking-slot}.
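To illustrate the core idea of aggregating neighboring information between marking-points with an attentional graph neural network, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a fully connected graph over the detected marking-points, a single attention head, and an illustrative 64-dimensional descriptor per point.

\begin{verbatim}
# Minimal sketch of attentional aggregation over marking-point features.
# Assumptions (not from the paper): fully connected graph, single head,
# 64-d descriptors; the class/variable names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MarkingPointAttention(nn.Module):
    """Single-head graph attention over N marking-point descriptors."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (N, dim) descriptors of the detected marking-points.
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Attention weights between every pair of marking-points.
        attn = F.softmax(q @ k.t() / x.size(-1) ** 0.5, dim=-1)
        # Each point aggregates information from all other points;
        # a residual connection keeps the original descriptor.
        return x + attn @ v


if __name__ == "__main__":
    points = torch.randn(6, 64)             # 6 detected marking-points
    aggregated = MarkingPointAttention(64)(points)
    print(aggregated.shape)                  # torch.Size([6, 64])
\end{verbatim}

The aggregated descriptors could then feed a pairwise classifier that predicts whether two marking-points form an entrance line, which is what removes the need for hand-crafted post-processing; see the released code at the URL above for the actual architecture.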