Traffic state prediction is necessary for many Intelligent Transportation Systems applications. Recent work on the topic has focused on network-wide, multi-step prediction, where state-of-the-art performance is achieved via deep learning models, in particular graph neural network-based models. While the prediction accuracy of deep learning models is high, their robustness raises many safety concerns, given that imperceptible perturbations added to the input can substantially degrade model performance. In this work, we propose an adversarial attack framework that treats the prediction model as a black box, i.e., it assumes no knowledge of the model architecture, training data, or (hyper)parameters. However, we assume that the adversary can query the prediction model as an oracle with any input and obtain the corresponding output. The adversary can then train a substitute model on the collected input-output pairs and generate adversarial signals based on that substitute. To test the attack's effectiveness, two state-of-the-art graph neural network-based models (GCGRNN and DCRNN) are examined. As a result, the adversary can degrade the target model's prediction accuracy by up to $54\%$. For comparison, two conventional statistical models (linear regression and historical average) are also examined. While these two models do not produce high prediction accuracy, they are either negligibly affected (less than $3\%$) by, or immune to, the adversary's attack.
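To make the attack pipeline described above concrete, the sketch below illustrates the three steps in PyTorch: querying the black-box target as an oracle, fitting a substitute model on the collected input-output pairs, and generating adversarial signals from the substitute. Everything here is an illustrative assumption rather than the paper's implementation: `query_oracle` is a toy stand-in for the deployed predictor, the substitute is a small MLP rather than a graph/recurrent network, and the perturbation uses a simple one-step FGSM-style objective.

```python
import torch
import torch.nn as nn


def query_oracle(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for the deployed black-box predictor (e.g., GCGRNN or DCRNN).
    In a real attack this would be a remote query; a toy rule keeps the sketch runnable."""
    return 0.9 * x[..., -12:]  # pretend the 12-step "forecast" echoes a scaled recent window


class SubstituteModel(nn.Module):
    """Small MLP substitute; the adversary is free to choose any architecture."""

    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(), nn.Linear(128, n_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train_substitute(queries: torch.Tensor, epochs: int = 200) -> SubstituteModel:
    """Collect input-output pairs from the oracle and fit the substitute on them."""
    with torch.no_grad():
        targets = query_oracle(queries)
    model = SubstituteModel(queries.shape[1], targets.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(queries), targets)
        loss.backward()
        opt.step()
    return model


def adversarial_signal(model: SubstituteModel, x: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    """One-step FGSM-style perturbation computed on the substitute, then fed to the target.
    The objective used here (inflate the predicted traffic state) is illustrative only."""
    x_adv = x.clone().detach().requires_grad_(True)
    model(x_adv).sum().backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()


if __name__ == "__main__":
    history = torch.randn(256, 24)            # 24 past time steps per query (toy data)
    substitute = train_substitute(history)
    x_clean = torch.randn(1, 24)
    x_attack = adversarial_signal(substitute, x_clean)
    # Transferability check: compare the target's output on clean vs. perturbed input.
    drift = (query_oracle(x_attack) - query_oracle(x_clean)).abs().mean()
    print(f"mean prediction drift under attack: {drift:.4f}")
```

The key design point this sketch captures is that gradients are never taken through the black-box target: they are computed on the substitute, and the resulting perturbation is assumed to transfer to the target model.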