This work studies the threat of adversarial attacks on multivariate probabilistic forecasting models and viable defense mechanisms. Our study discovers a new attack pattern that negatively impacts the forecast of a target time series by making strategic, sparse (imperceptible) modifications to the past observations of a small number of other time series. To mitigate the impact of such attacks, we develop two defense strategies. First, we extend randomized smoothing, a technique previously developed for classification, to multivariate forecasting scenarios. Second, we develop an adversarial training algorithm that learns to create adversarial examples while simultaneously optimizing the forecasting model to improve its robustness against such adversarial simulation. Extensive experiments on real-world datasets confirm that our attack schemes are powerful and that our defense algorithms are more effective than baseline defense mechanisms.
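To make the first defense concrete, the following is a minimal sketch, not the paper's exact procedure, of how randomized smoothing can be carried over from classification to multivariate forecasting: the input history window is perturbed with i.i.d. Gaussian noise and the resulting forecasts are averaged by Monte Carlo. The names `model`, `sigma`, and `n_samples`, and the use of a point forecast, are illustrative assumptions rather than details from the paper.

```python
# Hypothetical sketch of randomized smoothing for a multivariate forecaster.
# Assumes `model` maps a history window of shape (batch, time, series) to a
# forecast tensor; the smoothed forecast is the Monte Carlo mean E_eps[f(x+eps)].
import torch

def smoothed_forecast(model: torch.nn.Module,
                      history: torch.Tensor,   # (batch, time, series)
                      sigma: float = 0.1,      # noise scale (assumed hyperparameter)
                      n_samples: int = 100) -> torch.Tensor:
    forecasts = []
    with torch.no_grad():
        for _ in range(n_samples):
            # Perturb the past observations with isotropic Gaussian noise.
            noisy = history + sigma * torch.randn_like(history)
            forecasts.append(model(noisy))
    # Average the sampled forecasts to obtain the smoothed prediction.
    return torch.stack(forecasts).mean(dim=0)
```

Because the smoothed forecast averages over random perturbations of the history, sparse adversarial edits to a few input series tend to be washed out; the trade-off, as in the classification setting, is governed by the choice of `sigma`.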