This work studies the threat of adversarial attacks on multivariate probabilistic forecasting models and viable defense mechanisms. Our study uncovers a new attack pattern that degrades the forecast of a target time series by making strategic, sparse (and hence imperceptible) modifications to the past observations of a small number of other time series. To mitigate the impact of such attacks, we develop two defense strategies. First, we extend randomized smoothing, a technique previously developed for classification, to the multivariate forecasting setting. Second, we develop an adversarial training algorithm that learns to generate adversarial examples while simultaneously optimizing the forecasting model to improve its robustness against such simulated attacks. Extensive experiments on real-world datasets confirm that our attack schemes are powerful and that our defense algorithms are more effective than baseline defense mechanisms.
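To make the attack pattern concrete, below is a minimal PyTorch sketch of an indirect, sparse attack of the kind described above; it is not the paper's exact algorithm. All names are hypothetical: `model` stands for any differentiable forecaster mapping a history window `X` of shape `[T, N]` (T past steps, N series) to a forecast of shape `[H, N]`, and `target`, `k`, and `budget` are illustrative knobs. For a probabilistic model, `model` could return the predictive mean.

```python
# Illustrative sketch: sparse, indirect adversarial attack on a forecaster.
# Assumes a differentiable `model`: history [T, N] -> forecast [H, N].
import torch

def sparse_indirect_attack(model, X, target, k=10, budget=0.1, steps=50, lr=0.01):
    """Perturb at most k entries of the *other* series' histories so that
    the forecast of series `target` is pushed downward."""
    delta = torch.zeros_like(X, requires_grad=True)
    mask = torch.ones_like(X)
    mask[:, target] = 0.0                      # never touch the target series itself
    for _ in range(steps):
        y_hat = model(X + delta * mask)        # forecast under perturbed history
        loss = y_hat[:, target].mean()         # minimize -> drive target forecast down
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad           # gradient step on the history
            delta.clamp_(-budget, budget)      # keep each change small (imperceptible)
            # hard sparsity projection: keep only the k largest-magnitude entries
            flat = (delta * mask).abs().flatten()
            keep = torch.zeros_like(flat)
            keep[flat.topk(min(k, flat.numel())).indices] = 1.0
            delta.mul_(keep.view_as(delta))
            delta.grad.zero_()
    return (X + delta * mask).detach()
```

The top-k projection after each gradient step is what makes the modification sparse: only a handful of past observations, all on non-target series, are ever changed.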
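The two defenses can likewise be sketched generically; the snippets below are hedged illustrations of the named techniques under the same hypothetical `model` interface, not the paper's exact algorithms. The `attack` argument is any attack routine, e.g. a closure around the sketch above with a fixed target.

```python
# Illustrative sketches of the two defenses: randomized smoothing adapted to
# forecasting, and an adversarial-training step that mixes clean and attacked losses.
import torch
import torch.nn.functional as F

def smoothed_forecast(model, X, sigma=0.1, n_samples=100):
    """Randomized smoothing for forecasting: forecast under i.i.d. Gaussian
    noise on the history and aggregate with a robust statistic (the median)."""
    with torch.no_grad():
        noisy = X.unsqueeze(0) + sigma * torch.randn(n_samples, *X.shape)
        forecasts = torch.stack([model(x) for x in noisy])   # [n_samples, H, N]
    return forecasts.median(dim=0).values

def adversarial_training_step(model, X, y, attack, optimizer, alpha=0.5):
    """One adversarial-training step: the inner loop simulates the attacker,
    the outer loop fits the model on both clean and adversarial histories."""
    X_adv = attack(model, X)                   # craft an adversarial history
    loss = alpha * F.mse_loss(model(X), y) \
         + (1 - alpha) * F.mse_loss(model(X_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Smoothing trades some clean accuracy (controlled by `sigma`) for stability of the forecast under small input perturbations, while adversarial training directly exposes the model to simulated attacks during optimization.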