A recent body of literature has investigated the effect of data poisoning attacks on data-driven control methods. Data poisoning attacks are well known in the Machine Learning community, which, however, relies on assumptions, such as cross-sample independence, that in general do not hold for dynamical systems. As a consequence, both attacks and detection methods operate differently than in the i.i.d. setting studied in classical supervised problems. In particular, data poisoning attacks against data-driven control methods can fundamentally be seen as changing the behavior of the dynamical system described by the data. In this work, we study this phenomenon through the lens of statistical testing, and verify the detectability of different attacks for a linear dynamical system. Based on the arguments presented here, we propose a stealthy data poisoning attack that can escape classical detection tests, and conclude by showing the effectiveness of the proposed attack.
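To make the detection setting concrete, the following minimal sketch (not the paper's construction) identifies a scalar linear system from possibly poisoned data via least squares and applies a chi-squared test on the residual energy, a classical detection test of the kind the abstract refers to. The system parameters, the naive bias attack, and the known noise level are all illustrative assumptions.

```python
# Minimal sketch: least-squares identification of a scalar linear system
# x_{t+1} = a x_t + b u_t + w_t from possibly poisoned data, followed by a
# chi-squared test on the residual energy. All numbers below are assumptions
# for illustration, not values from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a_true, b_true, sigma = 0.8, 1.0, 0.1   # assumed dynamics and noise level
T = 200                                  # number of collected samples

# Simulate the clean trajectory with Gaussian process noise.
u = rng.standard_normal(T)
x = np.zeros(T + 1)
for t in range(T):
    x[t + 1] = a_true * x[t] + b_true * u[t] + sigma * rng.standard_normal()

# Naive (non-stealthy) poisoning: bias a stretch of the recorded outputs,
# which breaks the consistency of the dynamics across those samples.
x_poisoned = x.copy()
x_poisoned[50:70] += 0.5

def residual_test(x_data, u_data, sigma, alpha=0.05):
    """Fit (a, b) by least squares, then chi-squared test on the residuals."""
    Phi = np.column_stack([x_data[:-1], u_data])          # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, x_data[1:], rcond=None)
    res = x_data[1:] - Phi @ theta                        # one-step residuals
    stat = res @ res / sigma**2                           # ~ chi2(T-2) if clean
    thr = stats.chi2.ppf(1 - alpha, df=len(res) - 2)
    return theta, stat, stat > thr

theta_c, s_c, flag_c = residual_test(x, u, sigma)
theta_p, s_p, flag_p = residual_test(x_poisoned, u, sigma)
print(f"clean:    a,b = {theta_c.round(3)}, stat = {s_c:.1f}, flagged = {flag_c}")
print(f"poisoned: a,b = {theta_p.round(3)}, stat = {s_p:.1f}, flagged = {flag_p}")
```

A crude bias attack like this inflates the residual energy and is flagged by the test; a stealthy attack, by contrast, would perturb the data so that the residuals remain statistically consistent with the noise model while still changing the identified dynamics.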