Adversarial attacks have received considerable attention in recent years, particularly in the context of deep neural networks. Here, we argue that such attacks are more general in nature and can easily affect a larger class of models, e.g., any differentiable perturbed optimizer. We further show that these attacks can be determined by the hidden confounders in a domain, thereby drawing a novel connection between adversarial attacks and causality. From this causal perspective, the data-generating process of the structural causal model influences the subsequent optimization, exposing intriguing parameters of the former. We demonstrate the existence of such parameters for three combinatorial optimization problems, namely linear assignment, shortest path, and a real-world problem from energy systems. Our empirical examination further reveals worrisome consequences of these attacks on differentiable perturbed optimizers, highlighting the criticality of our findings.