Recent years have been marked by extensive research on adversarial attacks, especially on deep neural networks. With this work we pose and investigate the question of whether the phenomenon might be more general in nature, that is, whether adversarial-style attacks exist outside classification. Specifically, we investigate optimization problems, starting with Linear Programs (LPs). We begin by demonstrating the shortcomings of a naive mapping between the formalism of adversarial examples and LPs, and then reveal how the missing piece can be provided -- intriguingly, through the Pearlian notion of causality. Specifically, we show the direct influence of a Structural Causal Model (SCM) on the subsequent LP optimization, which ultimately exposes a notion of confounding in LPs (inherited from said SCM) that allows for adversarial-style attacks. We provide a formal general proof alongside existence proofs of such intriguing SCM-based LP parameterizations for three combinatorial problems, namely Linear Assignment, Shortest Path, and a real-world energy-systems problem.
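As a minimal sketch of the kind of sensitivity adversarial-style attacks exploit (a toy illustration, not the paper's construction), consider a 2x2 Linear Assignment problem: a tiny perturbation of the cost matrix flips the optimal assignment. The cost values below are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 2x2 assignment costs: the diagonal assignment is optimal
# (total cost 2.00 vs. 2.10 for the anti-diagonal).
cost = np.array([[1.00, 1.05],
                 [1.05, 1.00]])
_, col = linear_sum_assignment(cost)
print(col)  # [0 1] -- worker 0 -> task 0, worker 1 -> task 1

# Adversarial-style perturbation: nudge the off-diagonal costs down by 0.1.
# The anti-diagonal now costs 1.90 < 2.00, so the optimum flips entirely.
perturbed = cost + np.array([[0.0, -0.1],
                             [-0.1, 0.0]])
_, col_p = linear_sum_assignment(perturbed)
print(col_p)  # [1 0] -- worker 0 -> task 1, worker 1 -> task 0
```

The discrete optimum changes discontinuously under a small parameter perturbation, which is the LP-analogue of a small input change flipping a classifier's prediction.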