We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network. The layers of a network are first expressed as factorized tensor layers. Tensor dropout is then applied in the latent subspace, resulting in dense reconstructed weights, without the sparsity or perturbations typically induced by the randomization. Our approach can be readily integrated with arbitrary neural architectures and combined with techniques such as adversarial training. We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks. We validate the versatility of our approach across domains and low-precision architectures by considering an audio classification task and binary networks. In all cases, we demonstrate improved performance compared to prior works.
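To make the mechanism concrete, the following is a minimal sketch (not the authors' released code) of the idea: a layer's weight is stored in a factorized latent form, stochastic dropout is applied over the latent rank components during training, and a dense weight is then reconstructed, so the randomization does not show up as sparsity in the final weights. The sketch assumes a simple CP (rank-1 sum) factorization of a linear layer for illustration; the class name `CPFactorizedLinear` and the hyper-parameters are hypothetical, and the paper's actual method applies tensor dropout to factorized (e.g., convolutional) tensor layers more generally.

```python
import torch
import torch.nn as nn


class CPFactorizedLinear(nn.Module):
    """Linear layer whose weight W (out x in) is parameterized as a sum of
    rank-1 terms, W = sum_r a_r b_r^T, with dropout applied over the rank index."""

    def __init__(self, in_features, out_features, rank, p_drop=0.2):
        super().__init__()
        self.A = nn.Parameter(torch.randn(out_features, rank) * 0.02)  # latent factor
        self.B = nn.Parameter(torch.randn(in_features, rank) * 0.02)   # latent factor
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.rank = rank
        self.p_drop = p_drop

    def forward(self, x):
        if self.training and self.p_drop > 0:
            # Randomly drop whole rank-1 components in the latent subspace
            # (Bernoulli mask over the rank dimension), then rescale.
            keep = (torch.rand(self.rank, device=x.device) > self.p_drop).float()
            scale = keep / (1.0 - self.p_drop)
        else:
            scale = torch.ones(self.rank, device=x.device)
        # Reconstruct a dense weight from the (masked) latent factors:
        # the result has no zeroed entries, unlike weight-level dropout.
        weight = (self.A * scale) @ self.B.t()  # (out_features, in_features)
        return nn.functional.linear(x, weight, self.bias)


# Usage: a drop-in replacement for nn.Linear inside any architecture,
# and compatible with standard training schemes such as adversarial training.
layer = CPFactorizedLinear(in_features=128, out_features=64, rank=16, p_drop=0.2)
out = layer(torch.randn(8, 128))
```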