Empirical evaluation of deep learning models against adversarial attacks entails solving nontrivial constrained optimization problems. Popular algorithms for solving these constrained problems rely on projected gradient descent (PGD) and require careful tuning of multiple hyperparameters. Moreover, PGD can only handle $\ell_1$, $\ell_2$, and $\ell_\infty$ attack models due to its reliance on analytical projections. In this paper, we introduce a novel algorithmic framework, PyGRANSO With Constraint-Folding (PWCF), which augments the general-purpose constrained-optimization solver PyGRANSO with constraint folding to add reliability and generality to robustness evaluation. PWCF 1) finds good-quality solutions without the need for delicate hyperparameter tuning, and 2) can handle general attack models, e.g., general $\ell_p$ ($p \geq 0$) and perceptual attacks, which are inaccessible to PGD-based algorithms.
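For concreteness, a common instance of the constrained problems referred to here is the maximum-loss attack formulation sketched below; the notation (classifier $f_\theta$, loss $\ell$, clean input-label pair $(x, y)$, attack metric $d$, perturbation budget $\varepsilon$) is a standard convention assumed for illustration rather than fixed by this abstract:
\begin{equation*}
\max_{x'} \ \ell\big(f_\theta(x'), y\big)
\quad \text{s.t.} \quad d(x', x) \le \varepsilon, \quad x' \in [0, 1]^n .
\end{equation*}
PGD-based solvers handle the metric constraint by repeatedly projecting iterates onto the feasible set $\{x' : d(x', x) \le \varepsilon\}$, and such projections admit closed forms only for a few metrics such as $\ell_1$, $\ell_2$, and $\ell_\infty$, which is the limitation motivating a general-purpose solver.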