We study a model for adversarial classification based on distributionally robust chance constraints. We show that under Wasserstein ambiguity, the model aims to minimize the conditional value-at-risk of the distance to misclassification, and we explore links to adversarial classification models proposed earlier and to maximum-margin classifiers. We also provide a reformulation of the distributionally robust model for linear classification, and show it is equivalent to minimizing a regularized ramp loss objective. Numerical experiments show that, despite the nonconvexity of this formulation, standard descent methods appear to converge to the global minimizer for this problem. Inspired by this observation, we show that, for a certain class of distributions, the only stationary point of the regularized ramp loss minimization problem is the global minimizer.
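The regularized ramp loss objective mentioned above can be sketched for a linear classifier as follows. This is a minimal illustrative implementation using plain subgradient descent, not the authors' method: the penalty weight `lam`, the step size `lr`, and the toy data are all assumptions introduced for the example.

```python
import numpy as np

def ramp_loss(z):
    """Ramp loss min(1, max(0, 1 - z)) applied to margins z = y * <w, x>."""
    return np.clip(1.0 - z, 0.0, 1.0)

def objective(w, X, y, lam):
    """Empirical ramp loss plus a norm penalty (nonconvex in w)."""
    return ramp_loss(y * (X @ w)).mean() + lam * np.linalg.norm(w)

def subgradient_step(w, X, y, lam, lr):
    margins = y * (X @ w)
    # The ramp loss has slope -1 in z only where the margin lies in (0, 1);
    # it is flat (zero subgradient) elsewhere.
    active = (margins > 0.0) & (margins < 1.0)
    g = -(y[active, None] * X[active]).sum(axis=0) / len(y)
    g += lam * w / (np.linalg.norm(w) + 1e-12)  # subgradient of the norm term
    return w - lr * g

# Toy linearly separable data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2.0, 1.0, (50, 2)),
               rng.normal(-2.0, 1.0, (50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])

w = np.array([0.1, 0.1])
obj_start = objective(w, X, y, lam=0.01)
for _ in range(200):
    w = subgradient_step(w, X, y, lam=0.01, lr=0.05)
obj_end = objective(w, X, y, lam=0.01)
```

Because the ramp loss is bounded and flat outside the margin band, the objective is nonconvex, which is why the abstract's observation that standard descent methods nonetheless reach the global minimizer on certain distributions is notable.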