The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier. In each step, the algorithm observes the sample's position and label and updates the current predictor accordingly if it makes a mistake. However, in the presence of strategic agents that desire to be classified as positive and that are able to modify their position by a limited amount, the learner may not observe the true position of an agent but rather the position the agent pretends to occupy. Unlike in the original setting with perfect knowledge of positions, in this situation the Perceptron algorithm fails to achieve its mistake guarantees; we illustrate examples in which the predictor oscillates between two solutions forever, making an unbounded number of mistakes even though a perfect large-margin linear classifier exists. Our main contribution is a modified Perceptron-style algorithm which makes a bounded number of mistakes in the presence of strategic agents with both $\ell_2$ and weighted $\ell_1$ manipulation costs. In our baseline model, we assume knowledge of the manipulation costs (i.e., the extent to which agents may manipulate their positions). In our most general model, we relax this assumption and provide an algorithm which learns and refines both the classifier and its cost estimates, achieving good mistake bounds even when manipulation costs are unknown.
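Since the abstract refers to both the classical update rule and the manipulation model, a minimal sketch may help fix ideas. The Python snippet below (the names `perceptron_step` and `reported_position` and the parameter `budget` are illustrative, not from the paper) shows the classical mistake-driven update together with one common way to model an $\ell_2$-cost best response: an agent pays its $\ell_2$ movement distance and moves along the direction of $w$ just far enough to be classified positive, whenever that costs at most its budget.

```python
import numpy as np

def perceptron_step(w, x, y):
    """Classical Perceptron update: on a mistake for example (x, y),
    with label y in {-1, +1}, move the predictor toward the correct side."""
    if y * np.dot(w, x) <= 0:           # current predictor errs on (x, y)
        w = w + y * x
    return w

def reported_position(w, x, budget, eps=1e-9):
    """Illustrative l2-manipulation model (an assumption of this sketch):
    an agent at true position x that would be classified negative moves
    along w just past the boundary if the required l2 distance is at most
    `budget`; otherwise it reports its true position."""
    norm = np.linalg.norm(w)
    if norm == 0 or np.dot(w, x) > 0:   # already positive (or trivial w)
        return x
    needed = -np.dot(w, x) / norm       # l2 distance to the hyperplane w.x = 0
    if needed <= budget:
        return x + (needed + eps) * (w / norm)  # cross just past the boundary
    return x                            # manipulation too costly; stay put
```

Note that in the strategic setting the learner only ever observes `reported_position(w, x, budget)` rather than `x` itself; because the reported point depends on the current $w$, naive Perceptron updates can chase a moving target, which is the mechanism behind the oscillating examples described above.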