Strategic classification studies learning in settings where users can modify their features to obtain favorable predictions. Most current work focuses on simple classifiers that trigger independent user responses. Here we examine the implications of learning with more elaborate models that break this independence assumption. Motivated by the idea that applications of strategic classification are often social in nature, we focus on \emph{graph neural networks}, which make use of social relations between users to improve predictions. Using a graph for learning introduces inter-user dependencies in prediction; our key point is that strategic users can exploit these dependencies to promote their own goals. As we show through analysis and simulation, this can work either against the system -- or for it. Based on this, we propose a differentiable framework for strategically robust learning of graph-based classifiers. Experiments on several real networked datasets demonstrate the utility of our approach.