Strategic classification studies the interaction between a classification rule and the strategic agents it governs. Under the assumption that the classifier is known, rational agents respond to it by manipulating their features. However, in many real-life scenarios of high-stakes classification (e.g., credit scoring), the classifier is not revealed to the agents, which leads agents to attempt to learn the classifier and game it as well. In this paper we generalize the strategic classification model to such scenarios. We define the price of opacity as the difference in prediction error between opaque and transparent strategy-robust classifiers, characterize it, and give a sufficient condition for this price to be strictly positive, in which case transparency is the recommended policy. Our experiments show how the robust classifier of Hardt et al. is affected by keeping agents in the dark.