In many predictive decision-making scenarios, such as credit scoring and academic testing, a decision-maker must construct a model that accounts for agents' propensity to "game" the decision rule by changing their features so as to receive better decisions. Whereas the strategic classification literature has previously assumed that agents' outcomes are not causally affected by their features (and thus that strategic agents' goal is deceiving the decision-maker), we join concurrent work in modeling agents' outcomes as a function of their changeable attributes. As our main contribution, we provide efficient algorithms for learning decision rules that optimize three distinct decision-maker objectives in a realizable linear setting: accurately predicting agents' post-gaming outcomes (prediction risk minimization), incentivizing agents to improve these outcomes (agent outcome maximization), and estimating the coefficients of the true underlying model (parameter estimation). Our algorithms circumvent a hardness result of Miller et al. (2020) by allowing the decision-maker to test a sequence of decision rules and observe agents' responses, in effect performing causal interventions through the decision rules.
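The setting described above can be illustrated with a minimal simulation. The sketch below is our own illustrative construction, not the paper's algorithm: outcomes follow a hypothetical true linear model `w_star`, agents best-respond to a deployed linear rule `beta` under an assumed quadratic effort cost (shifting features by `beta / (2 * cost)`), and the decision-maker deploys a sequence of rules and regresses observed outcomes on post-gaming features. Because each deployed rule shifts the feature distribution, the sequence of deployments acts like a set of interventions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 5000
w_star = np.array([1.0, 0.5, 0.0])  # hypothetical true causal coefficients
cost = 2.0                          # assumed quadratic effort cost coefficient

def agents_respond(X, beta):
    # Best response to linear rule beta under cost c * ||dx||^2:
    # each agent shifts its features by beta / (2 * cost).
    return X + beta / (2 * cost)

def observe_outcomes(X_gamed):
    # Outcomes are causally determined by post-gaming features.
    return X_gamed @ w_star + 0.1 * rng.standard_normal(len(X_gamed))

# Deploying a sequence of decision rules and observing responses.
rows, ys = [], []
for _ in range(10):
    beta = rng.standard_normal(d)
    X = rng.standard_normal((n, d))
    X_gamed = agents_respond(X, beta)
    rows.append(X_gamed)
    ys.append(observe_outcomes(X_gamed))

X_all, y_all = np.vstack(rows), np.concatenate(ys)
w_hat, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)
print(np.round(w_hat, 2))  # close to w_star
```

In this toy setup ordinary least squares on the pooled post-gaming data recovers `w_star`; the paper's contribution concerns settings where identification is not this easy and the choice of which rules to deploy matters.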