Performative predictions are forecasts that influence the outcomes they aim to predict, undermining the existence of correct forecasts and standard methods of elicitation and estimation. We show that conditioning forecasts on covariates that separate them from the outcome renders the target distribution forecast-invariant, guaranteeing well-posedness of the forecasting problem. However, even under this condition, classical proper scoring rules fail to elicit correct forecasts. We prove a general impossibility result and identify two solutions: (i) in decision-theoretic settings, elicitation of correct and incentive-compatible forecasts is possible if forecasts are separating; (ii) scoring with unbiased estimates of the divergence between the forecast and the induced distribution of the target variable yields correct forecasts. Applied to parameter estimation, these insights show that conditional forecasts and proper scoring rules enable performatively stable estimation of performatively correct parameters, resolving the issues raised by Perdomo et al. (2020). Our results expose fundamental limits of classical forecast evaluation and offer new tools for reliable and accurate forecasting in performative settings.
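To make the performative feedback loop concrete, the following is a minimal toy sketch (not a model from this work): a one-dimensional location model in which the published forecast theta shifts the outcome distribution, in the spirit of the repeated risk minimization dynamics studied by Perdomo et al. (2020). The constants `a` and `b` and the Gaussian response are illustrative assumptions; with |b| < 1 the best-response iteration contracts to a performatively stable forecast.

```python
# Toy performative-prediction loop (illustrative sketch, not the paper's model).
# Publishing forecast theta shifts the outcome distribution:
#   y ~ N(a + b * theta, 1), with |b| < 1 so repeated retraining contracts.
a, b = 2.0, 0.5  # hypothetical constants for illustration

def best_response(theta):
    # Squared-loss-optimal forecast against the distribution induced by theta,
    # i.e. the mean of the shifted outcome distribution.
    return a + b * theta

theta = 0.0
for _ in range(50):
    theta = best_response(theta)  # repeated risk minimization

# Performatively stable point solves theta = a + b * theta, i.e. theta = a / (1 - b).
stable = a / (1 - b)
print(theta, stable)
```

Here the iteration converges to the stable point a / (1 - b) = 4.0: the forecast is a fixed point of the feedback loop, which is exactly the kind of self-consistency that naive (unconditional) forecasting cannot guarantee in general.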