The problem of aggregating expert forecasts is ubiquitous in fields as wide-ranging as machine learning, economics, climate science, and national security. Despite this, our theoretical understanding of this question is fairly shallow. This paper initiates the study of forecast aggregation in a context where experts' knowledge is chosen adversarially from a broad class of information structures. While in full generality it is impossible to achieve a nontrivial performance guarantee, we show that doing so is possible under a condition on the experts' information structure that we call \emph{projective substitutes}. The projective substitutes condition is a notion of informational substitutes: that there are diminishing marginal returns to learning the experts' signals. We show that under the projective substitutes condition, taking the average of the experts' forecasts improves substantially upon the strategy of trusting a random expert. We then consider a more permissive setting, in which the aggregator has access to the prior. We show that by averaging the experts' forecasts and then \emph{extremizing} the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior. Our results give a theoretical grounding to past empirical research on extremization and help give guidance on the appropriate amount to extremize.
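To make the aggregation rule described above concrete, here is a minimal Python sketch of averaging the experts' forecasts and then extremizing the average away from the prior by a constant factor. The function name, the placeholder value of the extremization factor `c`, and the clipping to [0, 1] are illustrative assumptions, not the paper's prescribed constants; the paper's results concern how much extremization is warranted under the projective substitutes condition.

```python
import numpy as np

def aggregate_and_extremize(forecasts, prior, c=2.0):
    """Illustrative sketch: average the experts' probability forecasts,
    then extremize the average by moving it away from the prior by a
    constant factor c (c = 1 means no extremization).

    The default c = 2.0 is an arbitrary placeholder, not a recommended
    value; clipping to [0, 1] is an implementation choice for
    probability forecasts.
    """
    avg = np.mean(forecasts)                # simple average of the forecasts
    extremized = prior + c * (avg - prior)  # push the average away from the prior
    return float(np.clip(extremized, 0.0, 1.0))

# Example: three experts forecast the probability of an event; the prior is 0.5.
print(aggregate_and_extremize([0.7, 0.6, 0.8], prior=0.5))  # 0.9 with c = 2
```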