Reinforcement Learning with Verifiable Rewards (RLVR) has been an effective approach for improving Large Language Models' reasoning in domains such as coding and mathematics. Here, we apply RLVR methods to forecasting future real-world events, a challenging task for RL due to the very noisy (and delayed) outcomes involved. Using a novel dataset of recent questions from a prediction market, along with accompanying relevant news headlines, we show that a compact (14B) reasoning model can be trained to match or surpass the predictive accuracy of frontier models like o1, while greatly improving probabilistic calibration. The model's performance is also practically meaningful: in a Polymarket trading simulation, we estimate that its bets would have yielded a return on investment of over 10% across all questions in the test set. We detail and compare the approaches used in training our model, including augmenting our training data with synthetic prediction questions, guardrails for learning stability, and median prediction sampling at inference time.
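As a minimal sketch of the median prediction sampling mentioned above: several completions are sampled for the same question, each parsed into a final probability, and the median of those probabilities is used as the model's forecast. The output format (a "Final probability: 0.xx" line) and the helper name are illustrative assumptions, not the paper's exact implementation.

```python
import re
import statistics


def aggregate_forecasts(completions: list[str]) -> float | None:
    """Median prediction sampling over multiple sampled completions.

    Assumes (hypothetically) that each completion ends with a line like
    'Final probability: 0.37'; returns the median parsed probability,
    or None if no completion contained a parsable forecast.
    """
    probs = []
    for text in completions:
        match = re.search(r"Final probability:\s*([01](?:\.\d+)?)", text)
        if match:
            probs.append(float(match.group(1)))
    return statistics.median(probs) if probs else None


# Example: five sampled completions for a single market question.
samples = [
    "...reasoning... Final probability: 0.42",
    "...reasoning... Final probability: 0.38",
    "...reasoning... Final probability: 0.55",
    "...reasoning... Final probability: 0.40",
    "...reasoning... Final probability: 0.44",
]
print(aggregate_forecasts(samples))  # -> 0.42
```

Taking the median rather than the mean makes the aggregated forecast robust to a single outlier completion, which is one plausible reason such sampling can improve calibration.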