Optimizing economic and public policy is critical to address socioeconomic issues and trade-offs, e.g., improving equality, productivity, or wellness, and poses a complex mechanism design problem. A policy designer needs to consider multiple objectives, policy levers, and behavioral responses from strategic actors who optimize for their individual objectives. Moreover, real-world policies should be explainable and robust to simulation-to-reality gaps, e.g., due to calibration issues. Existing approaches are often limited, for example, to a narrow set of policy levers or to objectives that are hard to measure, do not yield explicit optimal policies, or do not consider strategic behavior. Hence, it remains challenging to optimize policy in real-world scenarios. Here we show that the AI Economist framework enables effective, flexible, and interpretable policy design using two-level reinforcement learning (RL) and data-driven simulations. We validate our framework by optimizing the stringency of US state policies and Federal subsidies during a pandemic, e.g., COVID-19, using a simulation fitted to real data. We find that log-linear policies trained using RL significantly improve social welfare, based on both public health and economic outcomes, compared to past outcomes. Their behavior can be explained, e.g., well-performing policies respond strongly to changes in recovery and vaccination rates. They are also robust to calibration errors, e.g., infection rates that are over- or underestimated. To date, real-world policymaking has not seen large-scale adoption of machine learning methods, including RL and AI-driven simulations. Our results show the potential of AI to guide policy design and improve social welfare amidst the complexity of the real world.
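For readers unfamiliar with the term, the sketch below illustrates what a log-linear policy of this kind could look like: a discrete stringency level computed from a linear score over log-transformed simulator observations. The feature names, weights, and discretization are illustrative assumptions for exposition only, not the parameterization learned in this work.

```python
import numpy as np

def log_linear_stringency(features, weights, bias, num_levels=5):
    """Map simulator observations to a discrete stringency level.

    features: positive observed quantities, e.g. infection and vaccination rates
              (hypothetical names, chosen here for illustration).
    weights:  coefficients with the same keys, which would be learned, e.g. by RL.
    The policy is log-linear: a score linear in log-features, squashed to [0, 1]
    and discretized into `num_levels` stringency levels.
    """
    score = bias + sum(w * np.log(features[k] + 1e-8) for k, w in weights.items())
    frac = 1.0 / (1.0 + np.exp(-score))         # sigmoid -> [0, 1]
    return int(round(frac * (num_levels - 1)))  # discrete stringency level

# Usage example with hand-picked (not learned) weights:
weights = {"infection_rate": 1.5, "vaccination_rate": -2.0}
obs = {"infection_rate": 0.04, "vaccination_rate": 0.30}
print(log_linear_stringency(obs, weights, bias=2.0))  # e.g. level 2 of 0..4
```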