As a transformative general-purpose technology, AI has empowered various industries and will continue to shape our lives through ubiquitous applications. Despite the enormous benefits of widespread AI deployment, it is crucial to address the associated downside risks and thereby ensure that AI advances are safe, fair, responsible, and aligned with human values. To do so, we need to establish effective AI governance. In this work, we show that the strategic interaction between regulatory agencies and AI firms has an intrinsic structure reminiscent of a Stackelberg game, which motivates us to propose a game-theoretic modeling framework for AI governance. In particular, we formulate this interaction as a Stackelberg game composed of a leader and a follower, which captures the underlying sequential structure that simultaneous-play counterparts miss. Furthermore, the choice of the leader naturally gives rise to two settings, and we demonstrate that our proposed model serves as a unified AI governance framework in two respects: first, one setting maps to AI governance in civil domains and the other to safety-critical and military domains; second, the choice between the two governance settings can be made contingent on the capability of the intelligent systems. To the best of our knowledge, this work is the first to use game theory for analyzing and structuring AI governance. We also discuss promising directions and hope this can help stimulate research interest in this interdisciplinary area. At a high level, we hope this work contributes to the development of a new paradigm for technology policy: quantitative and AI-driven methods for the technology policy field, which hold significant promise for overcoming many shortcomings of existing qualitative approaches.
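For concreteness, a minimal sketch of the Stackelberg (bilevel) structure alluded to above, where the notation $x$, $y$, $u_L$, and $u_F$ is illustrative rather than taken from the paper: the leader commits to an action $x$ first, and the follower observes $x$ and best-responds, yielding

\[
\max_{x \in X} \; u_L\bigl(x, y^{*}(x)\bigr)
\qquad \text{s.t.} \qquad
y^{*}(x) \in \arg\max_{y \in Y} \; u_F(x, y),
\]

where $u_L$ and $u_F$ denote the leader's and follower's payoff functions over action sets $X$ and $Y$. Swapping which party (the regulator or the AI firm) takes the leader role yields the two governance settings described above.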