The growth and permeation of artificial intelligence (AI) technologies across society have drawn focus to the ways in which the responsible use of these technologies can be facilitated through AI governance. Increasingly, large companies and governments alike have begun to articulate and, in some cases, enforce governance preferences through AI policy. Yet existing literature documents an unwieldy heterogeneity in ethical principles for AI governance, while our own prior research finds that discussions of the implications of AI policy are not yet present in the computer science (CS) curriculum. In this context, overlapping jurisdictions and even contradictory policy preferences across private companies and local, national, and multinational governments create a complex landscape for AI policy which, we argue, will require AI developers who are able to adapt to an evolving regulatory environment. Preparing computing students for the new challenges of an AI-dominated technology industry is therefore a key priority for the CS curriculum. In this discussion paper, we articulate a framework for integrating discussions of the nascent AI policy landscape into computer science courses. We begin by summarizing recent AI policy efforts in the United States and the European Union. We then propose guiding questions to frame class discussions around AI policy in technical and non-technical (e.g., ethics) CS courses. Throughout, we emphasize the connection between normative policy demands and the still-open technical challenges of implementing and enforcing them through code and governance structures. This paper thus contributes to bridging research and discussion across AI policy and CS education, underlining the need to prepare AI engineers to interact with and adapt to societal policy preferences.