This paper examines the current landscape of AI regulations across various jurisdictions, highlights the divergent approaches being taken, and proposes an alternative contextual, coherent, and commensurable (3C) framework to bridge the global divide. While the U.N. is developing an international AI governance framework and the G7 has endorsed a risk-based approach, there is no consensus on the details of either. The EU, Canada, and Brazil (and potentially South Korea) follow a horizontal or lateral approach that postulates the homogeneity of AI, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the U.S., the U.K., Israel, and Switzerland (and potentially China) have pursued a context-specific or modular approach, tailoring regulations to the specific use cases of AI systems. Horizontal approaches like the EU AI Act do not guarantee sufficient levels of proportionality and foreseeability; rather, they impose a one-size-fits-all bundle of regulations on any high-risk AI, even where it would be feasible to differentiate between various AI models and legislate them individually. The context-specific approach holds greater promise but requires further development regarding its details, coherent regulatory objectives, and commensurable standards. To strike a balance, this paper proposes a hybrid 3C framework. To ensure contextuality, the framework bifurcates the AI life cycle into two phases, learning and utilization for specific tasks, and categorizes these tasks based on their application and interaction with humans as follows: autonomous, discriminative (allocative, punitive, and cognitive), and generative AI. To ensure coherency, each category is assigned regulatory objectives. To ensure commensurability, the framework promotes the adoption of international industry standards that convert principles into quantifiable metrics that can be readily integrated into AI systems.