Artificial Intelligence (AI) is rapidly becoming a foundational layer of social, economic, and cognitive infrastructure. At the same time, the training and large-scale deployment of AI systems rely on finite and unevenly distributed energy, networking, and computational resources. This tension exposes a largely unexamined problem in current AI governance: expanding access to AI is essential for social inclusion and equal opportunity, yet unconstrained growth in AI use risks unsustainable resource consumption, while restricting access threatens to entrench inequality and undermine basic rights. This paper argues that access to AI outputs, which are in large part derived from publicly produced knowledge, should not be treated solely as a commercial service, but as a fundamental civil interest requiring explicit protection. We show that existing regulatory frameworks largely ignore the coupling between equitable access and resource constraints, leaving critical questions of fairness, sustainability, and long-term societal impact unresolved. To address this gap, we propose recognizing access to AI as an \emph{Intergenerational Civil Right}, establishing a legal and ethical framework that simultaneously safeguards present-day inclusion and the rights of future generations. Beyond normative analysis, we explore how this principle can be technically realized. Drawing on emerging paradigms in IoT--Edge--Cloud computing, decentralized inference, and energy-aware networking, we outline technological trajectories and a strawman architecture for AI Delivery Networks that support equitable access under strict resource constraints. By framing AI as a shared social infrastructure rather than a discretionary market commodity, this work connects governance principles with concrete system design choices, offering a pathway toward AI deployment that is both socially just and environmentally sustainable.