Large language models (LLMs) are widely described as artificial intelligence, yet their epistemic profile diverges sharply from human cognition. Here we show that the apparent alignment between human and machine outputs conceals a deeper structural mismatch in how judgments are produced. Tracing the historical shift from symbolic AI and information filtering systems to large-scale generative transformers, we argue that LLMs are not epistemic agents but stochastic pattern-completion systems, formally describable as walks on high-dimensional graphs of linguistic transitions rather than as systems that form beliefs or models of the world. By systematically mapping human and artificial epistemic pipelines, we identify seven epistemic fault lines: divergences in grounding, parsing, experience, motivation, causal reasoning, metacognition, and value. We call the resulting condition Epistemia: a structural situation in which linguistic plausibility substitutes for epistemic evaluation, producing the feeling of knowing without the labor of judgment. We conclude by outlining consequences for evaluation, governance, and epistemic literacy in societies increasingly organized around generative AI.
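To make the "walk on a graph of linguistic transitions" framing concrete, the following is a deliberately toy sketch, not the paper's formalism and not how a transformer is implemented: a first-order transition graph built from a tiny corpus, with continuations sampled purely from observed co-occurrence frequencies. The corpus, function names, and parameters are illustrative assumptions; the point is only that the sampling process tracks the plausibility of the next transition, with nothing in it corresponding to beliefs or a world model.

```python
# Toy illustration (not the paper's formalism): a stochastic walk on a graph of
# linguistic transitions. Next tokens are sampled from observed frequencies;
# the process encodes plausibility of continuation, not belief or judgment.
import random
from collections import defaultdict, Counter

# Hypothetical miniature corpus, purely for illustration.
corpus = "the model predicts the next token the model samples the next word".split()

# Build the transition graph: node -> frequency-weighted outgoing edges.
graph = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    graph[prev][nxt] += 1

def walk(start: str, steps: int, seed: int = 0) -> list[str]:
    """Sample a walk by following transition frequencies from each node."""
    rng = random.Random(seed)
    path, node = [start], start
    for _ in range(steps):
        edges = graph.get(node)
        if not edges:  # dead end: no observed continuation
            break
        tokens, weights = zip(*edges.items())
        node = rng.choices(tokens, weights=weights, k=1)[0]
        path.append(node)
    return path

print(" ".join(walk("the", 8)))
```

A real LLM replaces the frequency table with a learned conditional distribution over a vast context space, but the epistemic structure of the sketch carries over: each step selects a linguistically plausible continuation rather than evaluating a claim about the world.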