The launch of Grokipedia, an AI-generated encyclopedia developed by Elon Musk's xAI, was presented as a response to perceived ideological and structural biases in Wikipedia, aiming to produce "truthful" entries via the large language model Grok. Yet whether an AI-driven alternative can escape the biases and limitations of human-edited platforms remains unclear. This study undertakes a large-scale computational comparison of 382 matched article pairs between Grokipedia and Wikipedia. Using metrics across lexical richness, readability, structural organization, reference density, and semantic similarity, we assess how closely the two platforms align in form and substance. The results show that while Grokipedia exhibits strong semantic and stylistic alignment with Wikipedia, it typically produces longer but less lexically diverse articles, with fewer references per word and more variable structural depth. These findings suggest that AI-generated encyclopedic content currently mirrors Wikipedia's informational scope but diverges in editorial norms, favoring narrative expansion over citation-based verification. The implications highlight new tensions around transparency, provenance, and the governance of knowledge in an era of automated text generation.
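The abstract's surface-level metrics can be illustrated with simple proxies. The sketch below is not the study's actual pipeline; it assumes type-token ratio as a stand-in for lexical richness and a citations-per-1000-words figure for reference density, both computed with a naive word tokenizer.

```python
import re

def lexical_diversity(text: str) -> float:
    # Type-token ratio: unique words / total words. A common but
    # length-sensitive proxy for lexical richness (illustrative only;
    # the study's exact metric may differ).
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def refs_per_1000_words(text: str, n_refs: int) -> float:
    # Reference density: citation count normalized by article length,
    # so longer articles with the same number of references score lower.
    n_words = len(re.findall(r"[a-z']+", text.lower()))
    return 1000 * n_refs / n_words if n_words else 0.0
```

Normalizing references by word count is what allows a finding like "longer articles with fewer references per word": raw citation counts could rise even as density falls.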