Large Language Models significantly influence social interactions, decision-making, and information dissemination, underscoring the need to understand the implicit socio-cognitive attitudes, referred to as "worldviews", encoded within these systems. Unlike previous studies that predominantly address demographic and ethical biases as fixed attributes, our study explores deeper cognitive orientations toward authority, equality, autonomy, and fate, emphasizing their adaptability in dynamic social contexts. We introduce the Social Worldview Taxonomy (SWT), an evaluation framework grounded in Cultural Theory that operationalizes four canonical worldviews, namely Hierarchy, Egalitarianism, Individualism, and Fatalism, into quantifiable sub-dimensions. Through extensive analysis of 28 diverse LLMs, we identify distinct cognitive profiles reflecting intrinsic model-specific socio-cognitive structures. Leveraging principles from Social Referencing Theory, our experiments demonstrate that explicit social cues systematically modulate these profiles, revealing robust patterns of cognitive adaptability. Our findings provide insights into the latent cognitive flexibility of LLMs and offer computational scientists practical pathways toward developing more transparent, interpretable, and socially responsible AI systems.