The integration of artificial intelligence into everyday decision-making has reshaped patterns of selective trust, yet the cognitive mechanisms behind context-dependent preferences for AI versus human informants remain unclear. We applied a Bayesian Hierarchical Sequential Sampling Model (HSSM) to analyze how 102 Colombian university students made trust decisions across 30 epistemic (factual) and social (interpersonal) scenarios. Results show that context-dependent trust is primarily driven by differences in drift rate (v), the rate of evidence accumulation, rather than initial bias (z) or response caution (a). Epistemic scenarios produced strong negative drift rates (mean v = -1.26), indicating rapid evidence accumulation favoring AI, whereas social scenarios yielded positive drift rates (mean v = 0.70) favoring humans. Starting points were near neutral (z = 0.52), indicating minimal prior bias. Drift rate showed a strong within-subject association with signed confidence (Fisher-z-averaged r = 0.736; 95 percent bootstrap CI 0.699 to 0.766; 97.8 percent of individual correlations positive, N = 93), suggesting that model-derived evidence accumulation closely mirrors participants' moment-to-moment confidence. These dynamics may help explain the fragility of AI trust: in epistemic domains, rapid but low-vigilance evidence processing may promote uncalibrated reliance on AI that collapses quickly after errors. Interpreted through epistemic vigilance theory, the results indicate that domain-specific vigilance mechanisms modulate evidence accumulation. The findings inform AI governance by highlighting the need for transparency features that sustain vigilance without sacrificing efficiency, offering a mechanistic account of selective trust in human-AI collaboration.
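The reported parameter pattern can be illustrated with a minimal drift-diffusion simulation. This is a sketch, not the authors' Bayesian hierarchical fitting pipeline: only the mean drift rates (v = -1.26 epistemic, v = 0.70 social) and relative starting point (z = 0.52) come from the abstract; the boundary separation `a = 2.0`, noise scale `sigma`, step size `dt`, and trial count are illustrative assumptions. The lower boundary is arbitrarily mapped to "trust AI" and the upper to "trust human."

```python
import numpy as np

def simulate_ddm(v, a=2.0, z=0.52, dt=0.005, sigma=1.0, n_trials=2000, seed=0):
    """Euler-Maruyama simulation of a drift-diffusion process.

    v     : drift rate (from the abstract: -1.26 epistemic, +0.70 social)
    a     : boundary separation (assumed value; not reported in the abstract)
    z     : relative starting point in [0, 1] (abstract reports z = 0.52)
    Returns the fraction of trials absorbed at the upper ("trust human") boundary.
    """
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, z * a)          # start near the midpoint (z ~= 0.5)
    active = np.ones(n_trials, dtype=bool)
    upper = np.zeros(n_trials, dtype=bool)
    while active.any():
        n = int(active.sum())
        # accumulate evidence: deterministic drift plus Gaussian noise
        x[active] += v * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        hit_up = active & (x >= a)        # absorbed at "trust human" boundary
        hit_lo = active & (x <= 0.0)      # absorbed at "trust AI" boundary
        upper |= hit_up
        active &= ~(hit_up | hit_lo)
    return upper.mean()

# Negative drift (epistemic scenarios) drives most trials to the AI boundary;
# positive drift (social scenarios) drives most trials to the human boundary.
p_human_epistemic = simulate_ddm(v=-1.26)
p_human_social = simulate_ddm(v=0.70)
```

Because the starting point is near the midpoint, the choice asymmetry in this sketch is produced almost entirely by the sign and magnitude of the drift rate, mirroring the abstract's claim that evidence accumulation, not prior bias, drives context-dependent trust.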