As artificial agents increasingly integrate into professional environments, fundamental questions have emerged about how societal biases influence human-robot selection decisions. We conducted two experiments (N = 1,038) examining how occupational context and stereotype activation shape choices of robotic agents across construction, healthcare, educational, and athletic domains. Participants selected among artificial agents that varied systematically in skin tone and anthropomorphic characteristics. The study revealed distinct context-dependent patterns: healthcare and educational scenarios showed strong favoritism toward lighter-skinned agents, whereas construction and athletic contexts showed greater acceptance of darker-toned alternatives. Participant race was associated with systematic differences in selection patterns across professional domains. The second experiment demonstrated that prior exposure to human professionals from specific racial backgrounds shifted subsequent robotic agent preferences in stereotype-consistent directions. These findings indicate that occupational biases and color-based discrimination transfer directly from human-human to human-robot evaluation contexts, highlighting mechanisms through which robotic deployment may unintentionally perpetuate existing social inequalities.