Code review is a socio-technical practice, yet how software engineers engage with Large Language Model (LLM)-assisted code reviews, compared with human peer-led reviews, remains poorly understood. We report a two-phase qualitative study with 20 software engineers. In Phase I, participants exchanged peer reviews and were interviewed about their affective responses and engagement decisions. In Phase II, we introduced a new prompt matched to engineers' preferences and probed how its characteristics shaped their reactions. We develop an integrative account linking emotional self-regulation to behavioral engagement and resolution. We identify four self-regulation strategies that engineers use in response to negative feedback: reframing, dialogic regulation, avoidance, and defensiveness. Engagement proceeds through social calibration: engineers align their responses and behaviors with the relational climate and team norms. In peer-led review, trajectories to resolution vary by locus (solo/dyad/team) and an internal sense-making process. With LLM-assisted review, emotional costs and the need for self-regulation appear lower. When LLM feedback aligned with engineers' cognitive expectations, participants reported reduced processing effort and a potentially higher tendency to adopt it. We show that LLM-assisted review redirects engagement from emotion management to cognitive load management. We contribute an integrative model of engagement that links emotional self-regulation to behavioral engagement and resolution, showing how affective and cognitive processes influence feedback adoption in peer-led and LLM-assisted code reviews. We conclude that AI is best positioned as a supportive partner that reduces cognitive and emotional load while preserving human accountability and the social meaning of peer review and similar socio-technical activities.