The alignment of Large Language Models (LLMs) for multi-turn conversations typically relies on reward signals derived from the content of the text. This approach, however, overlooks a rich, complementary source of signal: the dynamics of the interaction itself. This paper introduces TRACE (Trajectory-based Reward for Agent Collaboration Estimation), a novel reward signal derived from the geometric properties of a dialogue's embedding trajectory, a concept we term 'conversational geometry'. Our central finding is that a reward model trained only on these structural signals achieves a pairwise accuracy (68.20%) comparable to a powerful LLM baseline that analyzes the full transcript (70.04%). Furthermore, a hybrid model combining interaction dynamics with textual analysis achieves the highest performance (80.17%), demonstrating their complementary nature. This work provides strong evidence that, in interactive settings, how an agent communicates is as powerful a predictor of success as what it says. Because the structural reward operates on embedding trajectories rather than transcript content, the resulting framework is privacy-preserving; it both aligns agents and serves as a diagnostic tool for understanding the distinct interaction patterns that drive successful collaboration.
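To make the idea of 'conversational geometry' concrete, here is a minimal sketch of how structural features could be extracted from a dialogue's embedding trajectory. The encoder choice and the specific features (step length, turning angle, straightness) are our illustrative assumptions, not the paper's published feature set or implementation.

```python
# Illustrative sketch: summarize a dialogue's embedding trajectory with
# simple geometric features of the kind a TRACE-style reward model could
# consume. Feature choices here are assumptions for exposition only.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

def trajectory_features(turns: list[str]) -> np.ndarray:
    """Embed each turn, then describe the trajectory's geometry."""
    assert len(turns) >= 3, "need at least 3 turns for turning angles"
    emb = encoder.encode(turns)               # shape: (n_turns, dim)
    steps = np.diff(emb, axis=0)              # displacement between turns
    step_len = np.linalg.norm(steps, axis=1)  # how far each turn moves

    # Turning angle: cosine between consecutive displacement vectors,
    # i.e., does the conversation keep heading in the same direction?
    cos = np.sum(steps[:-1] * steps[1:], axis=1) / (
        step_len[:-1] * step_len[1:] + 1e-8
    )

    # Net progress vs. total path length (1.0 = perfectly straight path).
    net = np.linalg.norm(emb[-1] - emb[0])
    path = step_len.sum() + 1e-8

    return np.array([
        step_len.mean(), step_len.std(),  # pace and variability of movement
        cos.mean(),                       # average directional persistence
        net / path,                       # straightness of the trajectory
    ])

# A pairwise reward model would then be trained so that
# score(trajectory_features(preferred)) > score(trajectory_features(rejected)),
# without the model ever reading the transcript text itself.
```

Note that the features above never expose the raw text, which is what makes a structural reward of this kind privacy-preserving.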