Humans exhibit adaptive, context-sensitive reactions to egocentric visual input. However, faithfully modeling such reactions from egocentric video remains challenging due to the dual requirements of strictly causal generation and precise 3D spatial alignment. To address data scarcity and misalignment, we first construct the Human Reaction Dataset (HRD), a spatially aligned egocentric video-reaction dataset; existing datasets (e.g., ViMo) suffer from significant spatial inconsistency between the egocentric video and the reaction motion, e.g., dynamically moving motions paired with fixed-camera videos. Leveraging HRD, we present EgoReAct, the first autoregressive framework that generates 3D-aligned human reaction motions from egocentric video streams in real time. We first compress the reaction motion into a compact yet expressive latent space via a Vector Quantised-Variational AutoEncoder (VQ-VAE) and then train a Generative Pre-trained Transformer (GPT) to generate reactions from the visual input. During generation, EgoReAct incorporates 3D dynamic features, i.e., metric depth and head dynamics, which effectively enhance spatial grounding. Extensive experiments demonstrate that EgoReAct achieves substantially higher realism, spatial consistency, and generation efficiency than prior methods, while maintaining strict causality during generation. We will release code, models, and data upon acceptance.
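To make the two-stage design concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a VQ-VAE that tokenizes motion into discrete codebook indices, and a causal transformer that autoregressively predicts the next motion token from past tokens plus per-frame conditioning (visual, metric-depth, and head-dynamics features). All module names, dimensions, and the fusion scheme (summing a projected condition vector into the token embedding) are illustrative assumptions, not the released EgoReAct implementation.

```python
import torch
import torch.nn as nn

class MotionVQVAE(nn.Module):
    """Stage 1 (assumed layout): compress motion frames into discrete tokens."""
    def __init__(self, motion_dim=263, latent_dim=512, codebook_size=1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(motion_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, motion_dim),
        )

    def quantize(self, motion):                            # motion: (B, T, motion_dim)
        z = self.encoder(motion)                           # (B, T, D)
        dist = torch.cdist(z, self.codebook.weight.unsqueeze(0))  # (B, T, K)
        return dist.argmin(dim=-1)                         # nearest-code ids (B, T)

    def decode(self, ids):                                 # ids: (B, T)
        return self.decoder(self.codebook(ids))            # (B, T, motion_dim)

class ReactionGPT(nn.Module):
    """Stage 2 (assumed layout): next-token prediction under a causal mask,
    conditioned on fused vision/depth/head-dynamics features per frame."""
    def __init__(self, codebook_size=1024, d_model=512, cond_dim=768):
        super().__init__()
        self.tok_emb = nn.Embedding(codebook_size + 1, d_model)  # +1 for BOS
        self.cond_proj = nn.Linear(cond_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(d_model, codebook_size)

    def forward(self, ids, cond):                          # ids: (B, T), cond: (B, T, cond_dim)
        x = self.tok_emb(ids) + self.cond_proj(cond)       # fuse condition into token stream
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        h = self.blocks(x, mask=mask)                      # strictly causal attention
        return self.head(h)                                # next-token logits (B, T, K)

@torch.no_grad()
def stream_react(vqvae, gpt, cond_stream, bos_id=1024, steps=30):
    """Streaming roll-out: one motion token per incoming frame feature,
    using only past conditions so generation stays strictly causal."""
    ids = torch.full((1, 1), bos_id, dtype=torch.long)
    for t in range(steps):
        cond = cond_stream[:, : t + 1]                     # past frames only
        logits = gpt(ids, cond)[:, -1]
        next_id = logits.argmax(dim=-1, keepdim=True)      # greedy decode for brevity
        ids = torch.cat([ids, next_id], dim=1)
    return vqvae.decode(ids[:, 1:])                        # tokens -> motion frames

vqvae, gpt = MotionVQVAE(), ReactionGPT()
cond_stream = torch.randn(1, 30, 768)  # stand-in for fused vision/depth/head features
motion = stream_react(vqvae, gpt, cond_stream)
print(motion.shape)  # torch.Size([1, 30, 263])
```

The causal mask and the per-step truncation of the condition stream are what enforce the strict causality claimed above: at frame t, the model can attend only to tokens and conditions from frames 1..t.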