We introduce a method for generating realistic pedestrian trajectories and full-body animations that can be controlled to meet user-defined goals. We draw on recent advances in guided diffusion modeling to achieve test-time controllability of trajectories, which is normally only associated with rule-based systems. Our guided diffusion model allows users to constrain trajectories through target waypoints, speed, and specified social groups while accounting for the surrounding environment context. This trajectory diffusion model is integrated with a novel physics-based humanoid controller to form a closed-loop, full-body pedestrian animation system capable of placing large crowds in a simulated environment with varying terrains. We further propose utilizing the value function learned during RL training of the animation controller to guide diffusion to produce trajectories better suited for particular scenarios such as collision avoidance and traversing uneven terrain. Video results are available on the project page at https://nv-tlabs.github.io/trace-pace.
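The value-function guidance described above can be illustrated with a minimal sketch. This is not the paper's implementation: the `denoiser` and `value_fn` below are hypothetical toy stand-ins, and the guidance rule shown (nudging the denoised prediction along the gradient of a learned value function) is one common way such guided diffusion steps are realized.

```python
import torch

# Hypothetical stand-ins for illustration only:
# a "denoiser" mapping noisy 2-D waypoints to clean ones, and a
# "value function" that scores trajectories (here: prefers the origin,
# as a toy proxy for e.g. collision avoidance).
denoiser = torch.nn.Linear(2, 2)
value_fn = lambda traj: -(traj ** 2).sum(-1)

def guided_step(x_t, guidance_scale=0.1):
    """One reverse-diffusion step with value-function guidance (sketch).

    The denoiser predicts a clean trajectory from the noisy input x_t;
    the gradient of the value function with respect to x_t then nudges
    the prediction toward higher-value (e.g. collision-free) states.
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t)                 # predicted clean waypoints
    score = value_fn(x0_pred).sum()         # scalar value of the prediction
    grad, = torch.autograd.grad(score, x_t)
    return (x0_pred + guidance_scale * grad).detach()

x = torch.randn(4, 2)                       # 4 noisy 2-D waypoints
out = guided_step(x)
```

In the full system this step would be applied at every denoising iteration, so the guidance signal accumulates over the reverse diffusion process rather than acting once.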