Animating an avatar that reflects a user's actions in the VR world enables natural interaction with the virtual environment. It has the potential to let remote users communicate and collaborate as if they were meeting in person. However, a typical VR system provides only a sparse set of at most three positional sensors: a head-mounted display (HMD) and, optionally, two hand-held controllers. Estimating the user's full-body movement from such sparse input is therefore a difficult problem. In this work, we present a data-driven, physics-based method that predicts realistic full-body movement from the transformations of these VR trackers and simulates an avatar character that mimics the user's actions in the virtual world in real time. We train our system using reinforcement learning with carefully designed pretraining processes to ensure the success of the training and the quality of the simulation. We demonstrate the effectiveness of the method with an extensive set of examples.
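For illustration only, the sketch below shows one plausible interface for such a tracker-to-avatar policy: a small network that consumes the 6-DoF transforms of the three trackers and emits per-joint actuation targets for a physics simulator. The class name, observation encoding, dimensions, and architecture are assumptions for exposition, not the paper's actual model.

```python
# Minimal sketch (hypothetical, not the paper's implementation): a policy that
# maps the 6-DoF transforms of up to three VR trackers (HMD + two hand-held
# controllers) to per-joint actuation targets for a simulated avatar.
import torch
import torch.nn as nn

class SparseTrackerPolicy(nn.Module):
    def __init__(self, num_trackers=3, num_joints=24, hidden=256):
        super().__init__()
        # Assumed encoding: each tracker contributes a 3D position plus a
        # 6D rotation representation, i.e. 9 values per tracker.
        obs_dim = num_trackers * 9
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            # One 3D target (e.g., PD controller setpoint) per joint.
            nn.Linear(hidden, num_joints * 3),
        )

    def forward(self, tracker_obs):
        # tracker_obs: (batch, num_trackers * 9) flattened tracker transforms
        return self.net(tracker_obs)

policy = SparseTrackerPolicy()
obs = torch.randn(1, 27)     # one frame of HMD + two controller transforms
joint_targets = policy(obs)  # would drive the physics simulation's actuators
```

In an RL setup like the one the abstract describes, a policy of this shape would be queried once per simulation step, with the physics engine producing the avatar's full-body pose and a reward measuring how well the simulated trackers match the real ones; the pretraining stages mentioned above would initialize such a policy before reinforcement learning begins.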