Controlling the manner in which a character moves in a real-time animation system is a challenging task with useful applications. Existing style transfer systems require access to a reference content motion clip; however, in real-time systems the future motion content is unknown and liable to change with user input. In this work we present a style modelling system that uses an animation synthesis network to model motion content based on local motion phases. An additional style modulation network uses feature-wise transformations to modulate style in real time. To evaluate our method, we create and release a new style modelling dataset, 100STYLE, containing over 4 million frames of stylised locomotion data in 100 different styles that present a number of challenges for existing systems. To model these styles, we extend the local phase calculation with a contact-free formulation. In comparison to other methods for real-time style modelling, we show our system is more robust and efficient in its style representation while improving motion quality.
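The feature-wise transformations mentioned above can be illustrated with a minimal FiLM-style sketch: a style network emits a per-channel scale and shift that modulate the synthesis network's hidden features. The function name, shapes, and the use of NumPy here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def film_modulate(features, gamma, beta):
    """Feature-wise modulation: scale and shift each channel, out = gamma * x + beta.

    In this sketch, `gamma` and `beta` stand in for the outputs of a style
    modulation network; `features` stands in for hidden activations of the
    animation synthesis network. All names are hypothetical.
    """
    return gamma * features + beta

rng = np.random.default_rng(0)
features = rng.standard_normal((1, 8))  # hidden features from the synthesis network
gamma = rng.standard_normal((1, 8))     # per-channel scale predicted for a target style
beta = rng.standard_normal((1, 8))      # per-channel shift predicted for a target style

styled = film_modulate(features, gamma, beta)
print(styled.shape)  # (1, 8)
```

Because the modulation is a cheap per-channel affine map applied at inference time, the style can be changed on the fly without re-running or retraining the content model, which is what makes this family of transformations attractive for real-time use.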