Comprehensively and flexibly capturing the complex spatio-temporal dependencies of human motion is critical for multi-person motion prediction. Existing methods grapple with two primary limitations: i) inflexible spatio-temporal representation, due to their reliance on positional encodings to capture spatio-temporal information; and ii) high computational cost, stemming from the quadratic time complexity of conventional attention mechanisms. To overcome these limitations, we propose the Spatiotemporal-Untrammelled Mixture of Experts (ST-MoE), which flexibly explores the complex spatio-temporal dependencies in human motion while significantly reducing computational cost. To adaptively mine complex spatio-temporal patterns from human motion, our model incorporates four distinct types of spatiotemporal experts, each specializing in capturing different spatial or temporal dependencies. To limit the computational overhead of integrating multiple experts, we introduce bidirectional spatiotemporal Mamba blocks as experts: each expert combines shared bidirectional temporal and spatial Mamba modules in a distinct configuration, achieving both model efficiency and parameter economy. Extensive experiments on four multi-person benchmark datasets demonstrate that our approach not only outperforms the state of the art in accuracy but also reduces model parameters by 41.38% and achieves a 3.6x training speedup. The code is available at https://github.com/alanyz106/ST-MoE.
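The expert-sharing idea above can be sketched minimally. This is a hypothetical illustration, not the authors' implementation: two shared mixing modules (simple NumPy linear maps standing in for the paper's bidirectional temporal and spatial Mamba blocks) are composed in four distinct orders to form the four experts, and a softmax gate mixes their outputs. All names, shapes, and the gating scheme here are assumptions for illustration only.

```python
# Hypothetical sketch of experts built from shared temporal/spatial modules.
# The two linear mixers below are stand-ins for bidirectional Mamba blocks.
import numpy as np

rng = np.random.default_rng(0)
P, T, D = 3, 10, 16                       # persons, time steps, feature dim

Wt = rng.standard_normal((T, T)) * 0.1    # shared temporal mixer (stand-in)
Ws = rng.standard_normal((P, P)) * 0.1    # shared spatial mixer (stand-in)
Wg = rng.standard_normal((D, 4)) * 0.1    # gating-network weights (assumed)

def temporal(x):
    # Mix features along the time axis; x has shape (P, T, D).
    return np.einsum('ptd,ts->psd', x, Wt)

def spatial(x):
    # Mix features along the person axis.
    return np.einsum('ptd,pq->qtd', x, Ws)

# Four experts as distinct combinations of the two shared modules.
experts = [
    lambda x: temporal(x),                # temporal-only
    lambda x: spatial(x),                 # spatial-only
    lambda x: spatial(temporal(x)),       # temporal, then spatial
    lambda x: temporal(spatial(x)),       # spatial, then temporal
]

def st_moe(x):
    # Gate from pooled features, softmax over the four experts.
    logits = x.mean(axis=(0, 1)) @ Wg
    g = np.exp(logits - logits.max())
    g /= g.sum()
    return sum(w * e(x) for w, e in zip(g, experts))

x = rng.standard_normal((P, T, D))
y = st_moe(x)
print(y.shape)  # (3, 10, 16)
```

Because all four experts reuse the same two mixers, the only per-expert cost is the composition order, which is one plausible reading of how parameter sharing across experts keeps the parameter count low.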