Multimodal human action understanding is a fundamental problem in computer vision; its central challenge is to exploit the complementarity among diverse modalities effectively while keeping the model efficient. Most existing methods rely on simple late fusion to boost performance, which incurs substantial computational overhead. Early fusion with a single backbone shared across all modalities is efficient, but it struggles to reach competitive performance. To resolve this efficiency-effectiveness dilemma, we introduce Decomposition and Composition, a self-supervised framework for multimodal skeleton-based action representation learning. The Decomposition strategy decomposes the fused multimodal features into distinct unimodal features and aligns each with its ground-truth unimodal counterpart. Conversely, the Composition strategy integrates the unimodal features and uses them as self-supervised guidance to strengthen multimodal representation learning. Extensive experiments on the NTU RGB+D 60, NTU RGB+D 120, and PKU-MMD II datasets demonstrate that the proposed method achieves a favorable balance between computational cost and model performance.
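The abstract does not specify implementation details, so the following is only a minimal sketch of how the two objectives could look in PyTorch. It assumes a fused feature from a shared early-fusion backbone, per-modality "ground truth" features from unimodal encoders, cosine-similarity alignment losses, and hypothetical module names (DecompositionComposition, decomposers, composer); none of these choices are confirmed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecompositionComposition(nn.Module):
    """Illustrative sketch (not the authors' implementation):
    decompose a fused multimodal feature back into per-modality features
    and align them with unimodal targets (Decomposition); compose the
    unimodal features into a target that guides the fused multimodal
    representation (Composition)."""

    def __init__(self, dim=256, num_modalities=2):
        super().__init__()
        # one projection head per modality to decompose the fused feature
        self.decomposers = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_modalities)]
        )
        # composition head: fuse unimodal features into a guidance target
        self.composer = nn.Linear(dim * num_modalities, dim)

    def forward(self, fused_feat, unimodal_feats):
        # fused_feat: (B, dim) from the shared early-fusion backbone
        # unimodal_feats: list of (B, dim) features, one per modality
        # Decomposition loss: each decomposed feature should match its
        # corresponding unimodal feature (cosine alignment, stop-gradient target).
        decomp_loss = 0.0
        for head, target in zip(self.decomposers, unimodal_feats):
            pred = F.normalize(head(fused_feat), dim=-1)
            tgt = F.normalize(target.detach(), dim=-1)
            decomp_loss = decomp_loss + (1 - (pred * tgt).sum(-1)).mean()

        # Composition loss: the composed unimodal features act as a
        # self-supervised target for the fused multimodal feature.
        composed = self.composer(torch.cat(unimodal_feats, dim=-1))
        comp_loss = (
            1 - (F.normalize(fused_feat, dim=-1)
                 * F.normalize(composed.detach(), dim=-1)).sum(-1)
        ).mean()
        return decomp_loss + comp_loss
```

In this sketch the two losses would simply be added to the self-supervised training objective of the shared backbone; the actual loss formulation, fusion operator, and training schedule used by the paper may differ.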