We propose FootFormer, a cross-modality approach for jointly predicting human motion dynamics directly from visual input. On multiple datasets, FootFormer achieves statistically significantly better or equivalent estimates of foot pressure distributions, foot contact maps, and center of mass (CoM), compared with existing methods that estimate only one or two of those quantities. Furthermore, FootFormer achieves SOTA performance in estimating the stability-predictive components (center of pressure (CoP), CoM, and base of support (BoS)) used in classic kinesiology metrics. Code and data are available at https://github.com/keatonkraiger/Vision-to-Stability.git.
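As a minimal illustration of how the predicted components feed classic kinesiology stability metrics, the sketch below checks whether the CoP lies inside the BoS polygon and computes the CoM–CoP distance. This is not the paper's implementation; the polygon representation, coordinates, and function names are all hypothetical.

```python
# Hypothetical sketch (not from the paper): a classic stability check,
# assuming the BoS is a 2-D ground-plane polygon and the CoP / projected
# CoM are 2-D points. All values below are made up for illustration.

def point_in_polygon(pt, poly):
    """Ray-casting test: is point pt inside the polygon poly?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of pt.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def com_cop_distance(com, cop):
    """Euclidean distance between projected CoM and CoP (a common stability cue)."""
    return ((com[0] - cop[0]) ** 2 + (com[1] - cop[1]) ** 2) ** 0.5

# Hypothetical rectangular BoS spanning both feet (units: meters).
bos = [(0.0, 0.0), (0.3, 0.0), (0.3, 0.5), (0.0, 0.5)]
cop = (0.15, 0.25)   # CoP near the middle of the BoS -> stable stance
com = (0.18, 0.30)   # ground-plane projection of the CoM

print(point_in_polygon(cop, bos))            # True
print(round(com_cop_distance(com, cop), 3))  # 0.058
```

A CoP inside the BoS with a small CoM–CoP offset indicates a stable posture; real analyses would use the convex hull of the predicted foot contact regions as the BoS.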