As a unique biometric that can be perceived at a distance, gait has broad applications in person authentication, social security, and beyond. Existing gait recognition methods suffer from changes in viewpoint and clothing, and they rarely consider extracting diverse motion features, a fundamental characteristic of gait, from gait sequences. This paper proposes a novel motion modeling method to extract discriminative and robust representations. Specifically, we first extract motion features from the encoded motion sequences in a shallow layer, and then continuously enhance these motion features in deeper layers. This motion modeling approach is independent of mainstream work on building network architectures, so it can be applied to any backbone to improve gait recognition performance. In this paper, we combine motion modeling with a commonly used backbone~(GaitGL), denoted GaitGL-M, to illustrate the approach. Extensive experimental results on two commonly used cross-view gait datasets demonstrate the superior performance of GaitGL-M over existing state-of-the-art methods.
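The abstract describes a plug-in scheme: motion features are extracted from encoded sequences in a shallow layer and then repeatedly enhanced in deeper layers of an arbitrary backbone. The sketch below illustrates this idea only; the concrete operators (temporal differencing for extraction, a residual temporal convolution for enhancement) and all module names are assumptions for illustration, not the paper's actual design.

\begin{verbatim}
# Hypothetical sketch of the plug-in motion modeling idea, assuming
# temporal differencing (shallow extraction) and residual temporal
# convolution (deep enhancement); not the paper's actual operators.
import torch
import torch.nn as nn


class MotionExtraction(nn.Module):
    """Shallow-layer motion extraction via frame-to-frame differences (assumed)."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, H, W) encoded gait sequence
        diff = x[:, :, 1:] - x[:, :, :-1]                  # temporal difference
        diff = torch.cat([diff, diff[:, :, -1:]], dim=2)   # pad back to T frames
        return x + self.proj(diff)                         # fuse motion with appearance


class MotionEnhancement(nn.Module):
    """Deep-layer motion enhancement via a residual temporal convolution (assumed)."""

    def __init__(self, channels: int):
        super().__init__()
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.relu(self.temporal(x))


if __name__ == "__main__":
    feats = torch.randn(2, 32, 30, 64, 44)   # (N, C, T, H, W) silhouette features
    feats = MotionExtraction(32)(feats)      # shallow layer: extract motion
    for _ in range(2):                       # deeper layers: keep enhancing motion
        feats = MotionEnhancement(32)(feats)
    print(feats.shape)                       # torch.Size([2, 32, 30, 64, 44])
\end{verbatim}

Because both modules preserve the feature shape, they could in principle be inserted between stages of any backbone (e.g. GaitGL), which is the sense in which the abstract calls the method backbone-independent.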