Affect is often expressed via non-verbal body language such as actions and gestures, which are vital indicators of human behavior. Recent studies on the recognition of fine-grained actions/gestures in monocular images have mainly focused on modeling the spatial configuration of body parts representing body pose, human-object interactions, and variations in local appearance. The results show that this is a brittle approach, since it relies on accurate body-part/object detection. In this work, we argue that there exist local discriminative semantic regions whose "informativeness" can be evaluated by an attention mechanism for inferring fine-grained gestures/actions. To this end, we propose a novel end-to-end \textbf{Regional Attention Network (RAN)}, a fully convolutional neural network (CNN) that combines multiple contextual regions through an attention mechanism, focusing on the parts of the image that are most relevant to a given task. Our regions consist of one or more consecutive cells and are adapted from the strategies used in computing the HOG (Histogram of Oriented Gradient) descriptor. The model is extensively evaluated on ten datasets belonging to three different scenarios: 1) head pose recognition, 2) driver state recognition, and 3) human action and facial expression recognition. The proposed approach outperforms the state of the art by a considerable margin on different metrics.
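As a rough illustration of the regional attention idea described above (a minimal sketch, not the authors' implementation), the snippet below pools a convolutional feature map into a HOG-style grid of cells, treats each cell as a candidate region, and fuses the regions with learned attention weights; the module name, grid size, and scoring layer are assumptions made only for illustration, and the actual RAN regions may span several consecutive cells.

```python
# Minimal sketch of attention-weighted pooling over HOG-style grid regions.
# All names, sizes, and the one-cell-per-region simplification are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionalAttentionPooling(nn.Module):
    """Pool a CNN feature map over a fixed grid of cells and fuse the
    resulting regional features with learned attention weights."""
    def __init__(self, in_channels: int, grid: int = 4):
        super().__init__()
        self.grid = grid                         # grid x grid cells, as in HOG
        self.score = nn.Linear(in_channels, 1)   # attention score per region

    def forward(self, feat: torch.Tensor) -> torch.Tensor:  # feat: (B, C, H, W)
        cells = F.adaptive_avg_pool2d(feat, self.grid)       # (B, C, g, g)
        regions = cells.flatten(2).transpose(1, 2)           # (B, g*g, C)
        weights = torch.softmax(self.score(regions), dim=1)  # (B, g*g, 1)
        return (weights * regions).sum(dim=1)                # (B, C)

# Usage example with a random feature map
if __name__ == "__main__":
    pool = RegionalAttentionPooling(in_channels=512, grid=4)
    fused = pool(torch.randn(2, 512, 14, 14))
    print(fused.shape)  # torch.Size([2, 512])
```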