In this paper, HeadPosr is proposed to predict head poses from a single RGB image. \textit{HeadPosr} uses a novel architecture that includes a transformer encoder. Concretely, it consists of: (1) a backbone; (2) a connector; (3) a transformer encoder; and (4) a prediction head. The significance of using a transformer encoder for HPE is studied. An extensive ablation study is performed, varying the (1) number of encoders; (2) number of heads; (3) position embeddings; (4) activations; and (5) input channel size of the transformer used in HeadPosr. Further studies on (1) different backbones and (2) different learning rates are also presented. The experiments and ablation studies are conducted on three widely used open-source HPE datasets: 300W-LP, AFLW2000, and BIWI. The experiments show that \textit{HeadPosr} outperforms all state-of-the-art methods, both landmark-free and those based on landmarks or depth estimation, on the AFLW2000 and BIWI datasets when trained on 300W-LP. It also achieves the best results when averaging over the compared datasets, setting a benchmark for HPE and demonstrating the effectiveness of transformers over the state of the art.
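The four-stage pipeline described above (backbone, connector, transformer encoder, prediction head) can be illustrated with a minimal, untrained numpy sketch. All shapes, projections, and hyperparameter values here are assumptions for illustration, not the paper's actual implementation; the backbone is replaced by a random projection, and the encoder layer omits learned Q/K/V weights, layer norm, and the feed-forward sublayer.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def positional_encoding(seq_len, d_model):
    # Standard sinusoidal position embedding (one of the variants ablated).
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def encoder_layer(x, num_heads):
    # Toy multi-head self-attention with identity Q/K/V projections,
    # followed by a residual connection (norm and MLP sublayer omitted).
    seq, d = x.shape
    dh = d // num_heads
    out = np.empty_like(x)
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        att = softmax(x[:, s] @ x[:, s].T / np.sqrt(dh))
        out[:, s] = att @ x[:, s]
    return x + out

def headposr_sketch(image, d_model=32, num_encoders=2, num_heads=4):
    # (1) Backbone: stand-in random projection from pixels to 16 tokens.
    tokens = image.reshape(16, -1) @ rng.standard_normal(
        (image.size // 16, d_model)) * 0.01
    # (2) Connector: add position information before the encoder.
    tokens = tokens + positional_encoding(*tokens.shape)
    # (3) Transformer encoder stack.
    for _ in range(num_encoders):
        tokens = encoder_layer(tokens, num_heads)
    # (4) Prediction head: pool tokens and regress yaw, pitch, roll.
    w_head = rng.standard_normal((d_model, 3)) * 0.01
    return tokens.mean(axis=0) @ w_head

angles = headposr_sketch(rng.standard_normal((64, 64, 3)))
print(angles.shape)  # a 3-vector of (yaw, pitch, roll) angles
```

The ablated quantities from the abstract map directly onto `num_encoders`, `num_heads`, the choice of `positional_encoding`, and `d_model` (the input channel size).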