Communicative gestures and speech acoustics are tightly linked. Our objective is to predict the timing of gestures from the acoustics; that is, we want to predict when a certain gesture occurs. We develop a model based on a recurrent neural network with an attention mechanism. The model is trained on a corpus of natural dyadic interactions in which the speech acoustics and the gesture phases and types have been annotated. The input of the model is a sequence of acoustic features and the output is a sequence of gesture classes. The classes we use for the model output are based on a combination of gesture phases and gesture types. We use a sequence comparison technique to evaluate the model's performance. We find that the model predicts certain gesture classes better than others. Ablation studies reveal that fundamental frequency is a relevant feature for the gesture prediction task. In another sub-experiment, we find that treating eyebrow movements as beat gestures improves the performance. We also find that a model trained on the data of one speaker generalizes to the other speaker of the same conversation. Finally, we conduct a subjective experiment to measure how respondents judge the naturalness, the temporal consistency, and the semantic consistency of the gesture timing generated for a virtual agent. Our respondents rate the output of our model favorably.
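The abstract does not specify which sequence comparison technique is used; a common choice for comparing a predicted label sequence against a reference one is edit (Levenshtein) distance over the gesture-class symbols. The sketch below is a minimal illustration under that assumption; the gesture-class labels are invented for the example and are not taken from the paper's annotation scheme.

```python
def levenshtein(ref, hyp):
    """Edit distance between two gesture-class sequences (dynamic programming)."""
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i            # delete all remaining reference symbols
    for j in range(n + 1):
        dp[0][j] = j            # insert all remaining hypothesis symbols
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    return dp[m][n]

# Hypothetical reference vs. predicted gesture-class sequences:
ref = ["rest", "beat", "stroke", "rest"]
hyp = ["rest", "stroke", "stroke", "rest"]
print(levenshtein(ref, hyp))  # one substitution -> 1
```

Normalizing this distance by the reference length gives an error rate analogous to word error rate in speech recognition, which makes scores comparable across sequences of different lengths.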