Expressing various facial emotions is an important social ability for efficient communication between humans. A key challenge in human-robot interaction research is providing androids with the ability to make various human-like facial expressions for efficient communication with humans. The android Nikola, which we have developed, is equipped with many actuators for facial muscle control. While this enables Nikola to simulate various human expressions, it also complicates identification of the optimal parameters for producing desired expressions. Here, we propose a novel method that automatically optimizes the facial expressions of our android. We use a machine vision algorithm to evaluate the magnitudes of seven basic emotions, and employ a Bayesian optimization algorithm to identify the parameters that produce the most convincing facial expressions. Evaluations by naive human participants demonstrate that our method improves the rated strength of the android's facial expressions of anger, disgust, sadness, and surprise compared with the previous method, which relied on Ekman's theory and parameter adjustments by a human expert.
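The pipeline described above can be sketched as a simple optimization loop: actuator commands are treated as the search variables, and the machine-vision emotion score for a target expression serves as the objective to maximize. The following is a minimal, hypothetical sketch using scikit-optimize's `gp_minimize`; the helpers `set_face_actuators` and `estimate_emotion_scores`, the actuator count, and the [0, 1] parameter range are all illustrative assumptions, not the actual Nikola interface or the method's exact implementation.

```python
"""Hedged sketch of machine-vision-guided Bayesian optimization of facial expressions.
The two helper functions are hypothetical stand-ins (stubbed here so the script runs);
they are not part of any real android API."""
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

N_ACTUATORS = 10      # assumed number of facial actuators (illustrative)
TARGET = "anger"      # emotion whose machine-vision score we maximize
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]
rng = np.random.default_rng(0)

def set_face_actuators(params):
    """Hypothetical: send normalized actuator commands to the android's face."""
    pass

def estimate_emotion_scores():
    """Hypothetical: machine-vision scores for the seven basic emotions,
    estimated from a camera image of the android's face (random stub here)."""
    return dict(zip(EMOTIONS, rng.dirichlet(np.ones(len(EMOTIONS)))))

# Assume each actuator command is normalized to [0, 1].
space = [Real(0.0, 1.0, name=f"actuator_{i}") for i in range(N_ACTUATORS)]

def objective(params):
    """Negative target-emotion score, since gp_minimize performs minimization."""
    set_face_actuators(params)
    scores = estimate_emotion_scores()
    return -scores[TARGET]

result = gp_minimize(objective, space, n_calls=50, random_state=0)
best_params, best_score = result.x, -result.fun
```

In practice the stubbed scorer would be replaced by the actual facial-expression classifier evaluating camera images of the android, and the loop would be repeated once per target emotion.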