We are approaching a future in which social robots will become increasingly widespread in many aspects of our daily lives, including education, healthcare, work, and personal use. All of these practical applications require humans and robots to collaborate in human environments, where social interaction is unavoidable. Alongside verbal communication, successful social interaction is closely coupled with the interplay between nonverbal perception and action mechanisms, such as observing a partner's gaze behaviour and following their attention, or coordinating the form and function of hand gestures. Humans perform nonverbal communication instinctively and adaptively, with little conscious effort. For robots to succeed in our social landscape, they should therefore engage in social interactions in a humanlike way, with increasing levels of autonomy. In particular, nonverbal gestures are expected to endow social robots with the capability of emphasizing their speech or conveying their intentions. Motivated by this, our research focuses on modeling human behaviour in social interactions, specifically on forecasting human nonverbal social signals during dyadic interactions, with the overarching goal of developing robotic interfaces that learn to imitate human dyadic interactions. Such an approach ensures that the messages encoded in a robot's gestures can be perceived by interacting partners easily and transparently, which could improve the partners' perception of the robot and enhance the outcomes of the social interaction.