Quadruped robots are currently used in industrial robotics as mechanical aids to automate routine tasks. However, the use of such robots in domestic settings remains largely a subject of research. This paper discusses the design and virtual simulation of a quadruped robot capable of detecting and understanding human emotions, generating its own gait, and responding via sounds and expressions on a screen. To this end, we combine reinforcement learning with software engineering concepts to simulate a quadruped robot that can understand emotions, navigate various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish a framework for simulating an emotionally intelligent quadruped robot that responds to audio-visual stimuli with motor or audio output. The speech emotion detection was not as performant as ERANNs or Zeta Policy learning, but still achieved an accuracy of 63.5%. The video emotion detection system produced results nearly on par with the state of the art, with an accuracy of 99.66%. Owing to its on-policy learning process, the PPO algorithm learned rapidly, allowing the simulated dog to exhibit a remarkably smooth gait across different cadences and variations. The quadruped robot responded to the generated stimuli as predicted, satisfying the aim of this work.