Deep reinforcement learning (RL) algorithms enable the development of fully autonomous agents that can interact with their environments. Brain-computer interface (BCI) systems decode implicit human brain signals regardless of the explicit environment. In this study, we integrated deep RL and BCI to improve beneficial human intervention in autonomous systems and to improve the decoding of brain activity by accounting for environmental factors. Shared autonomy was established between the action command decoded from the electroencephalography (EEG) signals of the human agent and the action generated by a twin delayed deep deterministic policy gradient (TD3) agent for a given environment. Our proposed copilot control scheme with a full blocker (Co-FB) significantly outperformed both standalone EEG control with no blocker (EEG-NB) and standalone TD3 control. The Co-FB scheme achieved a higher target-approaching score, a lower failure rate, and a lower human workload than the EEG-NB scheme, and a higher invisible-target score and a greater level of allowed human intervention than the TD3 scheme. We also proposed a disparity d-index to evaluate the effect of contradictory agent decisions on the control accuracy and authority of the copilot model. We found a significant correlation between the control authority of the TD3 agent and the improvement in human EEG classification performance with respect to the d-index, and we observed that shifting control authority to the TD3 agent improved performance when BCI decoding was suboptimal. These findings indicate that the copilot system can effectively handle complex environments and that BCI performance can be improved by considering environmental factors. Future work should employ a continuous action space and different multi-agent approaches to evaluate copilot performance.
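To make the shared-autonomy arbitration concrete, the minimal Python sketch below illustrates one plausible reading of the Co-FB scheme: at each step the copilot compares the EEG-decoded command with the TD3 policy's output, and when the two disagree, the full blocker overrides the human command with the TD3 action. The function names (co_fb_step, disparity_index), the discrete action encoding, the override rule, and the disparity formula are all illustrative assumptions; the abstract names the d-index but does not specify the actual arbitration logic or its formula.

```python
import numpy as np

# Minimal sketch of a Co-FB-style arbitration loop (an assumption for
# illustration, not the authors' implementation). Actions are assumed to
# be discrete command indices (e.g., 0=left, 1=right, 2=forward); the
# paper's actual action encoding is not given in the abstract.

def co_fb_step(eeg_action: int, td3_action: int) -> int:
    """Full-blocker arbitration: if the human (EEG) and autonomous (TD3)
    commands disagree, the blocker grants authority to the TD3 agent;
    otherwise the agreed-upon human command passes through."""
    return eeg_action if eeg_action == td3_action else td3_action

def disparity_index(eeg_actions, td3_actions) -> float:
    """Illustrative disparity measure: the fraction of steps on which the
    two agents issued contradictory commands. This formula is a stand-in
    assumption, not the paper's d-index definition."""
    eeg_actions = np.asarray(eeg_actions)
    td3_actions = np.asarray(td3_actions)
    return float(np.mean(eeg_actions != td3_actions))

# Usage example: over a short hypothetical episode, log both agents'
# commands, apply the blocker, and measure how often they conflicted.
eeg_log = [0, 1, 2, 2, 1]   # hypothetical EEG-decoded commands
td3_log = [0, 2, 2, 2, 0]   # hypothetical TD3 policy outputs
executed = [co_fb_step(h, a) for h, a in zip(eeg_log, td3_log)]
print(executed, disparity_index(eeg_log, td3_log))  # -> [0, 2, 2, 2, 0] 0.4
```

Under this reading, shifting authority toward the TD3 agent simply means letting the blocker resolve more of the disagreements, which matches the abstract's observation that doing so helps when BCI decoding is suboptimal.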