Accurate control of autonomous marine robots remains challenging due to the complex dynamics of the environment. In this paper, we propose a Deep Reinforcement Learning (DRL) approach to train a controller for autonomous surface vessel (ASV) trajectory tracking and compare its performance with an advanced nonlinear model predictive controller (NMPC) in real environments. Taking into account environmental disturbances (e.g., wind, waves, and currents), noisy measurements, and the non-ideal actuators present in the physical ASV, we carefully design several effective reward functions for DRL tracking control policies. The control policies were trained in a simulation environment with diverse tracking trajectories and disturbances. The performance of the DRL controller was verified and compared with that of the NMPC both in simulations with model-based environmental disturbances and in natural waters. Simulations show that the DRL controller achieves 53.33% lower tracking error than the NMPC. Experimental results further show that, compared to the NMPC, the DRL controller achieves 35.51% lower tracking error, indicating that DRL controllers offer better disturbance rejection in river environments than NMPC.
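The abstract mentions carefully designed reward functions for trajectory-tracking control policies. The exact formulations are given in the paper body; purely as a hedged illustration of the general shape such a reward can take, the sketch below combines a cross-track-error penalty with a control-effort (actuator smoothness) penalty. All function names, weights, and the specific penalty terms here are assumptions for illustration, not the paper's actual reward design.

```python
import math

def tracking_reward(pos, ref, action, prev_action, k_e=1.0, k_u=0.1):
    """Illustrative (hypothetical) tracking reward, not the paper's design.

    pos, ref     : (x, y) current and reference positions in metres
    action       : current actuator command vector
    prev_action  : previous actuator command vector
    k_e, k_u     : assumed weights for error and control-effort penalties
    """
    # Cross-track error: Euclidean distance to the reference point.
    err = math.hypot(pos[0] - ref[0], pos[1] - ref[1])
    # Control-effort penalty discourages abrupt actuator changes,
    # which matters for the non-ideal actuators noted in the abstract.
    effort = sum((a - b) ** 2 for a, b in zip(action, prev_action))
    return -k_e * err - k_u * effort

# Smaller tracking error and smoother actuation yield a higher reward.
r_good = tracking_reward((0.1, 0.0), (0.0, 0.0), [0.5], [0.5])
r_bad = tracking_reward((2.0, 1.0), (0.0, 0.0), [1.0], [0.0])
```

In practice such shaping terms are tuned per vehicle; the weights trade off aggressive error correction against actuator wear and chattering under disturbances.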