In narrow spaces, motion planning based on the traditional hierarchical autonomy stack can cause collisions due to noise in mapping, localization, and control, especially for car-like Ackermann-steering robots, which are constrained by non-holonomic kinematics and a non-convex, rectangular footprint. To tackle these problems, we leverage deep reinforcement learning, which has proven effective for sequential decision-making, to let the robot explore narrow spaces without a given map or destination while avoiding collisions. Specifically, building on our Ackermann-steering, rectangular-shaped ZebraT robot and its Gazebo simulator, we propose a rectangular safety region to represent states and detect collisions for rectangular-shaped robots, together with a carefully crafted reward function for reinforcement learning that requires no waypoint guidance. For validation, the robot was first trained on a simulated narrow track. The trained model was then transferred to other simulated tracks, where it outperformed traditional baselines, both classical and learning-based. Finally, we demonstrate the trained model in the real world on our ZebraT robot.
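One way to picture the rectangular safety region for collision detection is as the robot's footprint inflated by a safety margin: any lidar return falling inside the inflated rectangle signals an (imminent) collision. The sketch below is a minimal illustration under that assumption; the dimensions, margin, and function names are hypothetical and not taken from the paper.

```python
# Hedged sketch: a rectangular safety region collision check for a
# rectangular-shaped robot. The region is the robot footprint inflated by a
# safety margin; a lidar point inside it counts as a collision.
# All dimensions below are illustrative placeholders.

def in_safety_region(px, py, length=0.6, width=0.4, margin=0.1):
    """Return True if point (px, py), expressed in the robot frame
    (origin at the robot center, x forward), lies inside the robot's
    rectangle inflated by `margin` on every side."""
    half_l = length / 2 + margin   # half-extent along the robot's heading
    half_w = width / 2 + margin    # half-extent across the robot
    return abs(px) <= half_l and abs(py) <= half_w

def detect_collision(scan_points, **footprint):
    """scan_points: iterable of (x, y) lidar returns in the robot frame.
    Returns True if any return falls inside the safety region."""
    return any(in_safety_region(x, y, **footprint) for x, y in scan_points)
```

Because the region is axis-aligned in the robot frame, the check is two absolute-value comparisons per point, cheap enough to run on every scan during both training and deployment.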