Obstacle avoidance is a critical component of the navigation stack required for mobile robots to operate effectively in complex and unknown environments. In this research, three end-to-end Convolutional Neural Networks (CNNs) were trained and evaluated offline, then deployed on a differential-drive mobile robot for real-time obstacle avoidance. The networks generate low-level steering commands directly from synchronized color and depth images acquired by an Intel RealSense D415 RGB-D camera in diverse environments. Offline evaluation showed that the NetConEmb model achieved the best performance, with a notably low median absolute error (MedAE) of $0.58 \times 10^{-3}$ rad/s. In comparison, the lighter NetEmb architecture, which reduces the number of trainable parameters by approximately 25\% and converges faster, produced comparable results, with a root mean square error (RMSE) of $21.68 \times 10^{-3}$ rad/s, close to the $21.42 \times 10^{-3}$ rad/s obtained by NetConEmb. Real-time navigation trials further confirmed NetConEmb's robustness, achieving a 100\% success rate in both known and unknown environments, whereas NetEmb and NetGated succeeded only in navigating the known environment.
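For clarity, the offline error metrics quoted above follow their standard definitions; a minimal statement, with the symbols $\hat{\omega}_i$ (predicted angular velocity), $\omega_i$ (ground-truth angular velocity), and $N$ (number of evaluation samples) introduced here for illustration rather than taken from the paper, is
\[
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{\omega}_i - \omega_i\right)^2},
\qquad
\mathrm{MedAE} = \operatorname*{median}_{1 \le i \le N}\left|\hat{\omega}_i - \omega_i\right|,
\]
with both quantities expressed in rad/s, consistent with the values reported above.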