Autonomous navigation and path-planning around non-cooperative space objects is an enabling technology for on-orbit servicing and space debris removal systems. The navigation task includes the determination of target object motion, the identification of target object features suitable for grasping, and the identification of collision hazards and other keep-out zones. Given this knowledge, a chaser spacecraft can be guided toward a capture location without damaging the target object and without unduly disrupting the operations of a servicing target, for example by covering up solar arrays or communication antennas. One way to autonomously achieve target identification, characterization, and feature recognition is through artificial intelligence algorithms. This paper discusses how the combination of cameras and machine learning algorithms can accomplish the relative navigation task. The performance of two deep learning-based object detection algorithms, the Faster Region-based Convolutional Neural Network (Faster R-CNN) and You Only Look Once (YOLOv5), is tested using experimental data obtained in formation flight simulations in the ORION Lab at Florida Institute of Technology. The simulation scenarios vary the yaw motion of the target object, the chaser approach trajectory, and the lighting conditions in order to test the algorithms across a wide range of realistic and performance-limiting situations. The analysis uses mean average precision (mAP) metrics to compare the performance of the object detectors. The paper discusses the path toward implementing the feature recognition algorithms and integrating them into the spacecraft Guidance, Navigation and Control system.
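The abstract does not specify which mAP variant is used; as a minimal sketch, per-class average precision at a single intersection-over-union (IoU) threshold — the quantity that is averaged over classes (and, in some conventions, over IoU thresholds) to obtain mAP — could be computed as follows. All function names and the 0.5 IoU threshold are illustrative assumptions, not taken from the paper:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(detections, gt_boxes, iou_thresh=0.5):
    """AP for one class: detections are (confidence, box) pairs,
    gt_boxes are ground-truth boxes; AP is the area under the
    precision-recall curve traced in descending confidence order."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = set()        # ground-truth boxes already claimed by a detection
    tp = []                # 1 if detection is a true positive, else 0
    for conf, box in detections:
        best_iou, best_idx = 0.0, -1
        for i, gt in enumerate(gt_boxes):
            if i in matched:
                continue
            v = iou(box, gt)
            if v > best_iou:
                best_iou, best_idx = v, i
        if best_iou >= iou_thresh:
            matched.add(best_idx)
            tp.append(1)
        else:
            tp.append(0)
    # Accumulate precision * recall-increment down the ranked list.
    ap, cum_tp, prev_recall = 0.0, 0, 0.0
    for k, hit in enumerate(tp):
        cum_tp += hit
        precision = cum_tp / (k + 1)
        recall = cum_tp / len(gt_boxes)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

Averaging this AP over all object classes (e.g. grasping features such as solar array joints or antenna mounts) yields the mAP figure used to compare the two detectors.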