This work presents the most recent advances in the Robotic Testbed for Rendezvous and Optical Navigation (TRON) at Stanford University, the first robotic testbed capable of validating machine learning algorithms for spaceborne optical navigation. The TRON facility consists of two 6-degrees-of-freedom KUKA robot arms and a set of Vicon motion capture cameras that reconfigure an arbitrary relative pose between a camera and a target mockup model. The facility includes multiple Earth albedo light boxes and a sun lamp to recreate high-fidelity spaceborne illumination conditions. After an overview of the facility, this work details the multi-source calibration procedure that enables estimation of the relative pose between the target and the camera with millimeter-level position and millidegree-level orientation accuracy. Finally, a comparative analysis of the synthetic and TRON-simulated imagery is performed using a Convolutional Neural Network (CNN) pre-trained on the synthetic images. The results show a considerable gap in the CNN's performance across the two image domains, suggesting that TRON-simulated imagery can be used to validate the robustness of machine learning algorithms trained on more easily accessible synthetic imagery from computer graphics.
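To make the final comparative-analysis step concrete, the following is a minimal sketch (not the paper's actual evaluation code) of how a pose-estimation CNN pre-trained on synthetic renders might be scored on both a synthetic test set and a TRON-acquired test set. The model interface (returning a unit quaternion and a translation), the data loaders, and the error metrics are illustrative assumptions.

```python
import torch

def rotation_error_deg(q_pred, q_true):
    """Angular distance in degrees between batches of unit quaternions."""
    dot = (q_pred * q_true).sum(dim=-1).abs().clamp(max=1.0)
    return torch.rad2deg(2.0 * torch.acos(dot))

def translation_error_m(t_pred, t_true):
    """Euclidean distance in meters between predicted and true positions."""
    return (t_pred - t_true).norm(dim=-1)

@torch.no_grad()
def evaluate(model, loader, device="cpu"):
    """Mean orientation/position errors of a pose CNN over one image domain."""
    model.eval()
    rot_errs, pos_errs = [], []
    for images, q_true, t_true in loader:
        q_pred, t_pred = model(images.to(device))  # assumed (quaternion, translation) output
        rot_errs.append(rotation_error_deg(q_pred.cpu(), q_true))
        pos_errs.append(translation_error_m(t_pred.cpu(), t_true))
    return torch.cat(rot_errs).mean().item(), torch.cat(pos_errs).mean().item()

# Hypothetical usage: score the same network on both domains.
# model = load_pretrained_cnn()                 # trained on synthetic renders only
# rot_syn, pos_syn = evaluate(model, synthetic_loader)
# rot_tron, pos_tron = evaluate(model, tron_loader)
# A large increase from (rot_syn, pos_syn) to (rot_tron, pos_tron)
# indicates the synthetic-to-testbed domain gap discussed above.
```

Under these assumptions, the domain gap is simply the difference in mean pose error between the two test sets, which is the kind of comparison the abstract refers to.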