Images of spacecraft photographed from other spacecraft operating in outer space are difficult to come by, especially at the scale typically required for deep learning tasks. Semantic image segmentation, object detection and localization, and pose estimation are well-researched areas with strong results across many applications, and they would be very useful in autonomous spacecraft operation and rendezvous. However, recent studies show that these strong results in broad, common domains may generalize poorly even to specific industrial applications on Earth. To address this, we propose a method for generating synthetic image data that are labelled for semantic segmentation and generalizable to other tasks, and we provide a prototype synthetic image dataset consisting of 2D monocular images of unmanned spacecraft, in order to enable further research into autonomous spacecraft rendezvous. We also present a strong benchmark result (Sørensen-Dice coefficient of 0.8723) on these synthetic data, suggesting that it is feasible to train well-performing image segmentation models for this task, particularly when the target spacecraft and its configuration are known.
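For reference, the Sørensen-Dice coefficient cited above measures the overlap between a predicted segmentation mask and the ground-truth mask, ranging from 0 (no overlap) to 1 (perfect agreement). The sketch below is a minimal illustrative implementation in NumPy, not the paper's evaluation code; the function name, the example masks, and the epsilon guard against empty masks are our own assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Sørensen-Dice coefficient between two binary segmentation masks.

    Dice = 2 * |A ∩ B| / (|A| + |B|). The small eps avoids division by
    zero when both masks are empty (an assumption, not from the paper).
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Hypothetical usage: a 4x4 predicted mask vs. ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
print(f"Dice: {dice_coefficient(pred, truth):.4f}")  # 2*5/(5+6) ≈ 0.9091
```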