Deep vision models are now mature enough to be integrated into industrial and even critical applications such as autonomous navigation. Yet, collecting and labeling the data needed to train such models requires too much effort and cost for a single company or product. This drawback is even more significant in critical applications, where the training data must cover all possible conditions, including rare scenarios. From this perspective, generating synthetic images is an appealing solution, since it allows cheap yet reliable coverage of all conditions and environments, provided the impact of the synthetic-to-real distribution shift is mitigated. In this article, we consider the case of runway detection, a critical component of the autonomous landing systems developed by aircraft manufacturers. We propose an image generation approach based on a commercial flight simulator that complements a small set of annotated real images. By controlling the image generation process and the integration of real and synthetic data, we show that standard object detection models can achieve accurate predictions. We also evaluate their robustness to adverse conditions, in our case nighttime images, which were not represented in the real data, and demonstrate the benefit of a customized domain adaptation strategy.