As deep learning continues to dominate state-of-the-art computer vision tasks, it is becoming an essential building block of robotic perception. This raises important questions about the safety and reliability of learning-based perception systems. An established field studies design-time safety certification and convergence guarantees for complex software systems. However, the unknown future deployment environments of an autonomous system and the complexity of learning-based perception make it problematic to extend design-time verification to run-time. In the face of this challenge, increasing attention is being devoted to run-time monitoring of the performance and reliability of perception systems, with several trends emerging in the literature. This paper attempts to identify these trends and summarise the various approaches to the topic.