The focus of this contribution is on camera simulation as it comes into play in simulating autonomous robots for their virtual prototyping. We propose a camera model validation methodology based on the performance of a perception algorithm and the context in which that performance is measured. This approach differs from traditional validation of synthetic images, which is often carried out at the pixel or feature level and tends to require matched pairs of synthetic and real images. Owing to the high cost and practical constraints of acquiring paired images, the proposed approach operates on datasets that are not necessarily paired. Within a real and a simulated dataset, A and B, respectively, we find subsets Ac and Bc of similar content and judge, statistically, the perception algorithm's response to these similar subsets. This validation approach yields a statistical measure of performance similarity, as well as a measure of similarity between the content of A and B. The methodology is demonstrated using images generated with Chrono::Sensor and a scaled autonomous vehicle, with an object detector serving as the perception algorithm. The results demonstrate the ability to quantify (i) differences between simulated and real data; (ii) the propensity of training methods to mitigate the sim-to-real gap; and (iii) the context overlap between two datasets.
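To make the statistical comparison concrete, the following is a minimal, illustrative sketch (not the paper's actual implementation) of judging a perception algorithm's response on two content-matched but unpaired subsets. The per-image scores, subset names, and the choice of a permutation test on the mean gap are all assumptions for illustration; the paper does not specify this particular test.

```python
import random
import statistics

def permutation_test(scores_real, scores_sim, n_perm=10000, seed=0):
    """Two-sample permutation test on the absolute difference of mean
    per-image perception scores (e.g., detector confidence or IoU).
    Returns the observed gap and an approximate two-sided p-value."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(scores_real) - statistics.mean(scores_sim))
    pooled = list(scores_real) + list(scores_sim)
    n = len(scores_real)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        gap = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if gap >= observed:
            hits += 1
    # Add-one smoothing keeps the p-value strictly positive
    return observed, (hits + 1) / (n_perm + 1)

# Hypothetical per-image detection scores on content-matched subsets
# Ac (real) and Bc (simulated); real data would come from the detector.
real_scores = [0.82, 0.78, 0.85, 0.80, 0.79, 0.84]
sim_scores = [0.80, 0.76, 0.83, 0.81, 0.77, 0.82]
gap, p = permutation_test(real_scores, sim_scores)
print(f"performance gap = {gap:.3f}, p = {p:.3f}")
```

A small p-value would indicate that the real-vs-simulated performance gap is unlikely under the null hypothesis that the camera model induces no difference in the perception algorithm's response; a large p-value is consistent with the simulated images being perceptually interchangeable with the real ones for this algorithm.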