Road detection, or traversability analysis, has been a key technique for mobile robots traversing complex off-road scenes. Early works mainly formulated the problem as binary classification, e.g., associating pixels with road or non-road labels. However, off-road robots need scene understanding with fine-grained labels, as the scenes are highly diverse and the varying mechanical capabilities of off-road robots lead to different definitions of regions that are safe to traverse. How to define and annotate fine-grained labels to achieve meaningful scene understanding for off-road traversal remains an open question. This research proposes a contrastive-learning-based method. With a set of human-annotated anchor patches, a feature representation is learned to discriminate regions with different traversability; a method for fine-grained semantic segmentation and mapping is subsequently developed for off-road scene understanding. Experiments are conducted on a dataset of three driving segments representing very diverse off-road scenes. An anchor accuracy of 89.8% is achieved in cross-scene validation by evaluating the matching against human-annotated image patches. Examined with associated 3D LiDAR data, the fine-grained segments of visual images are shown to exhibit different levels of roughness and terrain elevation, demonstrating their semantic meaningfulness. The resulting maps contain both fine-grained labels and confidence values, providing rich information to support a robot traversing complex off-road scenes.
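To make the anchor-matching idea concrete, the following is a minimal sketch of how an image region's embedding could be compared against anchor-patch embeddings via cosine similarity with a softmax over candidates. The function name, the temperature parameter, and the toy embeddings are all hypothetical illustrations; the abstract does not specify the paper's actual network or loss.

```python
import numpy as np

def anchor_match_probs(region, anchors, temperature=0.1):
    """Softmax over cosine similarities between a region embedding and
    a set of anchor-patch embeddings. Hypothetical sketch, not the
    paper's actual method."""
    r = region / np.linalg.norm(region)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    logits = a @ r / temperature          # scaled cosine similarities
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

# Toy example: three anchor embeddings; the first anchor is closest
# to the query region, so it should receive the highest probability.
region = np.array([1.0, 0.0, 0.0])
anchors = np.array([[0.9, 0.1, 0.0],   # similar traversability
                    [0.0, 1.0, 0.0],   # different
                    [0.0, 0.0, 1.0]])  # different
probs = anchor_match_probs(region, anchors)
best = int(probs.argmax())  # index of the best-matching anchor
```

A region would then inherit the fine-grained label of its best-matching anchor, with the softmax value serving as a confidence score, consistent with the label-plus-confidence maps the abstract describes.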