This paper details a planner for visual exploration and search without prior map information. We leverage classical frontier-based planning with both LiDAR and visual sensing, and augment it with a pixel-wise environment perception module that contextually labels points in the surroundings from wide field-of-view 2D LiDAR scans. The goal of the perception module is to distinguish between `map' and `non-map' points in order to provide an informed prior on which to plan next-best viewpoints. The robust, map-free scan classifier used to label pixels in the robot's surroundings is trained from expert data collected with a simple cart platform equipped with a map-based classifier. We propose a novel utility function that accounts for traditional metrics such as information gain and travel cost in addition to the contextual data produced by the classifier. The resulting viewpoints encourage the robot to explore points unlikely to be permanent in the environment, leading the robot to locate objects of interest faster than several existing baseline algorithms. The approach is further validated in real-world experiments with a Spot robot searching for single and multiple objects in two unseen environments. Videos of experiments, implementation details, and open-source code can be found at https://sites.google.com/view/lives-2024/home.
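As an illustration only (the exact formulation is given in the paper, not here), a utility function of this kind might combine the three terms named above as a weighted score over candidate viewpoints, e.g.

$$U(v) \;=\; \alpha\, I(v) \;-\; \beta\, d(v) \;+\; \gamma\, S_{\mathrm{non\text{-}map}}(v),$$

where $I(v)$ is the expected information gain at viewpoint $v$, $d(v)$ its travel cost, $S_{\mathrm{non\text{-}map}}(v)$ the contextual score from the scan classifier (e.g., the fraction of visible points labeled non-map), and $\alpha, \beta, \gamma$ hypothetical weights; all symbols in this sketch are assumptions for illustration.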