Given a polygonal workspace $W$, a depth sensor placed at point $p=(x,y)$ inside $W$ and oriented in direction $\theta$ measures the distance $d=h(x,y,\theta)$ between $p$ and the closest point on the boundary of $W$ along a ray emanating from $p$ in direction $\theta$. We study the following problem: For a polygon $W$ with $n$ vertices, possibly with holes, preprocess it such that given a query real value $d> 0$, one can efficiently compute the preimage $h^{-1}(d) \subset W\times \mathbb{S}^1$, namely determine all the possible poses (positions and orientations) of a depth sensor placed in $W$ that would yield the reading $d$, in an output-sensitive fashion. We describe such an output-sensitive data structure, which answers queries in $O(k \log n)$ time, where $k$ is the number of vertices and maximal arcs of low-degree algebraic curves constituting the answer. We also obtain analogous results for the more useful case, where the sensor performs two antipodal depth measurements from the same point in $W$, which narrows down the set of possible poses. We then describe simpler data structures for the same two problems, where we employ a decomposition of $W\times \mathbb{S}^1$, and where the query time is output-sensitive relative to this decomposition. Our software implementation for these latter structures is open source and publicly available. Although robot localization is often carried out by exploring the full visibility polygon of a sensor placed at a point of the environment, the approach that we propose here opens the door to using only a few depth measurements, which is advantageous as it allows the use of inexpensive sensors and could also lead to savings in storage and communication costs.
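To make the sensor model concrete, the following minimal sketch computes the reading $h(x,y,\theta)$ for a polygon given as a vertex list, by intersecting the ray from $p$ in direction $\theta$ with each boundary edge and taking the nearest hit. This illustrates only the forward measurement, not the paper's preprocessing or preimage data structures; the function name `depth` and the simple polygon representation (no holes) are illustrative assumptions.

```python
import math

def depth(polygon, p, theta, eps=1e-12):
    """Distance h(x, y, theta) from point p to the nearest boundary point
    of `polygon` along the ray in direction theta.

    `polygon` is a list of (x, y) vertices in order (a simple polygon,
    no holes, for this sketch). Returns None if the ray hits no edge
    (e.g. p lies outside the polygon)."""
    px, py = p
    dx, dy = math.cos(theta), math.sin(theta)
    best = None
    m = len(polygon)
    for i in range(m):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % m]
        ex, ey = bx - ax, by - ay          # edge vector e = b - a
        denom = dx * ey - dy * ex          # cross(d, e)
        if abs(denom) < eps:               # ray parallel to edge: skip
            continue
        wx, wy = ax - px, ay - py          # w = a - p
        t = (wx * ey - wy * ex) / denom    # ray parameter (distance along ray)
        s = (wx * dy - wy * dx) / denom    # edge parameter in [0, 1]
        if t >= eps and -eps <= s <= 1 + eps:
            best = t if best is None else min(best, t)
    return best

# Example: from the center of the unit square, looking right (theta = 0),
# the sensor reads 0.5.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(depth(square, (0.5, 0.5), 0.0))  # 0.5
```

A brute-force scan over all $n$ edges takes $O(n)$ time per measurement; the data structures described in the abstract address the inverse question (recovering all poses consistent with a reading $d$) with output-sensitive query time.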