This paper proposes a new approach to direct visual servoing (DVS) based on discrete orthogonal moments (DOMs). DVS bypasses the extraction of geometric primitives, matching, and tracking steps of the conventional feature-based visual servoing pipeline. Although DVS enables highly precise positioning, it suffers from a limited convergence domain and poor robustness, owing to the strong nonlinearity of the cost function to be minimized and the redundancy among visual features. To tackle these issues, we propose a generic and augmented framework that uses DOMs as visual features. Taking the Tchebichef, Krawtchouk, and Hahn moments as examples, we not only present strategies for adaptively tuning the parameters and order of these visual features but also derive an analytical formulation of the associated interaction matrix. Simulations demonstrate the robustness and accuracy of our approach, as well as its advantages over the state of the art. Real-world experiments further validate its effectiveness.
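To make the idea of DOM visual features concrete, the sketch below computes discrete Tchebichef moments of an image and stacks them into a feature vector. The abstract does not specify the paper's exact formulation, adaptive parameter tuning, or interaction matrix, so this is only a minimal illustration under standard assumptions: the Tchebichef basis is obtained numerically by QR-orthonormalizing the monomial basis on the pixel grid (equivalent to the classical three-term recurrence up to sign, and adequate for modest orders), and the function names are hypothetical.

```python
import numpy as np

def tchebichef_basis(N, max_order):
    """Orthonormal discrete Tchebichef basis on {0, ..., N-1}.

    Numerical shortcut (assumption, not the paper's derivation): QR-orthonormalize
    the Vandermonde matrix of monomials under unit weight, which spans the same
    polynomials as the standard recurrence up to sign.
    """
    x = np.arange(N, dtype=float)
    V = np.vander(x, max_order + 1, increasing=True)  # columns: 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                            # orthonormal columns
    # Sign convention: positive value at the right end of the grid
    signs = np.sign(Q[-1, :])
    signs[signs == 0] = 1.0
    return Q * signs

def tchebichef_moments(image, max_order):
    """2-D moments T_pq = sum_{x,y} t_p(y) t_q(x) I(y, x) for p + q <= max_order."""
    H, W = image.shape
    Py = tchebichef_basis(H, max_order)   # basis along rows (y)
    Px = tchebichef_basis(W, max_order)   # basis along columns (x)
    T = Py.T @ image @ Px                 # all moments up to max_order at once
    p, q = np.meshgrid(np.arange(max_order + 1), np.arange(max_order + 1),
                       indexing="ij")
    return T[p + q <= max_order]          # flatten into the feature vector s

# Usage: build the feature vector for the current image; a DVS scheme would
# regulate the error s - s* toward zero.
img = np.random.rand(64, 64)
s = tchebichef_moments(img, max_order=6)
print(s.shape)  # (28,) moments of total order 0..6
```

In a DVS loop, such a vector s would replace raw pixel intensities as the controlled feature, typically regulated with the standard control law v = -λ L_s⁺ (s - s*), where L_s is the interaction matrix whose analytical form the paper derives; the Krawtchouk and Hahn cases follow the same pattern with their respective weighted polynomial bases.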