This paper proposes a new approach to direct visual servoing (DVS) based on discrete orthogonal moments (DOM). In DVS, the extraction of geometric primitives and the matching and tracking steps of the conventional feature-based visual servoing pipeline are bypassed. Although DVS enables highly precise positioning, it suffers from a small convergence domain and poor robustness, owing to the high non-linearity of the cost function to be minimized and the redundancy among visual features. To tackle these issues, we propose a generic and augmented framework that takes DOM as visual features. Taking Tchebichef, Krawtchouk and Hahn moments as examples, we not only present strategies for adaptively adjusting the parameters and orders of the visual features, but also derive the analytical form of the associated interaction matrix. Simulations demonstrate the robustness and accuracy of our method, as well as its advantages over the state of the art. Real-world experiments have also been performed to validate the effectiveness of our approach.
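As background, a minimal sketch of the quantities the abstract refers to, assuming the standard definition of discrete Tchebichef moments and the classical visual-servoing control law rather than the paper's exact notation: for an $N \times N$ image $I(x,y)$, the discrete Tchebichef moment of order $(m,n)$ is
$$T_{mn} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \tilde{t}_m(x)\, \tilde{t}_n(y)\, I(x,y),$$
where $\tilde{t}_p$ denotes the orthonormal Tchebichef polynomial of degree $p$. Stacking such moments into a feature vector $s$, the classical velocity controller
$$v = -\lambda\, \widehat{L}_s^{+}\, (s - s^{*})$$
drives $s$ toward its desired value $s^{*}$ through the pseudo-inverse of the estimated interaction matrix $\widehat{L}_s$; the paper's contribution includes the analytical form of $L_s$ for DOM features.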