This paper presents a direct 3D visual servoing scheme for the automatic alignment of point clouds (or, equivalently, objects) using visual information in the spectral domain. Specifically, we propose an alignment method for 3D models/point clouds that estimates the global transformation between a reference point cloud and a target point cloud using harmonic-domain data analysis. A 3D discrete Fourier transform (DFT) in $\mathbb{R}^3$ is used for translation estimation, and real spherical harmonics are used for rotation estimation in $SO(3)$. This approach allows us to derive a decoupled visual servo controller with six degrees of freedom. We then show how this approach can serve as a controller for a robotic arm performing a positioning task. Unlike existing 3D visual servoing methods, ours works well with partial point clouds and with large transformations between the initial and desired poses. Additionally, using spectral rather than spatial data for transformation estimation makes our method robust to sensor-induced noise and partial occlusions. The method has been successfully validated experimentally on point clouds acquired with a depth camera mounted on a robotic arm.
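To illustrate the translation-estimation step, the following is a minimal sketch of 3D phase correlation with the DFT, assuming the two point clouds have first been voxelized into dense occupancy grids of equal size (the paper's exact pipeline, grid resolution, and interpolation scheme are not specified here, so the function name and parameters below are illustrative only):

```python
import numpy as np

def estimate_translation(grid_ref, grid_tgt):
    """Estimate the integer voxel shift between two 3D grids by phase
    correlation: the normalized cross-power spectrum of the two volumes
    carries a pure phase ramp encoding the translation, and its inverse
    DFT peaks at that shift."""
    F_ref = np.fft.fftn(grid_ref)
    F_tgt = np.fft.fftn(grid_tgt)
    cross_power = F_ref * np.conj(F_tgt)
    # Keep only the phase; guard against division by zero.
    cross_power /= np.maximum(np.abs(cross_power), 1e-12)
    corr = np.fft.ifftn(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past N/2 wrap around to negative shifts.
    return tuple(p - s if p > s // 2 else p
                 for p, s in zip(peak, corr.shape))

# Example: shift a random volume by (2, -3, 1) voxels and recover it.
rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))
shifted = np.roll(vol, shift=(2, -3, 1), axis=(0, 1, 2))
print(estimate_translation(shifted, vol))  # -> (2, -3, 1)
```

Working on the normalized spectrum rather than on raw spatial correlation is what gives the spectral approach its robustness: the peak location depends only on phase, so uniform intensity changes and moderate noise leave it largely unaffected.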