Guided depth super-resolution (GDSR) is an active topic in multi-modal image processing. Its goal is to use high-resolution (HR) RGB images, which provide extra information on edges and object contours, to upsample low-resolution depth maps to HR. To address the problems of over-transferred RGB texture, the difficulty of cross-modal feature extraction, and the unclear working mechanisms of modules in existing methods, we propose an advanced Discrete Cosine Transform Network (DCTNet), which is composed of four components. First, the paired RGB/depth images are fed into a semi-coupled feature extraction module, where shared convolution kernels extract cross-modal common features and private kernels extract the features unique to each modality. The RGB features are then passed through an edge attention mechanism that highlights the edges useful for upsampling. Subsequently, in the Discrete Cosine Transform (DCT) module, the DCT is employed to solve an optimization problem formulated for image-domain GDSR; the solution is then extended to upsampling of multi-channel RGB/depth features, which improves the interpretability of DCTNet and is more flexible and effective than conventional methods. Finally, the reconstruction module outputs the depth prediction. Extensive qualitative and quantitative experiments demonstrate the effectiveness of our method, which generates accurate HR depth maps and surpasses state-of-the-art methods. Ablation studies further verify the rationality of each module.
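The DCT module described above relies on the fact that the orthonormal 2D DCT is an invertible transform, so an optimization problem whose operator it diagonalizes can be solved coefficient-wise in the frequency domain and mapped back to the image domain. A minimal sketch of this forward/inverse machinery, using SciPy's `dctn`/`idctn` on a toy feature map (not the paper's actual implementation, and the array names are illustrative only):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct2(x):
    """Orthonormal 2D DCT (type-II), applied along both spatial axes."""
    return dctn(x, type=2, norm='ortho')

def idct2(coeffs):
    """Inverse of dct2 (orthonormal type-II DCT)."""
    return idctn(coeffs, type=2, norm='ortho')

# Toy single-channel depth feature map standing in for one feature channel.
rng = np.random.default_rng(0)
depth_feat = rng.standard_normal((16, 16))

# Forward transform, (hypothetically) manipulate coefficients, transform back.
coeffs = dct2(depth_feat)
recon = idct2(coeffs)

# With norm='ortho' the round trip recovers the input exactly
# (up to floating-point error), which is what makes a closed-form
# frequency-domain solve possible.
print(np.allclose(recon, depth_feat))
```

In the multi-channel setting described in the abstract, the same transform would simply be applied independently to each RGB/depth feature channel before the coefficient-wise solve.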