Diminished reality is a technology that aims to remove objects from video images and fill in the missing regions with plausible pixels. Most conventional methods rely on multiple cameras that capture the same scene from different viewpoints so that the removed regions can be restored. In this paper, we propose an RGB-D image inpainting method using a generative adversarial network, which does not require multiple cameras. Recently, RGB image inpainting methods have achieved outstanding results by employing generative adversarial networks. However, RGB inpainting methods aim to restore only the texture of the missing region and therefore do not recover geometric information (i.e., the 3D structure of the scene). We extend conventional image inpainting to RGB-D image inpainting to jointly restore the texture and geometry of missing regions from a pair of RGB and depth images. Inspired by other tasks that use RGB and depth images (e.g., semantic segmentation and object detection), we propose a late fusion approach in which the RGB and depth information complement each other. The experimental results verify the effectiveness of our proposed method.
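The following is a minimal sketch of what a late-fusion RGB-D inpainting generator could look like, written in PyTorch. The layer sizes, channel counts, and the class name `LateFusionInpaintGenerator` are illustrative assumptions, not the architecture used in the paper; the sketch only illustrates the idea of encoding RGB and depth in separate branches and fusing their features before a joint decoder restores both modalities.

```python
# Hypothetical sketch of a late-fusion RGB-D inpainting generator (not the paper's architecture).
import torch
import torch.nn as nn

class LateFusionInpaintGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Separate encoders: masked RGB (3 channels + 1 mask) and masked depth (1 channel + 1 mask).
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Late fusion: concatenate the two feature maps, then decode jointly
        # so texture and geometry are restored together.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1),  # 3 RGB channels + 1 depth channel
        )

    def forward(self, rgb, depth, mask):
        # mask is 1 inside the missing region, 0 elsewhere.
        f_rgb = self.rgb_encoder(torch.cat([rgb * (1 - mask), mask], dim=1))
        f_depth = self.depth_encoder(torch.cat([depth * (1 - mask), mask], dim=1))
        out = self.decoder(torch.cat([f_rgb, f_depth], dim=1))
        return out[:, :3], out[:, 3:]  # inpainted RGB, inpainted depth

# Usage example with random tensors (batch of 2, 128x128 inputs, a square hole).
rgb = torch.rand(2, 3, 128, 128)
depth = torch.rand(2, 1, 128, 128)
mask = torch.zeros(2, 1, 128, 128)
mask[:, :, 32:96, 32:96] = 1.0
fake_rgb, fake_depth = LateFusionInpaintGenerator()(rgb, depth, mask)
```

In an adversarial setup, a discriminator would then judge the completed RGB-D pair, but that part is omitted here for brevity.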