Textual information in a captured scene plays an important role in scene interpretation and decision making. Although existing methods can successfully detect and interpret complex text regions in a scene, to the best of our knowledge, there is no significant prior work that aims to modify the textual information in an image. The ability to edit text directly on images has several advantages, including error correction, text restoration, and image reusability. In this paper, we propose a method to modify text in an image at the character level. We approach the problem in two stages. First, the unobserved character (target) is generated from the observed character (source) that is being modified. We propose two different neural network architectures: (a) FANnet, to achieve structural consistency with the source font, and (b) Colornet, to preserve the source color. Next, we replace the source character with the generated character, maintaining both geometric and visual consistency with neighboring characters. Our method serves as a unified platform for modifying text in images. We demonstrate the effectiveness of our method on the COCO-Text and ICDAR datasets, both qualitatively and quantitatively.
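To make the two-stage pipeline concrete, the sketch below shows one plausible way the generation stage could be wired up. It is a minimal illustration only: the abstract does not specify layer counts, crop sizes, or fusion details, so the 64x64 crops, the channel widths, and the one-hot fusion used here are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class FANnet(nn.Module):
    """Illustrative font-adaptive generator: given a grayscale crop of the
    source character and a one-hot code of the desired target character,
    it emits the target glyph rendered in the source font.
    Layer sizes are assumptions for the sake of a runnable example."""
    def __init__(self, num_classes=26):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 512), nn.ReLU(),
        )
        self.label_fc = nn.Linear(num_classes, 512)
        self.decoder = nn.Sequential(
            nn.Linear(1024, 16 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (16, 8, 8)),
            nn.Upsample(scale_factor=2),           # 8 -> 16
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),           # 16 -> 32
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),           # 32 -> 64
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, src_gray, target_onehot):
        # Fuse the source-font encoding with the target character label.
        z = torch.cat(
            [self.encoder(src_gray), torch.relu(self.label_fc(target_onehot))],
            dim=1,
        )
        return self.decoder(z)  # grayscale target glyph in the source font

class Colornet(nn.Module):
    """Illustrative color-transfer network: paints the generated grayscale
    glyph with the color of the source character (architecture is a sketch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, src_color, target_gray):
        # Condition on the colored source crop and the generated glyph.
        return self.net(torch.cat([src_color, target_gray], dim=1))

# Example usage on dummy tensors (64x64 character crops assumed).
src_gray = torch.rand(1, 1, 64, 64)              # binarized source character
src_color = torch.rand(1, 3, 64, 64)             # color crop of the source character
target = torch.zeros(1, 26); target[0, 4] = 1.0  # one-hot code for 'E'

glyph = FANnet()(src_gray, target)               # target character, source font
colored = Colornet()(src_color, glyph)           # color-consistent target character
```

The replacement stage described in the abstract (inserting the generated character back into the word while keeping geometric and visual consistency with its neighbors) is a compositing step on the original image and is not shown here.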