Visual text recognition is undoubtedly one of the most extensively researched topics in computer vision. Great progress has been made to date, with the latest models starting to focus on the more practical "in-the-wild" setting. However, a salient problem still hinders practical deployment -- prior art mostly struggles with recognising unseen (or rarely seen) character sequences. In this paper, we put forward a novel framework to specifically tackle this "unseen" problem. Our framework is iterative in nature, in that it utilises the character sequence predicted at a previous iteration to augment the main network in improving the next prediction. Key to our success is a unique cross-modal variational autoencoder that acts as a feedback module and is trained in the presence of textual error distribution data. Importantly, this module translates the discrete predicted character space into a continuous affine transformation parameter space, which is used to condition the visual feature map at the next iteration. Experiments on common datasets show competitive performance against the state of the art under the conventional setting. Most importantly, under the new disjoint setup where train and test labels are mutually exclusive, ours offers the best performance, showcasing its capability to generalise to unseen words.
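To make the feedback mechanism concrete, the sketch below shows one plausible way such a cross-modal VAE could be wired up in PyTorch: a discrete predicted character sequence is encoded into a latent code, which is decoded into per-channel affine (scale/shift) parameters used to modulate the visual feature map at the next iteration. All names (`FeedbackVAE`, `condition`), layer choices, and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FeedbackVAE(nn.Module):
    """Minimal sketch (assumed design): maps a discrete character
    sequence to continuous affine conditioning parameters."""

    def __init__(self, vocab_size, embed_dim=64, latent_dim=32, feat_channels=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, 128, batch_first=True)
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        # Decoder emits per-channel scale (gamma) and shift (beta).
        self.decoder = nn.Linear(latent_dim, 2 * feat_channels)

    def forward(self, char_ids):
        # char_ids: (B, T) character indices predicted at the previous iteration
        emb = self.embed(char_ids)
        _, h = self.encoder(emb)          # h: (1, B, 128), final hidden state
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample a latent code differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        gamma, beta = self.decoder(z).chunk(2, dim=-1)
        return gamma, beta, mu, logvar

def condition(feat, gamma, beta):
    # feat: (B, C, H, W); channel-wise affine modulation of the feature map
    return feat * (1 + gamma[..., None, None]) + beta[..., None, None]
```

In a full training loop one would presumably add the usual VAE KL term on `(mu, logvar)` alongside the recognition loss, and iterate: predict characters, feed them through `FeedbackVAE`, condition the features with `condition`, then predict again.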