Recent research has introduced fast, compact, and efficient convolutional neural networks (CNNs) for offline handwritten Chinese character recognition (HCCR). However, many of these works have not addressed network interpretability. We propose a new deep CNN architecture with high recognition performance that is capable of learning deep features for visualization. A distinctive characteristic of our model is its bottleneck layers, which enable it to retain its expressiveness while reducing the number of multiply-accumulate operations and the required storage. We also introduce global weighted output average pooling (GWOAP), a modification of global weighted average pooling (GWAP). This paper demonstrates how these pooling layers allow us to compute class activation maps (CAMs) that indicate the input character image regions most relevant to our CNN's identification of a given class. Evaluated on the ICDAR-2013 offline HCCR competition dataset, our model achieves a 0.83% relative error reduction while having 49% fewer parameters and the same computational cost as the current state-of-the-art single-network method trained only on handwritten data. Our solution even outperforms recent residual learning approaches.
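As a rough illustration of how a learned spatial weight map enters both the pooling step and the CAM computation, below is a minimal NumPy sketch. The per-position weight map `w`, the sum normalization, and all shapes are illustrative assumptions, not the paper's exact GWAP/GWOAP formulation.

```python
# Minimal sketch: weighted global pooling over conv feature maps and the
# corresponding class activation map (CAM). Shapes and normalization are
# assumptions for illustration only.
import numpy as np

def gwap(features, w):
    """features: (C, H, W) conv feature maps; w: (H, W) learned spatial weights.
    Returns a (C,) pooled feature vector (weighted average per channel)."""
    w_norm = w / (w.sum() + 1e-8)                 # normalize spatial weights
    return (features * w_norm).sum(axis=(1, 2))   # -> (C,)

def cam(features, w, fc_weights, class_idx):
    """CAM for one class: combine feature maps with that class's FC weights,
    then modulate by the same normalized spatial pooling weights."""
    w_norm = w / (w.sum() + 1e-8)
    heatmap = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    return heatmap * w_norm

# Toy usage with random values standing in for a trained network.
C, H, W, n_classes = 8, 7, 7, 10
feats = np.random.rand(C, H, W).astype(np.float32)
w = np.random.rand(H, W).astype(np.float32)       # learned pooling weights
fc = np.random.rand(n_classes, C).astype(np.float32)  # classifier weights
logits = fc @ gwap(feats, w)
heatmap = cam(feats, w, fc, int(logits.argmax()))
```

With plain global average pooling, the CAM for class c is simply the FC-weighted sum of feature maps; the point of the weighted variants sketched here is that the learned spatial weights also reweight the resulting map, emphasizing the image regions the pooling layer actually attends to.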