Recent research has introduced fast, compact, and efficient convolutional neural networks (CNNs) for offline handwritten Chinese character recognition (HCCR). However, many of these works do not address network interpretability. We propose a new deep CNN architecture with high recognition performance that is capable of learning deep features for visualization. A distinctive characteristic of our model is its bottleneck layers, which enable us to retain its expressiveness while reducing the number of multiply-accumulate operations and the required storage. We also introduce global weighted output average pooling (GWOAP), a modification of global weighted average pooling (GWAP). This paper demonstrates how both layers allow us to compute class activation maps (CAMs) that indicate the input character image regions most relevant to the class our CNN predicts. Evaluated on the ICDAR-2013 offline HCCR competition dataset, our model achieves a 0.83% relative error reduction while having 49% fewer parameters and the same computational cost as the current state-of-the-art single-network method trained only on handwritten data. Our solution outperforms even recent residual learning approaches.
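As an illustration of the pooling idea, the following PyTorch sketch implements a learnable weighted global pooling layer in the spirit of GWAP, together with the standard CAM projection onto classifier weights. The module name, the softmax normalization of the weight map, and the classifier shapes are assumptions made for the example; the abstract does not spell out the exact GWOAP formulation.

```python
import torch
import torch.nn as nn


class GlobalWeightedAvgPool2d(nn.Module):
    """Sketch of GWAP-style pooling: a trainable spatial weight map replaces
    the uniform weights of plain global average pooling. (Illustrative only;
    the paper's exact GWOAP variant may differ.)"""

    def __init__(self, height: int, width: int):
        super().__init__()
        # One learnable weight per spatial location, shared across channels.
        self.weight = nn.Parameter(torch.ones(1, 1, height, width))

    def spatial_weights(self) -> torch.Tensor:
        # Softmax normalization is our assumption; it keeps the weights
        # positive and summing to 1, like an ordinary average.
        w = torch.softmax(self.weight.flatten(2), dim=-1)
        return w.view_as(self.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, H, W)
        return (x * self.spatial_weights()).sum(dim=(2, 3))  # -> (N, C)


def class_activation_maps(feats: torch.Tensor,
                          fc_weight: torch.Tensor) -> torch.Tensor:
    """Standard CAM recipe (assumed here): project the final feature maps onto
    the classifier weights before spatial pooling.
    feats: (N, C, H, W), fc_weight: (num_classes, C) -> (N, num_classes, H, W)
    """
    return torch.einsum('nchw,kc->nkhw', feats, fc_weight)


if __name__ == "__main__":
    feats = torch.randn(2, 448, 7, 7)        # hypothetical final feature maps
    pool = GlobalWeightedAvgPool2d(7, 7)
    pooled = pool(feats)                     # (2, 448)
    fc = nn.Linear(448, 3755, bias=False)    # 3,755 classes in ICDAR-2013 HCCR
    cams = class_activation_maps(feats, fc.weight)
    print(pooled.shape, cams.shape)          # (2, 448) and (2, 3755, 7, 7)
```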