In this paper, we introduce a fully convolutional network for the document layout analysis task. While state-of-the-art methods use models pre-trained on natural scene images, our method, Doc-UFCN, relies on a U-shaped model trained from scratch to detect objects in historical documents. We treat the line segmentation task, and more generally the layout analysis problem, as a pixel-wise classification task; our model therefore outputs a pixel labeling of the input images. We show that Doc-UFCN outperforms state-of-the-art methods on various datasets, and we demonstrate that components pre-trained on natural scene images are not required to reach good results. In addition, we show that pre-training on multiple document datasets can improve performance. We evaluate the models using various metrics to provide a fair and complete comparison between the methods.
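To make the pixel-wise classification framing concrete, the following is a minimal sketch of a U-shaped fully convolutional network, assuming PyTorch. The layer widths, depth, and class count here are illustrative placeholders, not the actual Doc-UFCN architecture; the point is only that such a model maps an input image to one class score per pixel.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Illustrative U-shaped FCN: encoder, bottleneck, decoder with a skip connection."""

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.bottom = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(16, 8, 2, stride=2)
        # Skip connection: the decoder sees encoder features at the same scale.
        self.fuse = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(8, n_classes, 1)  # one score per class per pixel

    def forward(self, x):
        e = self.down(x)                          # (N, 8, H, W)
        b = self.bottom(self.pool(e))             # (N, 16, H/2, W/2)
        d = self.up(b)                            # (N, 8, H, W)
        d = self.fuse(torch.cat([e, d], dim=1))   # concatenate skip features
        return self.head(d)                       # (N, n_classes, H, W)

# Output spatial size matches the input: a dense labeling, not a single label.
logits = TinyUNet()(torch.zeros(1, 3, 64, 64))
```

Taking the argmax of `logits` over the channel dimension yields the predicted class map, which is how a segmentation model of this kind produces a pixel labeling of the page.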