Document layout analysis is essential for downstream tasks such as information retrieval, extraction, OCR, and digitization. However, existing large-scale datasets like PubLayNet and DocBank lack fine-grained region labels and multilingual diversity, making them insufficient for representing complex document layouts. In contrast, human-annotated datasets such as M6Doc and D4LA offer richer labels and greater domain diversity, but are too small to train robust models and lack adequate multilingual coverage. This gap is especially pronounced for Indic documents, which encompass diverse scripts yet remain underrepresented in current datasets, further limiting progress in this space. To address these shortcomings, we introduce IndicDLP, a large-scale foundational document layout dataset spanning 11 representative Indic languages alongside English and 12 common document domains. Additionally, we curate UED-mini, a dataset derived from DocLayNet and M6Doc, to enhance pretraining and provide a solid foundation for Indic layout models. Our experiments demonstrate that fine-tuning existing English models on IndicDLP significantly boosts performance, validating its effectiveness. Moreover, models trained on IndicDLP generalize well beyond Indic layouts, making it a valuable resource for document digitization. This work bridges gaps in scale, diversity, and annotation granularity, driving inclusive and efficient document understanding.