Multimodal Large Language Models (MLLMs) have seen rapid advances in recent years and are now being applied to visual document understanding tasks. They are expected to process a wide range of document images across languages, including Japanese. Understanding documents from images requires models to read what is written in them. Since some Japanese documents are written vertically, support for vertical writing is essential. However, research specifically focused on vertically written Japanese text remains limited. In this study, we evaluate the reading capability of existing MLLMs on vertically written Japanese text. First, we generate a synthetic Japanese OCR dataset by rendering Japanese text into images and use it for both model fine-tuning and evaluation. This dataset includes Japanese text in both horizontal and vertical writing. We also create an evaluation dataset sourced from real-world document images containing vertically written Japanese text. Using these datasets, we demonstrate that existing MLLMs perform worse on vertically written Japanese text than on horizontally written Japanese text. Furthermore, we show that training MLLMs on our synthesized Japanese OCR dataset improves the performance of models that previously could not handle vertical writing. The datasets and code are publicly available at https://github.com/llm-jp/eval_vertical_ja.
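The abstract describes the rendering step only at a high level. The sketch below illustrates one simple way such vertical-writing images could be synthesized with Pillow: stacking glyphs top-to-bottom in a single column. It is a minimal sketch under stated assumptions, not the paper's released pipeline; the font path and function name are illustrative, and real vertical Japanese layout additionally rotates certain glyphs (e.g., punctuation and the long-vowel mark), which Pillow can handle via direction="ttb" when built with libraqm.

```python
# Minimal sketch (assumption: Pillow installed, a Japanese font file available
# locally). Renders text as a single top-to-bottom column, one glyph per row.
# This is NOT the paper's rendering code; it only illustrates the idea.
from PIL import Image, ImageDraw, ImageFont

def render_vertical(text: str,
                    font_path: str = "NotoSansJP-Regular.ttf",  # placeholder path
                    size: int = 32,
                    margin: int = 8) -> Image.Image:
    font = ImageFont.truetype(font_path, size)
    width = size + 2 * margin
    height = size * len(text) + 2 * margin
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    for i, ch in enumerate(text):
        # Stack characters vertically; a fuller implementation would rotate
        # punctuation and use proper vertical metrics.
        draw.text((margin, margin + i * size), ch, font=font, fill="black")
    return img

if __name__ == "__main__":
    render_vertical("縦書きの日本語").save("vertical_sample.png")
```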
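The abstract does not name the reading-accuracy metric; character error rate (CER) is a standard choice for OCR evaluation, so the following is a hedged, self-contained sketch of how transcriptions might be scored against the rendered ground truth. It is an assumption for illustration, not the paper's evaluation code.

```python
# Hedged sketch: CER as Levenshtein distance normalized by reference length.
# The paper's actual metric may differ; see the released code for specifics.
def cer(reference: str, hypothesis: str) -> float:
    """Edit distance between the strings, divided by the reference length."""
    m, n = len(reference), len(hypothesis)
    # dp[j] holds the distance between reference[:i] and hypothesis[:j].
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + cost)    # substitution or match
            prev = cur
    return dp[n] / max(m, 1)

if __name__ == "__main__":
    # e.g., comparing a model transcription against the rendered ground truth:
    print(cer("縦書きの日本語", "縦書きの日本言"))  # one substitution -> ~0.143
```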