Despite recent successes in document understanding, long document understanding remains largely under-explored, owing to the computational cost and the difficulty of efficiently absorbing long multimodal input. Most current transformer-based approaches handle only short documents and attend solely over textual information, because full attention is prohibitive in computation and memory. To address these issues, we explore different approaches to 1D and novel 2D position-aware attention with an essentially shortened context. Experimental results show that our proposed models have advantages on this task across various evaluation metrics. Furthermore, our model modifies only the attention mechanism and can therefore be easily adapted to any transformer-based architecture.
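To make the idea of 2D position-aware attention concrete, here is a minimal NumPy sketch, not the paper's exact method: alongside the standard scaled dot-product scores, a learned relative-position bias is added for both the x- and y-coordinates of each token's location on the page. The function name `attention_2d`, the coordinate inputs, and the bucketed bias tables `wx`/`wy` are all illustrative assumptions.

```python
import numpy as np

def attention_2d(q, k, v, xs, ys, wx, wy, max_dist=8):
    """Hypothetical 2D position-aware attention sketch (not the paper's method).

    q, k, v: (n, d) query/key/value matrices.
    xs, ys:  (n,) token coordinates on the page (e.g. bounding-box centers).
    wx, wy:  (2*max_dist+1,) relative-position bias tables, one per axis
             (learned in a real model; random placeholders here).
    """
    n, d = q.shape
    logits = q @ k.T / np.sqrt(d)  # standard scaled dot-product scores
    # Clip relative offsets to [-max_dist, max_dist] and shift to valid indices.
    dx = np.clip(xs[:, None] - xs[None, :], -max_dist, max_dist) + max_dist
    dy = np.clip(ys[:, None] - ys[None, :], -max_dist, max_dist) + max_dist
    logits = logits + wx[dx] + wy[dy]  # inject 2D layout information
    # Softmax over keys, then weight the values.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
n, d = 6, 4
out = attention_2d(
    rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d)),
    rng.integers(0, 20, size=n), rng.integers(0, 20, size=n),
    rng.normal(size=17), rng.normal(size=17),
)
print(out.shape)  # (6, 4)
```

Because the bias only modifies the attention logits, this kind of change slots into any transformer layer without altering the rest of the architecture, which is the adaptability the abstract refers to.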