Transformer-based models have consistently produced substantial performance gains over shallow models across a variety of NLP tasks. However, deep models are orders of magnitude more computationally expensive than shallow models, especially on tasks with long input sequences, such as document-level event detection. In this work, we attempt to bridge the performance gap between shallow and deep models on document-level event detection by using abstractive text summarization as an augmentation method. We augment the DocEE dataset by generating abstractive summaries of examples from low-resource classes. For classification, we use a linear SVM with TF-IDF representations and RoBERTa-base. We use BART for zero-shot abstractive summarization, which makes our augmentation setup less resource-intensive than supervised fine-tuning. We experiment with four decoding methods for text generation: beam search, top-k sampling, top-p sampling, and contrastive search. Furthermore, we investigate the impact of using document titles as additional input for classification. Our results show that using the document title yields absolute improvements in macro F1-score of 2.04% for the linear SVM and 3.19% for RoBERTa. Augmentation via summarization further improves the performance of the linear SVM by about 0.5%, varying slightly across decoding methods. Overall, however, the gains from our augmentation setup are insufficient for the linear SVM to close the gap with RoBERTa.
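To make the augmentation step concrete, the sketch below shows how zero-shot abstractive summarization with BART could be run under each of the four decoding methods named above, using Hugging Face Transformers. The checkpoint (facebook/bart-large-cnn) and all generation hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# Hedged sketch: zero-shot BART summarization with four decoding methods.
# Checkpoint and hyperparameters are assumptions for illustration only.
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "facebook/bart-large-cnn"  # assumed summarization checkpoint
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

def summarize(document: str, **generate_kwargs) -> str:
    """Generate one abstractive summary of `document`."""
    inputs = tokenizer(document, truncation=True, max_length=1024,
                       return_tensors="pt")
    summary_ids = model.generate(inputs["input_ids"],
                                 max_length=142, min_length=56,
                                 **generate_kwargs)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# One configuration per decoding method from the abstract.
decoding_configs = {
    "beam_search":        dict(num_beams=4),
    "top_k_sampling":     dict(do_sample=True, top_k=50),
    "top_p_sampling":     dict(do_sample=True, top_p=0.95),
    "contrastive_search": dict(penalty_alpha=0.6, top_k=4),
}

document = "..."  # a long document from a low-resource DocEE class
for name, cfg in decoding_configs.items():
    print(name, "->", summarize(document, **cfg))
```

In such a setup, each generated summary would be added to the training set as a new example carrying the label of its source document, which is how summarization acts as class-targeted augmentation.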