Contemporary question answering (QA) systems, including transformer-based architectures, suffer from increasing computational and model complexity, which renders them inefficient for real-world applications with limited resources. Furthermore, training or even fine-tuning such models requires a vast amount of labeled data, which is often unavailable for the task at hand. In this manuscript, we conduct a comprehensive analysis of these challenges and introduce suitable countermeasures. We propose a novel knowledge distillation (KD) approach to reduce the parameter and model complexity of a pre-trained BERT system, and we utilize multiple active learning (AL) strategies to drastically reduce annotation effort. In particular, we demonstrate that our model matches the performance of 6-layer TinyBERT and DistilBERT while using only 2% of their total parameters. Finally, by integrating our AL approaches into the BERT framework, we show that state-of-the-art results on the SQuAD dataset can be achieved using only 20% of the training data.