Structural bias has recently been exploited for aspect sentiment triplet extraction (ASTE) and has led to improved performance. On the other hand, it is recognized that explicitly incorporating structural bias has a negative impact on efficiency, whereas pretrained language models (PLMs) can already capture implicit structures. Thus, a natural question arises: is structural bias still a necessity in the context of PLMs? To answer this question, we propose to address the efficiency issues by using an adapter to integrate structural bias into the PLM and by using a cheap-to-compute relative position structure in place of the syntactic dependency structure. Benchmarking evaluation is conducted on the SemEval datasets. The results show that our proposed structural adapter is beneficial to PLMs and achieves state-of-the-art performance over a range of strong baselines, yet with a light parameter demand and low latency. Furthermore, we raise the concern that the current evaluation default of small-scale data yields under-confident conclusions. Consequently, we release a large-scale dataset for ASTE. The results on the new dataset suggest that the structural adapter remains effective and efficient at large scale. Overall, we conclude that structural bias is still a necessity even in the context of PLMs.
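To make the core idea concrete, below is a minimal sketch of an adapter module that injects relative-position structural bias on top of a frozen PLM layer. This is an illustrative assumption, not the paper's actual architecture: the class name `StructuralAdapter`, the bottleneck size, the clipping distance `max_rel_dist`, and the attention-style mixing are all hypothetical choices, since the abstract specifies only that an adapter integrates a cheap-to-compute relative position structure.

```python
import torch
import torch.nn as nn


class StructuralAdapter(nn.Module):
    """Hypothetical sketch: an adapter that mixes token representations
    using only clipped relative-position bias, so no dependency parser
    is needed. Sizes and wiring are illustrative assumptions."""

    def __init__(self, hidden_size=768, bottleneck=64, max_rel_dist=16):
        super().__init__()
        # Bottleneck down/up projections, as in standard adapters.
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        # One embedding per clipped relative offset in [-max_rel_dist, max_rel_dist].
        self.rel_embed = nn.Embedding(2 * max_rel_dist + 1, bottleneck)
        self.max_rel_dist = max_rel_dist
        self.act = nn.GELU()

    def forward(self, hidden):  # hidden: (batch, seq, hidden_size)
        _, seq, _ = hidden.shape
        pos = torch.arange(seq, device=hidden.device)
        # Clipped relative offsets between every token pair: cheap to
        # compute, in contrast to a syntactic dependency structure.
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_rel_dist,
                                                  self.max_rel_dist)
        bias = self.rel_embed(rel + self.max_rel_dist)  # (seq, seq, bottleneck)
        x = self.act(self.down(hidden))                 # (batch, seq, bottleneck)
        # Attention-like pooling whose scores come only from relative positions.
        scores = torch.einsum('bqd,qkd->bqk', x, bias).softmax(dim=-1)
        ctx = torch.einsum('bqk,bkd->bqd', scores, x)
        # Residual connection keeps the frozen PLM representation intact.
        return hidden + self.up(ctx)


# Usage: apply the adapter to one layer's hidden states.
adapter = StructuralAdapter()
h = torch.randn(2, 10, 768)
out = adapter(h)  # (2, 10, 768)
```

Under these assumptions, the PLM weights stay frozen and only the small adapter is trained, which is one plausible reading of the abstract's claim of a light parameter demand and low latency.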