Even though term-based methods such as BM25 provide strong ranking baselines, under certain conditions they are outperformed by large pre-trained masked language models (MLMs) such as BERT. To date, the source of this effectiveness remains unclear. Is it the models' ability to truly understand meaning by modeling syntactic aspects? We answer this by manipulating the input order and position information in a way that destroys the natural sequence order of the query and passage, and show that the model still achieves comparable performance. Overall, our results indicate that syntactic aspects do not play a critical role in the effectiveness of re-ranking with BERT. We point instead to other mechanisms, such as query-passage cross-attention and richer embeddings that capture word meaning from aggregated context regardless of word order, as the main contributors to its superior performance.
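As a rough illustration of the kind of input manipulation described above (a minimal sketch, not the authors' code), the snippet below shuffles the words of a passage before scoring the query-passage pair with an off-the-shelf BERT-style cross-encoder re-ranker. The model name and scoring setup are assumptions chosen for illustration only; the position-embedding manipulation is not shown here.

```python
# Minimal sketch: score a query-passage pair with a cross-encoder before and
# after destroying the passage's word order. The specific model checkpoint is
# an assumption for illustration, not the one used in the paper.
import random
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # assumed re-ranker checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def score(query: str, passage: str) -> float:
    """Relevance score of a query-passage pair from the cross-encoder."""
    inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

def shuffle_words(text: str, seed: int = 0) -> str:
    """Destroy the natural word order of a passage by random permutation."""
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

query = "what causes tides"
passage = ("Tides are caused by the gravitational pull of the moon "
           "and the sun on the oceans.")

print("original :", score(query, passage))
print("shuffled :", score(query, shuffle_words(passage)))
```

Comparing the two scores over a full evaluation set (rather than a single pair, as here) is what allows one to measure how much the re-ranker actually relies on word order.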