Our work experimentally assesses the benefits of model ensembling within the context of neural methods for passage reranking. Starting from relatively standard neural models, we use a previously proposed technique named Fast Geometric Ensembling to generate multiple model instances from particular training schedules, then focusing our attention on different types of approaches for combining the results from the multiple model instances (e.g., averaging the ranking scores, using fusion methods from the IR literature, or using supervised learning-to-rank). Tests with the MS-MARCO dataset show that model ensembling can indeed benefit the ranking quality, particularly with supervised learning-to-rank, although also with unsupervised rank aggregation.
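As a minimal sketch of the unsupervised combination strategies mentioned above, the following assumes each model instance produces either a ranked list of passage identifiers or a score per passage; the function names and data shapes are illustrative, not the exact implementation used in the experiments. Reciprocal rank fusion is one standard fusion method from the IR literature, shown alongside simple score averaging:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists from multiple model instances via RRF.

    Each list in `rankings` orders passage ids from best to worst;
    k=60 is the constant commonly used in the IR literature.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def average_scores(score_lists):
    """Average raw ranking scores across model instances.

    Each element of `score_lists` maps passage id -> score from one
    model instance; passages missing from an instance are skipped.
    """
    combined = defaultdict(list)
    for scores in score_lists:
        for doc_id, s in scores.items():
            combined[doc_id].append(s)
    return sorted(combined,
                  key=lambda d: sum(combined[d]) / len(combined[d]),
                  reverse=True)
```

For example, fusing the lists `["a", "b", "c"]`, `["b", "a", "c"]`, and `["a", "c", "b"]` ranks `a` first, since it is placed highly by all three instances.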