Recently, neural-network-based approaches to knowledge-intensive NLP tasks, such as question answering, have started to rely heavily on the combination of neural retrievers and readers. Retrieval is typically performed over a large textual knowledge base (KB), which requires significant memory and compute resources, especially when scaled up. On HotpotQA we systematically investigate reducing the size of the KB index by means of dimensionality reduction (sparse random projections, PCA, autoencoders) and numerical precision reduction. Our results show that PCA is an easy solution that requires very little data and is only slightly worse than autoencoders, which are less stable. All methods are sensitive to pre- and post-processing, and data should always be centered and normalized both before and after dimension reduction. Finally, we show that PCA can be combined with quantization to 1 bit per dimension. Overall we achieve (1) 100$\times$ compression with 75%, and (2) 24$\times$ compression with 92% of the original retrieval performance.
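The compression pipeline the abstract describes can be sketched in a few lines of NumPy: center and L2-normalize the embeddings, project onto the top principal components, re-normalize, and finally keep only the sign of each dimension. This is a minimal illustrative sketch, not the paper's implementation; the embedding dimension (768), corpus size, and target dimension k = 128 are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for a corpus of dense retriever embeddings (shapes are illustrative).
X = rng.normal(size=(1000, 768)).astype(np.float32)

def l2_normalize(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

# Center and normalize BEFORE reduction (the abstract reports this matters).
mu = X.mean(axis=0)
Xc = l2_normalize(X - mu)

# PCA via SVD: project onto the top-k principal directions.
k = 128
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:k].T                  # (768, k) projection matrix
Z = l2_normalize(Xc @ W)      # reduced embeddings, normalized again AFTER reduction

# 1-bit-per-dimension quantization: keep only the sign, then pack 8 bits per byte.
bits = Z > 0                  # (1000, 128) boolean
packed = np.packbits(bits, axis=1)  # (1000, 16) uint8 -> 16 bytes per vector

print(Z.shape, packed.shape)
```

With 128 dimensions at 1 bit each, every vector fits in 16 bytes, versus 3072 bytes for the original 768-dimensional float32 embedding; retrieval over the binary codes can then use Hamming distance as a proxy for cosine similarity.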