Existing learned indexes (e.g., RMI, ALEX, PGM) optimize the internal regressor of each node, but not the overall structure, such as the index height and the size of each layer. In this paper, we share our recent finding that we can achieve significantly faster lookups by optimizing the structure as well as the internal regressors. Specifically, our approach (called AirIndex) expresses the end-to-end lookup time as a novel objective function and searches for optimal design decisions using a purpose-built optimizer. In experiments against state-of-the-art methods, AirIndex achieves 3.3x-7.7x faster lookups for data stored on a local SSD, and 1.4x-3.0x faster lookups for data on Azure Cloud Storage.
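For intuition only, the sketch below models end-to-end lookup time as a sum of per-layer storage I/O costs and brute-forces over index height and per-layer fanouts. It is a minimal illustration of the idea of optimizing structure against a lookup-time objective; the cost constants (LATENCY_S, BANDWIDTH_BPS, ENTRY_SIZE_B), the range-narrowing model, and the exhaustive search are assumptions of this sketch, not AirIndex's actual objective function or optimizer.

```python
# Toy end-to-end lookup-time model for a multi-layer index over sorted keys.
# All constants are illustrative assumptions, not values from the paper.
import itertools

LATENCY_S = 1e-4        # assumed per-request storage latency (e.g., local SSD)
BANDWIDTH_BPS = 500e6   # assumed storage bandwidth in bytes/second
ENTRY_SIZE_B = 16       # assumed bytes per index entry

def layer_io_time(num_entries_fetched: int) -> float:
    """Time to fetch one node's worth of entries from storage."""
    return LATENCY_S + (num_entries_fetched * ENTRY_SIZE_B) / BANDWIDTH_BPS

def lookup_time(num_keys: int, fanouts: list) -> float:
    """End-to-end lookup time for a hierarchy whose i-th layer narrows the
    search range by fanouts[i] (one I/O per layer), plus a final fetch of
    the remaining range from the data layer."""
    total, remaining = 0.0, num_keys
    for f in fanouts:
        total += layer_io_time(f)           # read one node of ~f entries
        remaining = max(1, remaining // f)  # range left after this layer
    total += layer_io_time(remaining)       # final read from the data layer
    return total

def best_structure(num_keys: int, max_layers: int = 4,
                   candidate_fanouts=(64, 256, 1024, 4096)):
    """Exhaustive search over layer count and per-layer fanout; a stand-in
    for a purpose-built optimizer, chosen here purely for simplicity."""
    best_time, best_fanouts = float("inf"), []
    for depth in range(1, max_layers + 1):
        for fanouts in itertools.product(candidate_fanouts, repeat=depth):
            t = lookup_time(num_keys, list(fanouts))
            if t < best_time:
                best_time, best_fanouts = t, list(fanouts)
    return best_time, best_fanouts

if __name__ == "__main__":
    t, fanouts = best_structure(num_keys=100_000_000)
    print(f"estimated lookup time {t * 1e3:.2f} ms with fanouts {fanouts}")
```

Under this toy model, changing the latency and bandwidth constants (e.g., from SSD-like to cloud-storage-like values) shifts the optimal height and fanouts, which is the kind of storage-aware structural trade-off the abstract refers to.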