The Bloom filter is a classic, widely used data structure for approximate membership queries. Learned Bloom filters improve memory efficiency by leveraging machine learning, and the partitioned learned Bloom filter (PLBF) is among the most memory-efficient variants. However, PLBF suffers from high construction complexity of $O(N^3k)$, where $N$ and $k$ are hyperparameters. In this paper, we propose three methods, fast PLBF, fast PLBF++, and fast PLBF#, that reduce the construction complexity to $O(N^2k)$, $O(Nk \log N)$, and $O(Nk \log k)$, respectively. Fast PLBF constructs exactly the same data structure as PLBF and therefore retains its memory efficiency. Fast PLBF++ and fast PLBF# may construct different structures, but we prove theoretically that they are equivalent to PLBF under an ideal data distribution, and we further bound the difference in memory efficiency between PLBF and fast PLBF++ for non-ideal distributions. Experiments on real-world datasets demonstrate that fast PLBF, fast PLBF++, and fast PLBF# are up to 233, 761, and 778 times faster to construct than the original PLBF, respectively, and that fast PLBF++ and fast PLBF# achieve nearly identical memory efficiency to PLBF.