In a partitioned Bloom filter the $m$-bit vector is split into $k$ disjoint parts of $m/k$ bits each, one per hash function. Contrary to hardware designs, where they prevail, software implementations mostly adopt standard Bloom filters, considering partitioned filters slightly worse due to their somewhat larger false positive rate (FPR). In this paper, by performing an in-depth analysis, first we show that the FPR advantage of standard Bloom filters is smaller than thought; more importantly, by studying the per-element FPR, we show that standard Bloom filters have weak spots in the domain: elements which will be tested as false positives much more frequently than expected. This is relevant in scenarios where an element is tested against many filters, e.g., in packet forwarding. Moreover, standard Bloom filters are prone to exhibit extremely weak spots if naive double hashing is used, something occurring in several, even mainstream, libraries. Partitioned Bloom filters exhibit a uniform distribution of the FPR over the domain and are robust to the naive use of double hashing, having no weak spots. Finally, by surveying several usages other than testing set membership, we point out the many advantages of having disjoint parts: they can be individually sampled, extracted, added or retired, leading to superior designs for, e.g., SIMD usage, size reduction, test of set disjointness, or duplicate detection in streams. Partitioned Bloom filters are better, and should replace the standard form, both in general purpose libraries and as the base for novel designs.
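The structure described above (an $m$-bit vector split into $k$ disjoint parts of $m/k$ bits, one hash function per part) can be sketched as follows. This is a minimal illustrative implementation, not code from the paper; the class and method names are our own, and double hashing in the Kirsch–Mitzenmacher style (deriving the $k$ indexes from two base hashes) is used here only as one plausible way to obtain $k$ index functions.

```python
import hashlib


class PartitionedBloomFilter:
    """Illustrative sketch of a partitioned Bloom filter:
    m bits split into k disjoint parts of m/k bits, one hash per part."""

    def __init__(self, m: int, k: int):
        assert m % k == 0, "m must be divisible by k so all parts are equal-sized"
        self.k = k
        self.part_size = m // k
        # Each part is an independent bit array, stored here as an int bitmask.
        self.parts = [0] * k

    def _indexes(self, item: bytes):
        # Derive k indexes (one per part) via double hashing from two
        # base hashes taken from a single SHA-256 digest (an assumption
        # for this sketch, not the paper's construction).
        digest = hashlib.sha256(item).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # force odd step
        for i in range(self.k):
            yield (h1 + i * h2) % self.part_size

    def add(self, item: bytes) -> None:
        # Set exactly one bit in each of the k disjoint parts.
        for i, idx in enumerate(self._indexes(item)):
            self.parts[i] |= 1 << idx

    def __contains__(self, item: bytes) -> bool:
        # An item is possibly present only if its bit is set in every part;
        # a zero bit in any single part proves it was never added.
        return all(
            (self.parts[i] >> idx) & 1
            for i, idx in enumerate(self._indexes(item))
        )
```

Because the parts are disjoint, each part can be tested, sampled, or retired independently, which is exactly the property the abstract highlights for SIMD usage and size reduction.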