Many concurrent dictionary implementations are designed and optimized for read-mostly workloads with uniformly distributed keys, and often perform poorly on update-heavy workloads. In this work, we first present a concurrent (a,b)-tree, the OCC-ABtree, which outperforms its fastest competitor by up to 2x on uniform update-heavy workloads, and is competitive on other workloads. We then turn our attention to skewed update-heavy workloads (which feature many inserts/deletes on the same key) and introduce the Elim-ABtree, which uses a new optimization called publishing elimination. In publishing elimination, concurrent inserts and deletes to a key are reordered to eliminate them. This reduces the number of writes in the data structure. The Elim-ABtree achieves up to 2.5x the performance of its fastest competitor (including the OCC-ABtree). The OCC-ABtree and Elim-ABtree are linearizable. We also introduce durable linearizable versions (for systems with Intel Optane DCPMM non-volatile main memory) that are nearly as fast.
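To make the publishing-elimination idea concrete, the following toy C++ sketch (not the paper's algorithm; the slot structure, function names, and spin window are ours, for a single key only) shows how a concurrent insert and delete on the same key can pair up through a shared "published" slot, so that when they meet, neither operation has to write to the underlying tree.

    // Toy sketch of the idea behind publishing elimination (illustration only,
    // not the OCC-ABtree/Elim-ABtree code): an insert publishes itself in a
    // per-key slot; a concurrent delete on the same key claims the pending
    // insert, and the pair cancels out without touching the tree.
    #include <atomic>
    #include <thread>
    #include <iostream>

    enum SlotState { EMPTY, PENDING_INSERT, ELIMINATED };

    struct EliminationSlot {
        std::atomic<int> state{EMPTY};
    };

    EliminationSlot slot;   // hypothetical slot for one key

    // Insert: publish the operation, then wait briefly for a partner delete.
    void insert_with_elimination() {
        int expected = EMPTY;
        if (slot.state.compare_exchange_strong(expected, PENDING_INSERT)) {
            for (int i = 0; i < 1000; ++i) {
                if (slot.state.load() == ELIMINATED) {
                    slot.state.store(EMPTY);   // hand the slot back
                    return;                    // insert took effect and was immediately deleted
                }
            }
            // No partner arrived within the window: try to withdraw.
            expected = PENDING_INSERT;
            if (!slot.state.compare_exchange_strong(expected, EMPTY)) {
                slot.state.store(EMPTY);       // a delete eliminated us at the last moment
                return;
            }
            // Withdrawn: a real implementation would now perform the tree insert.
        }
    }

    // Delete: try to eliminate against a pending insert instead of touching the tree.
    void delete_with_elimination() {
        int expected = PENDING_INSERT;
        if (slot.state.compare_exchange_strong(expected, ELIMINATED)) {
            return;  // paired with the pending insert; no tree write needed
        }
        // Otherwise, a real implementation would perform the tree delete.
    }

    int main() {
        std::thread t1(insert_with_elimination);
        std::thread t2(delimit_placeholder_never_used), // (see note below)
        t1.join();
    }

Usage note: in practice the two operations run from different threads, e.g. std::thread t1(insert_with_elimination); std::thread t2(delete_with_elimination); t1.join(); t2.join(); — when the delete wins the race on the slot, both calls return without modifying the tree, which is the write reduction the abstract describes for skewed update-heavy workloads.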