CRDT[24] sets as implemented in Riak[6] perform poorly for writes, both as cardinality grows and once a set exceeds 500KB[25]. Riak users wish to create high-cardinality CRDT sets and expect better than O(n) performance for individual insert and remove operations. By decomposing a CRDT set on disk and employing delta-replication[2], we achieve far better performance than delta replication alone: write cost is relative to the size of the causal metadata, not the cardinality of the set, and we can support sets hundreds of times the size of Riak's sets while providing the same level of consistency. There is a trade-off in read performance, but we expect it to be mitigated by enabling queries on sets.
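To make the delta-replication claim concrete, here is a minimal sketch (not Riak's implementation, and with simplified causal-context handling compared to a real ORSWOT) of an add-wins observed-remove set in which each insert produces a small delta: its size depends on the causal metadata generated (a single dot), not on the cardinality of the set.

```python
class DeltaORSet:
    """Sketch of a delta-state add-wins observed-remove set (hypothetical names)."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counter = 0      # per-replica event counter for minting dots
        self.entries = {}     # dot -> element, the live entries
        self.seen = set()     # dots already observed (simplified causal context)

    def add(self, element):
        """Insert locally; return a delta whose size is one dot, not O(n)."""
        self.counter += 1
        dot = (self.replica_id, self.counter)
        self.entries[dot] = element
        self.seen.add(dot)
        return {"entries": {dot: element}, "removed": set()}

    def remove(self, element):
        """Remove all observed dots for the element; the delta lists those dots."""
        dots = {d for d, e in self.entries.items() if e == element}
        for d in dots:
            del self.entries[d]
        return {"entries": {}, "removed": dots}

    def merge(self, delta):
        """Apply a delta received from a peer."""
        for dot, element in delta["entries"].items():
            if dot not in self.seen:     # ignore adds we have already seen
                self.entries[dot] = element
                self.seen.add(dot)
        for dot in delta["removed"]:     # removes win over the adds they observed
            self.entries.pop(dot, None)
            self.seen.add(dot)

    def value(self):
        return set(self.entries.values())
```

Shipping only deltas is what decouples write cost from cardinality; a production design would additionally compress the `seen` set into a version vector and store entries decomposed on disk.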