Forks in the Bitcoin network result from the natural competition in the blockchain's Proof-of-Work consensus protocol. Their frequency is a critical indicator of the efficiency of a distributed ledger, as forks can contribute to resource waste and network insecurity. We introduce a model for estimating natural fork rates in a network of heterogeneous miners as a function of their number, the distribution of hash rates, and the block propagation time over the peer-to-peer infrastructure. Despite relatively simplistic assumptions, such as zero propagation delay within mining pools, the model predicts fork rates that are comparable with the empirical stale block rate. Over the past decade, we observe a reduction in the number of mining pools by approximately a factor of 3 and quantify its consequences for the fork rate, whilst showing the emergence of a truncated power-law distribution of hash rates, justified by a rich-get-richer effect constrained by global energy supply limits. We demonstrate, both empirically and with the aid of our quantitative model, that the ratio between the block propagation time and the mining time is a sufficiently accurate estimator of the fork rate, but we also quantify its dependence on the heterogeneity of miner activities. We provide empirical and theoretical evidence that both hash rate concentration and shorter block propagation times reduce fork rates in distributed ledgers. Our work introduces a robust mathematical setting for investigating power concentration and competition on a distributed network and for interpreting discrepancies in fork rates, for example those caused by selfish mining practices and asymmetric propagation times, thus providing an effective tool for designing future and alternative scenarios for existing and new blockchain-based distributed mining systems.
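As a minimal illustration of the estimator mentioned above (a sketch, not the paper's heterogeneous-miner model), assume Poisson block discovery with mean inter-block time $T$ and a single effective block propagation delay $\tau$ common to all miners; the symbols $P_{\mathrm{fork}}$, $\tau$ and $T$ are introduced here for illustration only. The probability that a competing block is found while a freshly mined block is still propagating is then
\[
P_{\mathrm{fork}} \;=\; 1 - e^{-\tau/T} \;\approx\; \frac{\tau}{T}, \qquad \tau \ll T,
\]
so under these simplifying assumptions the ratio of propagation time to mining time is the leading-order fork-rate estimate, whose accuracy, as stated above, depends further on the heterogeneity of miner activities.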