ASU Blockchain Research

Dash Partnership: Block Propagation Applied to Nakamoto Networks

Jun 22, 2018

Satoshi Nakamoto created the world’s first Nakamoto network, Bitcoin. Since that fateful day on January 3rd, 2009, the question “How big can this actually scale?” has been asked by supporters and speculators alike. The Arizona State University Blockchain Research Lab was funded by the Dash Treasury Fund almost a year ago to answer that same question. Dragan Boscovic, Nakul Chawla, and Darren Tapp just published their white paper, “Block Propagation Applied to Nakamoto Networks,” to answer two questions: How can the Dash network scale to support mass adoption? And what is the practical limit to Dash scaling?

The team focused on simulating the Dash network with three different block propagation protocols: traditional full block propagation, compact block propagation, and extreme thin (xthin) block propagation.

Simulation Details

  • Assumed eight major mining pools, all of which were connected to one another.
  • The network consisted of six thousand nodes, each assumed to have 2GB of RAM (though the team did not simulate hardware specifications directly).
  • To ensure statistical confidence, each trial was run long enough to generate 700 blocks.
  • The orphan rate was calculated by dividing the number of orphan blocks created by the total number of blocks.
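The orphan-rate metric is a simple ratio of orphaned blocks to all blocks produced in a trial. A minimal sketch (the helper name and the example counts are illustrative, not from the paper):

```python
def orphan_rate(orphan_blocks: int, total_blocks: int) -> float:
    """Fraction of produced blocks that ended up orphaned."""
    if total_blocks == 0:
        raise ValueError("no blocks produced")
    return orphan_blocks / total_blocks

# e.g. a 700-block trial in which 17 blocks were orphaned:
print(f"{orphan_rate(17, 700):.2%}")  # → 2.43%
```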

The results of these simulations for the three types of block propagation protocols are below:

Traditional Block Propagation Results

| Blocksize | Orphan Rate | Median Block Propagation Time | Longest Orphan Chain |
|-----------|-------------|-------------------------------|----------------------|
| 100 kB    | 0.36%       | 2.02 seconds                  | 1                    |
| 200 kB    | 0.19%       | 2.95 seconds                  | 1                    |
| 750 kB    | 2.52%       | 8.51 seconds                  | 1                    |
| 1 MB      | 2.27%       | 11.01 seconds                 | 1                    |
| 1.5 MB    | 5.68%       | 17.80 seconds                 | 2                    |
| 2 MB      | 7.53%       | 18.92 seconds                 | 2                    |
| 4 MB      | 16.60%      | 70.20 seconds                 | 3                    |
| 10 MB     | 91.40%      | 18,879.4 seconds              | 47                   |

Compact and Xthin Block Propagation Results

 

| Blocksize | Compact Orphan Rate | Xthin Orphan Rate |
|-----------|---------------------|-------------------|
| 750 kB    | 0%                  | 0%                |
| 1 MB      | 0%                  | 0%                |
| 1.5 MB    | 0%                  | 0%                |
| 2 MB      | 0%                  | 0.14%             |
| 4 MB      | 0.80%               | 0.56%             |
| 10 MB     | N/A                 | 0.93%             |

According to Nakul Chawla, the greatest challenge the team faced was replicating a 6,000-node network on a single machine. In the real network, different nodes have different processing times, a difference that is hard to reproduce when the entire network is simulated on one machine.

The traditional block propagation data shows a clear increase in propagation time as block size grows. Discarding the 100kB and 10MB outliers, the data approximates an increase of about 4.3 percentage points in orphan rate per additional MB of block size. Based on these simulations, the team recommends the Dash network not scale beyond 5MB blocks while using the traditional method.
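A quick least-squares fit over the traditional-propagation table, with the 100kB and 10MB outliers discarded, reproduces a slope in the neighborhood of the quoted figure (the exact number depends on fitting choices, so treat this as a sanity check rather than the paper's method):

```python
# Orphan rate (%) vs. block size (MB) from the traditional propagation
# table, excluding the 100 kB and 10 MB outliers.
sizes_mb = [0.2, 0.75, 1.0, 1.5, 2.0, 4.0]
orphan_pct = [0.19, 2.52, 2.27, 5.68, 7.53, 16.60]

# Ordinary least-squares slope: (n*Σxy − Σx*Σy) / (n*Σx² − (Σx)²)
n = len(sizes_mb)
sx = sum(sizes_mb)
sy = sum(orphan_pct)
sxy = sum(x * y for x, y in zip(sizes_mb, orphan_pct))
sxx = sum(x * x for x in sizes_mb)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(f"orphan rate grows ~{slope:.1f} percentage points per MB")
```

This yields roughly 4.4 percentage points per MB, consistent with the paper’s ~4.3% figure.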

However, the compact and xthin block propagation simulations make it clear that the Dash network can scale to much larger block sizes. At comparable block sizes, the data shows roughly a twentyfold decrease in orphan rate for the compact and xthin protocols, respectively. The 10MB xthin simulation shows the Dash network can scale to that size without a significant increase in orphaned blocks. The team did not run a 10MB compact block simulation, and does not expect the network to handle 10MB blocks under compact propagation.
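The reason compact and xthin relay scale better is that they avoid retransmitting transactions a peer already holds in its mempool, sending short transaction IDs instead. A back-of-the-envelope sketch (all sizes and the mempool hit rate here are illustrative assumptions, not figures from the paper; the 6-byte short ID matches Bitcoin's BIP 152 compact block design):

```python
AVG_TX_BYTES = 250    # assumed average transaction size
SHORT_ID_BYTES = 6    # short transaction ID, as in BIP 152 compact blocks
HEADER_BYTES = 80     # block header

def full_block_bytes(n_tx: int) -> int:
    """Bytes on the wire to relay a block with every transaction in full."""
    return HEADER_BYTES + n_tx * AVG_TX_BYTES

def compact_block_bytes(n_tx: int, mempool_hit_rate: float = 0.95) -> int:
    """Bytes on the wire when known transactions are replaced by short IDs.

    Transactions the peer does not already have must still be sent in full.
    """
    missing = round(n_tx * (1 - mempool_hit_rate))
    return HEADER_BYTES + n_tx * SHORT_ID_BYTES + missing * AVG_TX_BYTES

n_tx = 4000  # roughly a 1 MB block at 250 bytes per transaction
print(full_block_bytes(n_tx), compact_block_bytes(n_tx))
```

Under these assumptions the compact relay moves more than an order of magnitude fewer bytes per block, which is why propagation time, and with it the orphan rate, stays low at block sizes where full relay breaks down.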

In conclusion, the team’s data shows that scaling to 10MB block sizes is feasible for the Dash network if it switches to xthin block propagation, and that scaling to between 6MB and 8MB is feasible with compact block propagation. Furthermore, the data does not identify a limit to block sizes when these methods are implemented, which means that scaling far beyond 10MB block sizes can be considered later on. In the future, Dragan, Darren, and Nakul want to continue running simulations with larger blocks, as well as different tests to understand how the graphene protocol would affect block propagation. Along with their propagation simulations, the team is considering further research into the operation of mining pools, the role of multi-tier networks (masternodes), and GPU mining.

Link to published whitepaper