A few results from the first intentional stress test on a communal blockchain

I have covered the issue of increasing the Bitcoin block size a few times in the past.

Three days ago, several individuals within the development community (and on reddit) repeatedly sent transactions onto the network via scripts in order to test how the network would handle, and be impacted by, a large increase in transactions.
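
To give a sense of what such a script involves, below is a minimal sketch (not the participants' actual code, which was not published here), assuming a local Bitcoin Core node with RPC credentials configured and the python-bitcoinrpc library installed.  It simply loops, sending small fee-paying payments back to the sender's own wallet:

import time
from bitcoinrpc.authproxy import AuthServiceProxy

# Illustrative only: assumes a funded local wallet and rpcuser/rpcpassword
# set in bitcoin.conf; every call below is a standard Bitcoin Core RPC.
rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")
dest = rpc.getnewaddress()        # send the "spam" back to our own wallet
amount = 0.0001                   # small, above-dust payment in BTC

for i in range(1000):             # one loop iteration per transaction
    txid = rpc.sendtoaddress(dest, amount)
    print(i, txid)
    time.sleep(0.5)               # throttle so the local node keeps up

Each transaction still pays a standard relay fee, which is what makes the flood costly rather than free (more on the economics below).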

Below are multiple graphs illustrating what this traffic looked like relative to “normal” days:

blockrio graphs

Source: blockr.io (over the past 30 days)

Above are two charts from Blockr.io illustrating the block sizes over time and average block fee over the past 30 days.

transaction fees in USD

Source: Blockchain.info (fees denominated in USD)

transactions per day

Source: Blockchain.info (number of transactions per day including popular addresses)

excluding chains

Source: Blockchain.info (excluding chains longer than 10)

Above are three charts from Blockchain.info covering the past year (365 days) of activity: fees to miners, transactions to all addresses (including popular ones), and transactions excluding chains longer than 10 (see Slicing data for an explanation).

statoshi clearing

Data Source: Statoshi.info / Image source (reddit thread)

Above is a screengrab from Statoshi.info (run by @lopp).  It illustrates the roughly 20-hour period in which this stress test took place.

Results

Multiple reddit threads attempted to break down the findings; below are some of their observations, with slight amendments:

  • A peak of approximately 24,000 unconfirmed Bitcoin transactions was reached (see the mempool-polling sketch after this list)
  • Nearly 133,000 transactions were included in blocks during one day, a new all time high
  • Blocks became full starting at block 358596 at 23:38 UTC and remained consistently full until block 358609 at 03:21 UTC
  • The majority of mining pools cap block size at 0.75 MB instead of 1 MB
  • Some transactions were “mysteriously” not seen on the network until roughly 2 hours after their actual broadcast time (broadcast between 23:00 and 24:00 UTC, yet shown as 02:54 UTC)
  • The majority of low fee/minimum fee transactions required 3-4 hours for the first confirmation
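
Figures like the ~24,000 unconfirmed-transaction peak above are typically read straight from a node’s mempool rather than from a block explorer.  A minimal polling sketch, assuming Bitcoin Core 0.10 or later (which exposes getmempoolinfo) and python-bitcoinrpc:

import time
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")

while True:
    info = rpc.getmempoolinfo()   # returns the tx count and total bytes queued
    print(time.strftime("%H:%M:%S"),
          "unconfirmed txs:", info["size"],
          "mempool bytes:", info["bytes"])
    time.sleep(60)                # one sample per minute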

Brute force fan fiction

While not necessarily a surprise, for approximately $3,000 an individual can effectively spam the network, filling up blocks and annoying users for several hours.  Because it became increasingly expensive to get transactions included in blocks, the “attack” is probably not the most effective way to slow large numbers of transactions down permanently.
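
That ~$3,000 figure is easy to sanity-check with a back-of-envelope calculation.  The inputs below are rough mid-2015 assumptions (average transaction size, a “standard” fee and the exchange rate), not measurements taken from the test itself:

block_size_bytes = 1000000       # 1 MB protocol maximum per block
avg_tx_bytes = 250               # typical simple transaction
blocks_per_hour = 6              # ~10 minute block interval
fee_per_tx_btc = 0.0001          # common "standard" fee at the time
usd_per_btc = 230                # rough mid-2015 exchange rate

txs_per_block = block_size_bytes // avg_tx_bytes             # ~4,000
hourly_cost = txs_per_block * blocks_per_hour * fee_per_tx_btc * usd_per_btc
print("~%d txs per block, ~$%.0f per hour, ~$%.0f for six hours"
      % (txs_per_block, hourly_cost, hourly_cost * 6))

Under those assumptions the flood costs on the order of $550 per hour, so several hours of full blocks lands squarely in the ~$3,000 range.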

Yet it does show that the Maginot Line narrative — that the only way to “attack” the network is to acquire hundreds of millions of dollars in hashing power to brute force the network — is just fan fiction.  A well-organized and minimally financed group of savvy internet users — not even professional hackers — can create headaches for settlement systems, payment processors or anyone else running time sensitive applications reliant on a public blockchain.

Thus, as Robert Sams pointed out a couple weeks ago: it would probably be financially irresponsible for a large organization like NASDAQ to use a communal blockchain — whose pseudonymous validators are not held contractually liable or accountable for transaction processing (or attacks thereof) — to clear and settle off-chain assets (Ryan Selkis briefly touched on a similar point last week as well).  Whether this kind of test convinces NASDAQ and others to rethink their pilot programs on a public blockchain is an open question.

Governance issues with “the commons”

Over the past 4-5 weeks there have probably been well over a hundred reddit threads, blog posts and Bitcoin Talk forum posts related to increasing the block size.

Rather than rehash all of the arguments here, I will note that the decision to increase block sizes seems to boil down to two things:

  1. Conflicts in governance (e.g., politics and special interest groups)
  2. Subjectivity in how many nodes represent “decentralization”

The first issue is much harder, perhaps impossible, to solve because no one owns the network — it is a communal, public good.  Because the network chronically lacks a clear and effective governance model, decisions are typically made based on how many retweets someone gets, how many upvotes a poster receives, or, increasingly, Six Degrees of Satoshi: how often Satoshi directly responded to your comments in the past.

We see this quite frequently, with the same clique of developers using a type of argument from authority.  Perhaps they are correct and one person was left “in charge” by fiat — by Satoshi one spring morning in 2011.  Yet it was not Satoshi’s network to “give” in the first place — he was not the bona fide owner.  No one is, which presents a problem for any kind of de jure governance.1

gavin mike hearn

Source: reddit

The second issue, how many validating nodes are needed for decentralization, is one that Vitalik Buterin, Jae Kwon and several others have been discussing for over six months, if not longer.

In short, as block sizes increase, fewer validating nodes will operate on the network due to a number of factors, largely related to the economic costs of running them (bandwidth is typically cited as the biggest consideration).  We have seen this occur empirically over the past 18 months on the Bitcoin blockchain, with the number of validating nodes dropping from over 13,000 in March 2014 to just under 6,000 today.
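
To make the bandwidth concern concrete, here is a rough, illustrative calculation of what a listening full node would upload at different block sizes; the relay redundancy factor is an assumption chosen only to show how the cost scales, not a measured value:

blocks_per_month = 6 * 24 * 30   # ~4,320 blocks per month
relay_redundancy = 8             # assume block/tx data is sent to ~8 peers

for block_mb in (1, 8, 20):
    monthly_gb = block_mb * blocks_per_month * relay_redundancy / 1024.0
    print("%2d MB blocks -> roughly %d GB per month upstream"
          % (block_mb, monthly_gb))

At 20 MB blocks the upstream traffic alone in this sketch approaches two thirds of a terabyte a month, a very different proposition for a hobbyist node than the roughly 34 GB implied by 1 MB blocks.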

Appealing to amorphous social contracts

Social contracts have historically fallen apart because of their nebulous mandates, and non-governmental versions in particular typically lack explicit enforcement mechanisms.

Bitcoin suffers from both.  There are no terms of service or explicit service agreement for the end user.  Nor is there a way to enforce an “ethos” onto a physically decentralized userbase.

Yet, ironically, several key developers are now appealing to a social contract to decide how block sizes should and should not evolve.

Irrespective of what is decided on social media, a solution will ultimately arise in the coming months, but not everyone will be happy with it.

How to solve this in the future?  What are other projects doing?

Tezos, if it comes to be seen as valuable or safe (because others are using it, or because it has been scientifically verified), has a self-amending model that bakes governance into the code itself.

Ethereum is also trying to create specific, technical ways for “explicit governance” to direct its evolution as it reaches certain milestones.  For instance, its developers plan to eventually transition the network from proof-of-work to proof-of-stake (via a poorly marketed “bomb”).

Whether either of these projects is successful is another topic, but at least the developers recognize the governance issue as paramount to the ultimate “success” of the project.

Other projects in the distributed ledger arena, such as the “permissioned” ledgers I wrote a report (pdf) on last month, do not have this type of governance problem because they each have a private sponsor (sometimes in the form of an NGO, sometimes a company) where the buck finally, explicitly stops.

There may be non-technical ways to govern (via organizational structure), but Bitcoin’s model is ad hoc and largely devolves into unproductive shouting matches.  Is this really how a financial system and series of products is best developed?  Probably not.

But this is a topic for political archaeologists to pore over in the coming years.

Other experts weigh in

Chun Wang, who is a member of the F2Pool operating team (F2Pool, also known as Discus Fish, is one of the largest mining pools), made the following comment two days ago on the Bitcoin development mailing list:

Hello. I am from F2Pool. We are currently mining the biggest blocks on the network. So far top 100 biggest bitcoin blocks are all from us. We do support bigger blocks and sooner rather than later. But we cannot handle 20 MB blocks right now. I know most blocks would not be 20 MB over night. But only if a small fraction of blocks more than 10 MB, it could dramatically increase of our orphan rate, result of higher fee to miners. Bad miners could attack us and the network with artificial big blocks. As you know, other Chinese pools, AntPool, BW, they produces ASIC chips and mining mostly with their own machines. They do not care about a few percent of orphan increase as much as we do. They would continue their zero fee policy. We would be the biggest loser. As the exchanges had taught us, zero fee is not health to the network. Also we have to redevelop our block broadcast logic. Server bandwidth is a lot more expensive in China. And the Internet is slow. Currently China has more than 50% of mining power, if block size increases, I bet European and American pools could suffer more than us. We think the max block size should be increased, but must be increased smoothly, 2 MB first, and then after one or two years 4 MB, then 8 MB, and so on. Thanks.

I reached out to Andrew Geyl (Organ of Corti) to see what was on his mind.  He independently concurred with LaurentMT, who suggested re-running the tests a few more times for more data:

The transaction “stress test” was well overdue. It’s impossible to understand exactly how increasing block sizes (or even reducing time between blocks) will affect transaction confirmations if we’re only using the network to capacity, and Testnet won’t be much use.

By ensuring that there were more transactions than could be confirmed, we understand a little more about the limits of the network’s transaction transmission capacity. As soon as I get access to relevant data I’ll be trying to determine what factors limited the rate of transactions per block per second.

I think this “stress test” should be run again at some point on a Sunday (when it will have least impact on network users) and – to account for variance in block making – for longer than just 8 hours. Maybe 24 hours? If we are warned ahead of time, this might be more palatable to the bitcoin users. Think of it as preventative maintenance.

I also reached out to Dave Hudson, proprietor of HashingIt.com.  He has run a number of models over the past year; two notable posts still stick out: 7 Transactions Per Second? Really? and The Myth Of The Megabyte Bitcoin Block.  Below are his new comments:

I’d really like to have time to think about the stress test some more and to look at the numbers, but it demonstrates something that I’m pretty sure a number of people have considered before: 51% attacks are not the biggest cause for concern with Bitcoin; there are dramatically easier ways to attack the system than to build 350 PH/s of hardware.

The delays resulting from large numbers of TX’s sent to the network were entirely predictable (I did the sims months ago).

I doubt this is the only problem area. Consider (and this has been raised a lot in discussions over block size increases) that a lot of miners use the relay network. Attacking that, or shutting it down via some means would certainly set things backwards, especially if we do see larger block sizes.

Other attacks would be massive-scale Sybil attacks. I know there’s the whole argument that it can’t be done, but of course it can. It would be trivial to set up malware that turned 100s of thousands of compromised systems into Bitcoin nodes (even better if this could be done against something embedded where users don’t run malware detection).

It seems to me that the fact this hasn’t happened before is because those people interested in Bitcoin at the moment are more interested in seeing it useful than in bringing it down. When cybercriminals are extorting money in Bitcoin then they want to see it succeed too, but my guess is that if they could find some other equally anonymous way to get paid then we’d have seen some large-scale assaults, not just a few thousand extra TXs done as a thought experiment.

The problem here is that most software designers can build really good working systems. They can follow secure coding rules to ensure that their software doesn’t have resource leaks and network security vulnerabilities, but then they don’t consider any part of the system that might not be under their direct control. It’s the assumed-correct behaviour of the rest of the world that tends to be where major risks come in. Constructing a Maginot Line is a waste of time and money when the attacker bypasses it instead. In fact the perceived strengths of a defence usually lead to complacence. The stress test was a great example of this; huge amounts of time have been spent analyzing 51% attacks when this was probably the least likely attack even years ago. It’s essentially back to the crypto geek cartoon where the super-strong password is not cracked technologically, but instead by threatening its owner.
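
For readers who have not run such simulations, a toy queueing model is enough to show why the delays were predictable.  This is not Hudson’s model; it is a minimal sketch with assumed arrival and capacity rates, in which transactions arrive faster than blocks can clear them, so the backlog (and therefore the wait for a first confirmation) grows steadily:

arrivals_per_block = 6000        # assumed spam + organic txs per ~10 minutes
capacity_per_block = 4000        # ~1 MB of ~250-byte transactions

backlog = 0
for block in range(1, 13):       # simulate roughly two hours of blocks
    backlog = max(0, backlog + arrivals_per_block - capacity_per_block)
    wait_blocks = backlog / float(capacity_per_block)   # FIFO wait for a new tx
    print("block %2d: backlog %6d txs, ~%d minute wait for a newly queued tx"
          % (block, backlog, wait_blocks * 10))

After a couple of simulated hours the backlog in this toy model is in the same ~24,000-transaction range that was observed during the actual test.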

Despite what some entrepreneurs and venture capitalists have proclaimed — that there is a “scalability roadmap” — this is probably not the last time we look at this.

There are certainly proposed roadmaps that scale, to a point, but there are many trade-offs.  And it appears that some of the hosted wallet and payment processors that have publicly stated they are in favor of Gavin Andresen’s proposal are unaware of the impact this type of block size increase would have: how it likely accelerates the reduction in nodes, and how that in turn likely creates a more centralized network (yet with the costs of decentralization).  Or maybe they are aware and simply do not think it is a real issue.  Perhaps they are correct.

One final comment — and this is tangential to the conversation above — is that by looking at the long chain exclusion chart we observe that the additional “stress test transactions” appear as normal unchained transactions.

This is interesting because it illustrates how easy it is to inflate the transaction volume metric, making it less useful for measuring the health or adoption of the network.  Thus it is unlikely that some (all?) Bitprophets actually know what these transactions comprise when they claim the Bitcoin network has reached “an all time high.”  Did they do forensics and slice the data?
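
For anyone who wants to do that slicing themselves, the “exclude chains longer than 10” heuristic is straightforward once each transaction’s same-day parent is known.  The sketch below uses a hypothetical parent_of mapping (in practice it would be built from a block parser or an API) and simply drops transactions whose ancestor chain exceeds the cutoff:

def chain_length(txid, parent_of, cutoff=10):
    # Walk back through same-period parents, stopping once we pass the cutoff.
    length = 0
    while txid in parent_of and length <= cutoff:
        txid = parent_of[txid]
        length += 1
    return length

def exclude_long_chains(txids, parent_of, cutoff=10):
    # Keep only transactions whose ancestor chain is at most `cutoff` long.
    return [t for t in txids if chain_length(t, parent_of, cutoff) <= cutoff]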

See also: Creating a decentralised payment network: A study of Bitcoin by Jonathan Levin and Eclipse Attacks on Bitcoin’s Peer-to-Peer Network by Heilman et al.

  1. See Bitcoin faces a crossroads, needs an effective decision-making process by Arvind Narayanan
