2015-02-06

Over the past couple of months there have been a number of discussions revolving around increasing the Bitcoin block size from its current 1 MB limit to 20 MB. One such plan is Gavin Andresen’s proposal (this is not to single him out, as there are others with similar proposals). The code change itself is trivial: the limit can simply be changed to any arbitrary number in a couple of keystrokes (for instance, see Vitalik Buterin discuss this at 14:15).

However, getting the majority of validating nodes, miners and the rest of the ecosystem on-board in a timely fashion is a very non-trivial matter.

Recall that, as illustrated by Organ of Corti and Dave Hudson, the average block size has increased over the past year to the point where the network will likely max out at around 3 transactions per second under the current 1 MB limit. Since many of the investors, developers and entrepreneurs in this space would like to make Bitcoin ‘competitive’ with other payment platforms such as Visa, in their view this number eventually needs to increase by several orders of magnitude.
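To put that gap in concrete terms, a back-of-the-envelope calculation makes the point; the ~500-byte average transaction size used below is an assumption for illustration, not a measured figure:

```python
# Back-of-the-envelope throughput estimate for a capped block size.
# Assumptions (not measured figures): ~500 bytes per average transaction,
# one block every 600 seconds on average.

AVG_TX_BYTES = 500            # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600  # target average block interval

def max_tps(block_size_mb: float) -> float:
    """Upper bound on transactions per second for a given block size cap."""
    block_bytes = block_size_mb * 1_000_000
    return block_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_SECONDS

print(f"1 MB cap : ~{max_tps(1):.1f} tps")   # ~3.3 tps
print(f"20 MB cap: ~{max_tps(20):.1f} tps")  # ~66.7 tps
```

Even a twentyfold increase only buys roughly 60-70 transactions per second under these assumptions, which is still far short of the Visa-scale throughput that proponents ultimately have in mind.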

Fundamentally there are two trade-offs in block size economics:

Keeping a 1 MB block size requires higher fees from end-users but results in a more decentralized network

With a larger, 20 MB block size, fees for end-users are (temporarily) subsidized, but with fewer validating nodes on the network

A quick explanation of both:

Retaining a 1 MB block size ultimately results in higher transaction fees because block space is scarce and miners will only process and include transactions based on market-based prioritization (e.g., pay more to be included faster). This would likely mean the end of certain types of transactions, such as “long chain” transactions and fee-less transactions, which have disproportionately increased the size of the blockchain over the past six months relative to actual commerce. At the same time, this design decision would have the effect of retaining some nominal decentralization: the growth in blockchain size would remain relatively linear, so the blockchain could continue to be validated by several thousand nodes, as it is today, without (much) additional cost.
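As a rough illustration of the market-based prioritization described above, here is a minimal sketch of a miner greedily filling a block by fee rate. This is not the reference client's actual selection algorithm (which also weighs coin-age "priority", among other rules), just the economic logic in code:

```python
# Minimal sketch of fee-rate-based transaction selection under a block size cap.
# Illustrative only; the reference client's real selection logic differs.

from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    size_bytes: int
    fee_satoshis: int

    @property
    def fee_rate(self) -> float:
        return self.fee_satoshis / self.size_bytes  # satoshis per byte

def select_transactions(mempool: list[Tx], block_limit_bytes: int = 1_000_000) -> list[Tx]:
    """Greedily fill a block with the highest fee-rate transactions first."""
    selected, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee_rate, reverse=True):
        if used + tx.size_bytes <= block_limit_bytes:
            selected.append(tx)
            used += tx.size_bytes
    return selected

# Example: once block space is scarce, a zero-fee transaction is crowded out.
mempool = [Tx("a", 250, 10_000), Tx("b", 500, 10_000), Tx("c", 400, 0)]
print([t.txid for t in select_transactions(mempool, block_limit_bytes=750)])  # ['a', 'b']
```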

In early March 2014 there were approximately 10,000 nodes; however, over the past year the count has declined by roughly one-third. What does the distribution of the roughly 6,400 current nodes look like?



Recall that the original value proposition of the Bitcoin blockchain was its decentralized character: the more miners and validating nodes that are geographically distributed, the less prone the network is to single points of failure. Furthermore, while many people call the various artifacts that have increased the blockchain's size “bloat,” because the blockchain is a public good and no one owns it, it is imprecise to do so (e.g., one man’s 80-byte “trash” OP_RETURN is another man’s data-storing “treasure”).

Whether consumers are sensitive to this change in fees is another matter; if demand is elastic, they may simply switch over to substitute goods (e.g., competing chains and ledgers). What does this mean exactly?

An increase to a 20 MB block size would likely perpetuate the same “low” fee (donation) structure practiced and promoted today, as there is purportedly more room for non-priority transactions. The known challenge, however, is that if 20 MB blocks became “filled,” the corresponding increase in bandwidth and disk space would impose additional costs on the validating nodes, which already operate as public goods. That is to say, a blockchain that grew by 20 MB every 10 minutes would add over 1 terabyte a year, creating additional costs for participants, likely reducing the number of validating nodes and therefore reducing the decentralization of the network.
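The storage arithmetic behind that one-terabyte figure is straightforward (this sketch ignores indexes, the UTXO set and other per-node overhead):

```python
# Worst-case blockchain growth if every block were filled to the cap.
# Ignores indexes, the UTXO set and other per-node overhead.

BLOCKS_PER_DAY = 6 * 24  # one block every ~10 minutes

def annual_growth_gb(block_size_mb: float) -> float:
    return block_size_mb * BLOCKS_PER_DAY * 365 / 1_000

print(f"1 MB blocks : ~{annual_growth_gb(1):,.0f} GB per year")   # ~53 GB
print(f"20 MB blocks: ~{annual_growth_gb(20):,.0f} GB per year")  # ~1,051 GB, i.e. over 1 TB
```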

The other challenge to Andresen’s plan is that, because the prioritization of transactions would still not shift toward fees paid to miners, it would continue the status quo in which miners largely rely on seigniorage to operate. This is an unhealthy trend, as it stalls the transition from block rewards to fees that has been part of the narrative since day one, October 31, 2008 (see section 6).

What will happen?

It is difficult to predict what exactly will happen, as the key actors in this space are still deciding where to spend their social capital.

Gavin Andresen, as recently as two weeks ago, stated that most of the large payment processors, exchanges and other service companies are on board with his plan. Furthermore, others in the community have (likely erroneously) found a correlation between market cap and transaction volume, yet as we know, correlation does not actually imply causation. Similarly, ‘Death and Taxes’ recently presented a narrative reinforcing Andresen’s view, yet for some reason glossed over the all-important miners’ perspective. Others, such as the ideological wing personified by Mircea Popescu, claim that they will fight this effort with an actual attack.

Irrespective of the size the block limit is increased to, the change will likely create at least a temporary fork, as validating nodes need to upgrade and they are not compensated for storage and traffic (Andresen’s plan is to “future proof” the protocol such that the 20 MB change is included in a patch this year but isn’t “turned on” until needed later). There is at least one open question: what is the minimum number of full nodes required for the network to operate within the current trust/security model? Unlike miners, their value to the system is hard to measure.

What the experts say

While the field is young, one expert in this space is Jonathan Levin, who modeled network propagation in his master’s thesis. I reached out to him and in his view:

I think that the 20mb proposal is untenable given the current way that blocks are propagated around the Bitcoin network. The Bitcoin network and specifically the Bitcoin miners use a gossip network to relay blocks to each other. That means that as the size of the block increases, the time that it takes to spread around the network also increases linearly. We have seen this first in the work of Decker and Wattenhofer as well as my own work.

The problem is that the increased time that blocks take to propagate around the network increases the probability of orphan races between different mining pools. If you create blocks that are 20mb and a competing pool is creating blocks under 1mb or even empty ones, they have a higher expected return per hash. This is because you would expect your blocks to lose out to smaller blocks in an orphan race if both are found in quick succession. Now we can argue that miners will continue to create large blocks out of altruism, but if we continue to increase the size of the blocks without greater utilisation of better block relaying protocols we risk breaking this equilibrium, with miners resorting to nasty strategies like creating empty blocks which suit no one.
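One rough way to see the incentive Levin describes is to model orphan risk as a function of propagation delay. The sketch below treats block discovery as a Poisson process with a 600-second mean interval and assumes propagation delay grows linearly with block size; the per-megabyte delay and fixed overhead are illustrative assumptions rather than measured values, and the model captures only the chance that a competing block appears while yours is still propagating, not the full race dynamics.

```python
# Rough model of orphan risk vs. block size, in the spirit of
# Decker & Wattenhofer's propagation measurements. The propagation-delay
# parameters below are illustrative assumptions, not measured values.

import math

BLOCK_INTERVAL = 600.0   # seconds, average time between blocks
SECONDS_PER_MB = 15.0    # assumed extra propagation delay per MB
BASE_DELAY = 2.0         # assumed fixed propagation overhead, seconds

def orphan_probability(block_size_mb: float) -> float:
    """P(a competing block is found while ours is still propagating),
    treating block discovery as a Poisson process."""
    delay = BASE_DELAY + SECONDS_PER_MB * block_size_mb
    return 1.0 - math.exp(-delay / BLOCK_INTERVAL)

for size in (0.0, 1.0, 20.0):
    print(f"{size:>4} MB block: ~{orphan_probability(size) * 100:.1f}% orphan risk")
```

Whatever the exact parameters, the direction is clear: larger blocks spend longer in flight, and every extra second in flight is another chance to lose an orphan race to a smaller, faster block.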

I also spoke with several other professionals in this space.

For instance, I spoke with Atif Nazir, co-founder of Block.io and an instructor at Blockchain University. According to him:

On the one hand, increasing block sizes, as you say, may result in lower transaction fee requirements. However, if the transaction fees actually are lowered by, say, 1000x what they are now (0.00001 is the minimum accepted by the reference client), this will lower the cost of “institutional attacks” on the Bitcoin infrastructure, where an attacker can push 1000 transactions for an erstwhile cost of 1. The attack will basically be “make infrastructure expensive to run for the average joe, drive them towards centralized infrastructure services that run APIs, Blockchain Explorers, etc.” It is good for business, bad for the decentralization of the network in the near term.

We’ve seen something like this occur on the Dogecoin Network in the past few months, where one user or a group of individuals were pushing transactions with 0 transaction fees. These transactions were accepted as valid by the Dogecoin reference clients, and as a result, caused bandwidth consumption hikes for the dorm-room nodes, which populate most of the current network(s). The resulting change by the Dogecoin Core team was to add a fee of 1.0 DOGE for every transaction, which isn’t yet mandatory, but is on its way there. The dorm-room nodes, however, are already on the decline in both Bitcoin and Dogecoin due to the increasing size of the Blockchain, and the bandwidth consumed by them.

Increasing the Block sizes sounds like a good idea for the number of transactions flowing on the network, but in the near term it will drive a lot of the nodes out of the system because of CPU/bandwidth/disk IO hikes. Increasing the Block sizes will definitely increase infrastructure costs, driving more users towards centralized places that can afford to host API services for the Blockchain. However, given this crunch on the average joe Bitcoin nodes, this will lead to a more concentrated effort towards “pick what you need” style nodes (say, SPV). Again, in the near term, the number of “full nodes” on the network will dwindle, but as more companies come into the ecosystem, this number will inevitably rise.

Bitcoin as a whole is headed towards a network where most nodes don’t actually host the entire Blockchain — increasing the block size will only accelerate this change. This will lead to more innovative solutions, and who knows, we might find a way for nodes to communicate cost-effectively rather than the current “gossip”-style protocol we use, where you inform all your peers when you hear about a new transaction. The community can be very dynamic, and I think the longer term outlook for the network looks good regardless. Bitcoin is powered by nerds like you and I, and we tend to find solutions where others walk away.
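Nazir's spam-cost point is easy to quantify using his own figures (a 0.00001 BTC minimum fee, hypothetically cut by 1000x); the dollar price used below is an assumed early-2015 value, included purely for scale:

```python
# Rough cost of transaction spam at different minimum fees,
# using Nazir's figures: 0.00001 BTC minimum, hypothetically cut 1000x.
# The BTC price is an assumption for illustration only.

MIN_FEE_BTC = 0.00001
BTC_PRICE_USD = 220.0   # assumed early-2015 price, for scale only

def spam_cost_usd(n_transactions: int, fee_btc: float) -> float:
    return n_transactions * fee_btc * BTC_PRICE_USD

for fee in (MIN_FEE_BTC, MIN_FEE_BTC / 1000):
    print(f"1,000,000 spam txs at {fee:.8f} BTC each: "
          f"${spam_cost_usd(1_000_000, fee):,.2f}")
```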

Nazir raises an interesting point about the hypothetical time horizon over which such a transition (between the short term and the long term) could take place.

Another individual who has done a lot of modeling of incentives, mining and block sizes is Dave Hudson, a software developer who also writes at HashingIt. According to him:

Changes to the distributed consensus software within Bitcoin raise really interesting questions about the evolution of cryptocurrencies and how truly decentralised they really are. With each change we’re actually seeing something interesting happen where the ongoing participants in the system all effectively agree to move to a new system: BTC becomes BTC’ becomes BTC”, etc. We might be calling BTC” Bitcoin but any legacy nodes running BTC’ or BTC also think they’re Bitcoin too. At some point in time something happens and the various systems start to disagree about what is or isn’t valid and those could be very subtle. Imagine for example that BTC” introduced a subtle change that inadvertently made some of Satoshi’s coins unspendable; nobody might ever know until someone with Satoshi’s keys tries to spend their Bitcoins. Arguably it might already have happened as the result of some random compiler bug (not a fault in the Bitcoin-core code, but a bug in the way that’s transformed into something that runs on the node CPUs).

Clearly the Bitcoin-core developers try very hard to ensure that this sort of thing doesn’t happen by accident, but in order to sustain all participants holdings within the system they really do have to try to ensure that every node moves from BTC to BTC’ to BTC”, etc. In order to do this they essentially have to persuade everyone to migrate to each new version within some specific time window.

Now let’s imagine for a moment that instead of miners all tending to mine through centralised infrastructure (mining pools), that we really did have true decentralisation and had hundreds of thousands, or millions, of nodes that all did their own transaction selection and mining. Perhaps they’re even embedded into things that their users didn’t even realise were contributing to mining. At this scale it would probably be almost impossible to get them all to move to adopt a planned fork. We would either see the protocol totally stagnate or else we would see potentially very significant forks occurring.

In practice the system holds together in a cohesive way because, in the absence of a precise protocol spec, the core devs try to ensure that everyone uses the same consensus-critical software, runs it on the same sorts of hardware that all do things the same way and with some reasonably consistent set of capabilities.

It seems a slight irony that one of the key factors in the successful maintaining and sustaining of the Bitcoin network is continual centralised action, and that things aren’t actually massively decentralised.

This last point is intriguing in that a lot of the software in this space is still relatively homogeneous. If a network were to scale to become as distributed (or decentralized) as is hoped, while simultaneously incorporating many nodes and clients, then the diversity (or lack thereof) of developer tools could prevent or perhaps even incentivize attacks (e.g., if every actor in the ecosystem uses the same client, that could create a vulnerability for the network).

In an exchange with Peter Todd, a contributor and developer on Bitcoin Core and other related protocols (such as ClearingHouse), he framed the issue:

At the recent O’Reilly Media conference basically I pointed out that because this is an externality / tragedy-of-the-commons problem we may have to see Bitcoin fail due to a blocksize increase first before the community actually groks the issue. Personally I’m inclined to not oppose a blocksize increase on these grounds – Bitcoin failing cleanly is probably good for my interests.

In terms of “getting people on board” – to a degree you inherently can’t do this, because a blocksize increase will inherently exclude people from the system. See for example the discussion between Greg Maxwell and Gavin Andresen several weeks ago on the #bitcoin-dev IRC channel.

I spoke with Robert Sams, co-founder of a fintech startup who has previously written analysis covering the marginal costs of Bitcoin-like systems. In his view:

Levin’s point about network propagation is key: mining a larger block has a lower expected return b/c of the increased probability of losing out to a smaller block in an orphan race.

Now all of what you argue is a totally sound economic conjecture based on the assumption of distributed mining economics. Miners include tx until the marginal cost of tx inclusion (opportunity cost of including a different tx when up against the block limit + block propagation effect) equals marginal revenue (the fee).

However, for me the crucial economic force here is what happens to fees under concentrated mining. The logic changes from the marginal-cost-equals-marginal-revenue logic in the above distributed case to a more strategic, oligopolistic pricing dynamic. What I mean is this. In the distributed case, whether or not a given miner includes a given tx has no material effect on the expected confirmation time for the tx sender. But in the concentrated mining scenario it does. If some pool is 35% of the network, the decision by that pool to not include the tx will materially increase the confirmation time of that transaction. So miners can extract more of the value that tx senders place on fast confirmation times by setting their own minimum fee threshold, knowing that this threshold will over time affect the fees that tx senders include. What that optimal threshold is depends upon how much senders are willing to pay for faster tx confirmation times. Who knows what that is, but the implication is clear: under concentrated mining, fee levels will start to reflect more what tx senders are willing to pay rather than the cost to miners of including them.

So when you cast the blocksize issue in this concentrated mining context, it’s really not clear what will happen. My bets are that fees will go up and we won’t have to worry about blocksizes because higher fees will act as a brake on adoption.
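To make Sams's concentrated-mining point concrete, here is a toy calculation (my framing, not his) of how expected confirmation time stretches when a pool with a given hashrate share withholds a transaction while all other miners include it:

```python
# Toy illustration of Sams's point: if a pool controlling a given share of
# hashrate withholds a transaction (and all other miners include it), the
# expected wait stretches from one block to 1 / (1 - share) blocks.

BLOCK_INTERVAL_MINUTES = 10

def expected_confirmation_minutes(excluding_share: float) -> float:
    """Expected wait until a block mined by someone willing to include the tx."""
    return BLOCK_INTERVAL_MINUTES / (1.0 - excluding_share)

for share in (0.0, 0.10, 0.35):
    print(f"{share:.0%} of hashrate excludes the tx -> "
          f"~{expected_confirmation_minutes(share):.1f} minutes expected")
```

A 35% pool that withholds a transaction pushes the expected wait from roughly 10 minutes to over 15, which is exactly the kind of material delay that gives such a pool pricing power over fee-sensitive senders.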

If block sizes are increased, we will learn a lot about the dynamics of the community, the interplay that incentives such as fees and seigniorage have in on-boarding (and off-boarding) miners, as well as how price-sensitive users in this space are.

Ultimately it is the miners who decide, as they are the entities creating Sybil protection and preventing double-spend attacks (or, in some cases, providing that service). Or as Raffael Danielli, a quantitative research analyst at ING, explained:

In theory, fee rewards should incentivize miners to include as many transactions as possible. In reality, though, fee rewards are a tiny percentage of block rewards and the risk-reward ratio simply doesn’t add up at the moment (risking an (almost) sure 25 BTC payoff to get a potential, say, 25.1 BTC). What are the rational incentives for miners to upgrade and actually fill 20mb blocks? At the moment there are none that I am aware of. If there are no incentives for miners then this is not going to happen. Period. There is no altruism when it comes to mining and anyone who bets on it is in for a rude awakening.
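Danielli's risk-reward arithmetic can be restated as a break-even condition. The 25 BTC versus 25.1 BTC figures are his; the orphan-risk framing is an illustrative assumption that ties his point back to Levin's propagation argument:

```python
# Break-even orphan risk for Danielli's 25 BTC vs. 25.1 BTC example:
# filling a larger block for extra fees is only rational if the extra
# orphan risk it creates stays below delta_fee / (reward + delta_fee).
# The orphan-risk framing is an illustration, not his wording.

BLOCK_REWARD = 25.0   # BTC, the then-current subsidy
EXTRA_FEES = 0.1      # BTC, fees gained by filling the larger block

break_even = EXTRA_FEES / (BLOCK_REWARD + EXTRA_FEES)
print(f"Filling the block is only worthwhile if it adds less than "
      f"~{break_even * 100:.2f}% orphan risk")   # ~0.40%
```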

But this crosses over into the new field of cryptoeconomics which is a topic for another day.

[Thanks to Anton Bolotinksy for his thoughts on measuring the value of nodes within the system.]
