The lack of scalability is the biggest obstacle that the blockchain space is facing right now. In this two-parter, we will be looking at the various blockchain scalability techniques developers are working on to fix this problem. Today, in part 1, we will be looking at Layer-1 scalability techniques.
At a glance, these techniques are as follows:
Increasing Block Size: When you increase the size of individual blocks, you can clear transactions faster. However, bulkier blocks can be difficult to mine, which leads to more centralization.
Reducing Block Time: Reducing the block time and mining at a faster rate can increase chain speed. However, it could lead to more orphan blocks (i.e., perfectly legitimate blocks that must be discarded by the network).
Consensus Algorithms: Choosing the consensus algorithm that gives you sufficient speed without sacrificing decentralization could be crucial.
Sharding: In sharding, we split the entire blockchain state (aka all the data stored within the blockchain) into smaller chunks called “shards.” The whole network can process these shards in parallel and execute operations faster.
If you want to know more, then click the “Savvy” button above. If you want a deeper dive, click on “Genius.” Otherwise, stay tuned for part 2 where we will explore Layer-2 Scalability Techniques.
Blockchain Scalability Techniques: Layer 1
Scalability has long been identified as the “holy grail” of crypto. As a result, developers worldwide are working on various systems and techniques to improve overall scalability. These techniques can be divided broadly into the following:
Layer-1 techniques.
Layer-2 techniques.
Today, we will be focusing on Layer-1 techniques, specifically:
Increasing block size.
Decreasing block time.
Choosing the appropriate consensus algorithm.
Sharding.
Blockchain Scalability Technique #1: Increasing Block Size
If you were involved in the crypto space in 2017, you would be pretty well-versed with the block size debate. TL;DR, the Bitcoin community split into Bitcoin and Bitcoin Cash because of a significant disagreement over block size. As a result, Bitcoin blocks remained 1 MB, while Bitcoin Cash blocks were around 8 MB at the time of the split. Currently, Bitcoin Cash blocks are 32 MB. There are some points one can make in favor of increasing the overall block size.
A larger block will hold more transactions, so it should clear the mempool (queue for pending transactions) much faster.
Miners will be able to collect more fees since larger blocks = more transactions = more transaction fees.
However, there are some faults on the flip side as well:
Increasing the block size of a network requires you to hard fork the protocol, which may not be advisable.
Larger blocks take a longer time to fill up and propagate throughout the network. This increases the chances of double-spends.
Increasing the block size may not be the most logical solution since it doesn’t seem to have an exponential effect on scalability. For example, Bitcoin Cash blocks are currently 32 times larger than Bitcoin’s, yet its throughput is only 15-16x higher.
Large blocks require a lot of hash power for mining. This is why pools with higher hashrate are bound to mine these blocks more frequently, leading to centralization.
Blockchain Scalability Technique #2: Decreasing Block Time
Another potential technique that a protocol can use at layer-1 is to reduce block time. Block time is the time required by the protocol to mine a block and add it to the blockchain. For example, the expected block time of Bitcoin is 10 mins. The 10-min block time gives miners in the Bitcoin network enough time to learn what’s going on within the network and to focus their efforts on the correct blocks. Think about it like this: if a network’s block time is 2 mins, and it takes 1 min for a miner to know that there is already a valid block propagating in the system, they are wasting 50% of their time. In Bitcoin, that waste percentage drops to 10%. However, there is a legit case for short block times as well. In a Byzantine Fault Tolerant security model, faster block times ensure that a chain quickly adopts the correct fork over the incorrect fork, mitigating issues like double-spending attacks.
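To make that waste percentage concrete, here is a minimal back-of-the-envelope sketch in Python; the one-minute propagation delay is simply the illustrative figure from the paragraph above:

```python
def wasted_fraction(propagation_delay_min: float, block_time_min: float) -> float:
    """Fraction of mining effort spent on blocks that are already superseded,
    assuming miners keep working on a stale tip for the full propagation delay."""
    return propagation_delay_min / block_time_min

print(wasted_fraction(1, 2))   # 0.5 -> 50% wasted with a 2-minute block time
print(wasted_fraction(1, 10))  # 0.1 -> 10% wasted with Bitcoin's 10-minute block time
```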
Blockchain Scalability Technique #3: Consensus Algorithms
A consensus algorithm is a mechanism that a decentralized WAN (wide area network) can use to reach an agreement. The idea is to ensure that an agreement can be reached that benefits the entire group. Satoshi Nakamoto figured out a clever way to create a consensus algorithm that would work even if a large part of the network were actively acting against the system. This class of consensus algorithms is called “Nakamoto Consensus.” However, in many modern consensus algorithms, a leader is chosen from the entire network, and they are solely responsible for mining blocks in the allotted time period. This makes them faster than standard Nakamoto consensus, wherein governance depends on a supermajority of the network.
Blockchain Scalability Technique #4: Sharding
Sharding is a very well-known technique used in databases. Bulky databases are horizontally partitioned into smaller, more manageable chunks called “shards.” By sharding, the blockchain will be able to parallelize its processes and increase overall speed and efficiency. Breaking up the blockchain state into several shards essentially allows the network to divide and conquer.
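As a highly simplified illustration of that divide-and-conquer idea (not any specific protocol’s actual design), transactions could be routed to shards by hashing the sender’s address, so each shard only executes its own slice; every name and number below is made up for illustration:

```python
import hashlib
from collections import defaultdict

NUM_SHARDS = 4  # illustrative; real designs use many more shards

def shard_for(address: str) -> int:
    """Map an account address to a shard by hashing it."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

def route_transactions(txs: list) -> dict:
    """Group transactions by the sender's shard so each group can be
    executed independently (and therefore in parallel)."""
    shards = defaultdict(list)
    for tx in txs:
        shards[shard_for(tx["from"])].append(tx)
    return dict(shards)

txs = [{"from": "0xabc", "to": "0xdef", "value": 1},
       {"from": "0x123", "to": "0x456", "value": 2}]
for shard_id, group in route_transactions(txs).items():
    print(f"shard {shard_id} processes {len(group)} tx(s)")
```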
Conclusion
Do you want to know more about Layer-1 scalability? Then definitely check out the “Genius” version for a deeper understanding. Otherwise, stay tuned for the next part in this series – Blockchain Scalability Techniques: Layer 2.
Blockchain Scalability Techniques: Layer 1
Scalability is undoubtedly the most significant challenge faced by blockchain protocols. Unfortunately, decentralized protocols like Bitcoin and Ethereum have low scalability since their performance, speed, and throughput aren’t at the level required to support a burgeoning ecosystem. So, what can be done to improve this glaring problem? Developers from around the world are working on various systems and techniques to improve overall scalability. These techniques can be divided broadly into the following:
Layer-1 techniques.
Layer-2 techniques.
So, let’s do a two-parter! Today, in part 1, we will be focusing on Layer-1 techniques.
Layer-1 Blockchain Scalability Techniques
“Layer-1” refers to the main underlying blockchain of the protocol. Layer-1 scalability techniques are the changes or innovations added to the core blockchain itself to improve performance. Due to the nature of these techniques, it is challenging to integrate them into an already existing protocol. As such, projects are advised to augment their blockchain with these techniques during the early stages of protocol development. We will be looking at the following Layer-1 scalability techniques:
Increasing block size.
Decreasing block time.
Choosing the appropriate consensus algorithm.
Sharding.
Blockchain Scalability Technique #1: Increasing Block Size
Alright, let’s start off with some controversy! If you were involved in the crypto space in 2017, you would be pretty well-versed with the block size debate. While there are plenty of articles out there covering this topic, here is a small overview of the whole situation:
Due to Bitcoin’s scalability problem, a segment of the community believed that increasing the block size was the way to go forward.
Satoshi Nakamoto had initially put a 1 MB limit on the blocks to prevent spam transactions.
The pro-block size segment believed this was a temporary measure, and Nakamoto fully intended to remove the limiter later.
Anyway, this segment branched off from Bitcoin and created Bitcoin Cash in 2017. Back then, Bitcoin Cash had a block size of 8 MB.
Bitcoin Cash has since forked further into Bitcoin Cash and Bitcoin SV. Bitcoin Cash now has a block size of 32 MB, while Bitcoin SV has a 128 MB upper cap.
Making a case for increasing the block size
There are some points one can make in favor of increasing the overall block size.
A larger block will hold more transactions, so it should be able to clear the mempool (queue for pending transactions) much faster.
Miners will be able to collect more fees since larger blocks = more transactions = more transaction fees.
Also, if you really think about it, increasing the block size did result in boosting throughput. For example, Bitcoin Cash does ~116 transactions per second, and Bitcoin SV does 300 transactions per second. Compared to Bitcoin’s 7-10 transactions per second, that’s pretty impressive (see the rough math below).
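As a rough sanity check on those figures, here is a back-of-the-envelope throughput estimate derived from block size and block time; the ~400-byte average transaction size and 10-minute block interval are assumptions used purely for illustration:

```python
def est_tps(block_size_mb: float, block_time_sec: float, avg_tx_bytes: int = 400) -> float:
    """Rough upper bound on transactions per second, given block size and block time."""
    txs_per_block = (block_size_mb * 1_000_000) / avg_tx_bytes
    return txs_per_block / block_time_sec

# Assuming ~10-minute blocks and the block sizes mentioned above:
print(round(est_tps(1, 600)))    # Bitcoin (1 MB):       ~4 tps
print(round(est_tps(32, 600)))   # Bitcoin Cash (32 MB): ~133 tps
print(round(est_tps(128, 600)))  # Bitcoin SV (128 MB):  ~533 tps
```

Real-world numbers differ because average transaction sizes and fill rates vary, but the roughly proportional relationship between block size and throughput is clear.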
However, there is another side to this debate.
The dangers of increasing block size
Increasing the block size of a network requires you to hard fork the protocol, which may not be advisable.
Larger blocks take a longer time to fill up and propagate throughout the network. This increases the chances of double-spends. Double-spending is a term used to describe the phenomenon where someone uses the same token/digital coin twice.
Increasing the block size may not be the most logical solution since it doesn’t seem to have an exponential effect on scalability. For example, Bitcoin Cash blocks are currently 32 times larger than Bitcoin’s, yet its throughput is only 15-16x higher.
Large blocks require a lot of hash power for mining. This is why pools with higher hashrate are bound to mine these blocks more frequently, leading to centralization.
In fact, let’s expand on that last point a bit more. Miners in a decentralized network must download and maintain full nodes. In a protocol with larger blocks, it becomes all the more expensive to run these nodes. Also, forcing a network toward centralization is not the smartest way to go when the core proposition of your offering is a “decentralized network.”
Blockchain Scalability Technique #2: Decreasing Block Time
Another potential technique that a protocol can use at layer-1 is to reduce block time. Block time is the time required by the protocol to mine a block and add it to the blockchain. For example, the expected block time of Bitcoin is 10 mins. However, the reality is quite different. Check out the following chart on the actual block times of BTC.
[Chart: historical BTC block times. Image: Bitinfo Charts]
As you can see, the actual block time can vary wildly from the expected block time. So, what exactly determines this block time? In Bitcoin’s case, Satoshi put in a 10-min block time as a compromise between first confirmation time and the amount of work wasted due to chain splits. What do we mean by this? After a miner successfully mines a block, they have to propagate it throughout the network for approval, which takes time. During this propagation time, other miners can still mine a block based on an older snapshot of the blockchain, which would be a complete waste of their efforts, leading to orphaned blocks. The 10-min block time gives miners enough time to learn what’s going on within the network and to focus their efforts on the correct blocks. Think about it like this: if a network’s block time is 2 mins, and it takes 1 min for a miner to know that there is already a valid block propagating in the system, they are wasting 50% of their time. In Bitcoin, that waste percentage drops to 10%.
What determines block time?
Now, how does a decentralized network maintain a somewhat consistent block time? For that, you need to know about a metric called “difficulty.” The difficulty is a number determined by the protocol that increases or decreases the amount of work required to mine a block in the network. So, when mining is getting easier, the network increases the difficulty metric, and vice-versa.
But what about shorter block times?
Alright, so having a long block time reduces waste and the orphan block rate (orphan blocks are perfectly legit blocks that are discarded by the network due to redundancy), which helps ensure finality. Finality is the guarantee that cryptocurrency transactions cannot be altered, reversed, or canceled. However, what about protocols like Cardano and Ethereum, which have a mere 15-20 second block time? Have they somehow missed a trick?
Well, not quite.
In a Byzantine Fault Tolerant security model, faster block times ensure that a chain quickly adopts the correct fork over the incorrect fork and mitigates issues like a double-spending attack. Why is that so? Think about it like this. In a 10 min time period:
Only one block gets added to the Bitcoin blockchain.
Around 30 blocks get added to the Ethereum/Cardano blockchain.
Now, which transaction is more secure from double-spending? The one that’s committed in the latest block, or the one that’s already committed and held securely ~30 blocks in? Probably the latter, right? However, a faster block rate also leads to centralization since entities owning more powerful mining equipment or a larger stake in the protocol will inevitably get to mine more blocks.
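Stepping back to the “difficulty” metric described above, here is a minimal sketch of a Bitcoin-style retargeting rule; the retarget interval, target block time, and 4x clamp mirror Bitcoin’s actual parameters, while the function name is just for illustration:

```python
RETARGET_INTERVAL = 2016   # blocks between difficulty adjustments (as in Bitcoin)
TARGET_BLOCK_TIME = 600    # desired seconds per block (10 minutes)

def retarget_difficulty(old_difficulty: float, actual_interval_seconds: float) -> float:
    """Scale difficulty so the next retarget window again takes ~two weeks.
    Blocks arriving too fast push difficulty up; too slow pushes it down."""
    expected = RETARGET_INTERVAL * TARGET_BLOCK_TIME
    ratio = expected / actual_interval_seconds
    # Bitcoin clamps each adjustment to at most 4x in either direction.
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# If the last 2016 blocks took only one week instead of two, difficulty doubles:
print(retarget_difficulty(1.0, 7 * 24 * 3600))  # -> 2.0
```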
Blockchain Scalability Technique #3: Consensus Algorithms
A consensus algorithm is a mechanism that a decentralized WAN (wide area network) can use to reach an agreement. The idea is to ensure that the network can reach an agreement that benefits the entire group. We need a special kind of consensus algorithm for decentralized networks like Bitcoin, which deal with hundreds of billions of dollars. This consensus algorithm should work even if many nodes within the network are actively plotting against the protocol itself. A consensus algorithm that manages to work under such circumstances is known as Byzantine Fault Tolerant. Satoshi Nakamoto figured out a clever way to meet this challenge by creating a new class of consensus algorithms called “Nakamoto Consensus.”
What is Nakamoto consensus?
The idea is simple: participants in the consensus process must be economically incentivized to participate in the system. Everyone has something to lose if they aren’t acting in the best interest of the system. For example, in a proof-of-work (PoW) system, if a malicious miner isn’t mining blocks on the main chain, they will be doing a lot of wasted work and spending their resources for no reason. Similarly, in Ethereum’s proof-of-stake (PoS) Casper implementation, validators must lock up a stake in the network to mine blocks and participate in governance. The moment they act against the system’s interest, a portion of their stake gets slashed.
While basic Nakamoto consensus algorithms were a major breakthrough, they still need you to get a supermajority of the entire network. On the other hand, leader-based consensus algorithms like EOS’s delegated proof-of-stake (DPoS) and Binance Smart Chain’s proof-of-staked-authority (PoSA) select a delegate or a leader from the entire network, and only that entity is responsible for the consensus process. Leader-based consensus protocols are considerably faster since you no longer need to wait for a supermajority from the entire network; instead, you just need the leader to produce the blocks. However, this once again raises the decentralization vs. speed issue. While DPoS is faster than Casper, do you want to compromise decentralization for speed?
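As a toy illustration of the leader-based idea (not any specific chain’s actual algorithm), the sketch below picks one block producer per slot with probability proportional to stake; the validator names and stake amounts are entirely made up:

```python
import hashlib
import random

# Hypothetical validator set: name -> staked tokens (illustrative numbers only).
validators = {"alice": 400, "bob": 250, "carol": 350}

def pick_leader(slot: int, stakes: dict) -> str:
    """Pseudo-randomly pick one leader per slot, weighted by stake.
    Seeding with the slot number keeps every node's choice deterministic."""
    seed = int.from_bytes(hashlib.sha256(str(slot).encode()).digest(), "big")
    rng = random.Random(seed)
    names = list(stakes)
    weights = [stakes[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Only the chosen leader produces the block for its slot -- no network-wide
# supermajority is needed just to extend the chain.
for slot in range(5):
    print(slot, pick_leader(slot, validators))
```

Notice the trade-off baked into the numbers: whoever holds the most stake wins the most slots, which is exactly the centralization concern raised above.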
Blockchain Scalability Technique #4: Sharding
Sharding is a very well-known technique used in databases. Bulky databases are horizontally partitioned into smaller, more manageable chunks called “shards.”
Why sharding is useful in the blockchain context
By sharding, the blockchain will be able to parallelize its processes and increase overall speed and efficiency. In addition, breaking up the blockchain state into several shards essentially allows the network to divide and conquer.
How does sharding look in a blockchain ecosystem?
The entire state of the core blockchain (aka all the data and timestamps maintained within the protocol) is called the “global root.” The state gets broken down into shards, and each shard has its own substate.
[Diagram: the global root broken down into shards and their substates]
In the diagram above, the structure formed by the substates, shards, and the main chain is known as a “Merkle tree.” In a Merkle tree, the data at each level is derived cryptographically from the data on the level directly below it, so the global root is derived from the entire tree beneath it. For a network like Ethereum, which has ~6,500 nodes, this could be a complete gamechanger. As per the official Sharding FAQ on GitHub, sharding could potentially scale Ethereum up to 10,000+ transactions per second.
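To make that Merkle relationship concrete, here is a minimal sketch in which each shard’s substate is hashed and the global root is then derived from those shard hashes; the serialization and two-level structure are simplifications, not how any production client actually encodes state:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def substate_root(substate: dict) -> str:
    """Hash one shard's substate after deterministic serialization."""
    return sha256_hex(json.dumps(substate, sort_keys=True).encode())

def global_root(shard_substates: list) -> str:
    """Derive the global root from the shard substate roots.
    Changing any single account balance changes the global root."""
    shard_roots = [substate_root(s) for s in shard_substates]
    return sha256_hex("".join(shard_roots).encode())

shards = [{"0xabc": 10, "0xdef": 5}, {"0x123": 42}]
print(global_root(shards))
```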
Conclusion
To sum up, the four layer-1 scalability techniques that we have discussed today are:
Increasing block size.
Decreasing block time.
Choosing the appropriate consensus mechanism.
Sharding.
However, when you are integrating these techniques into your protocol, do note that you are essentially making a trade-off between speed and decentralization. In the next part, we will be looking at layer-2 scalability techniques.