I have a birthday coming up. Each year, my family tries to surprise me in some way, but it doesn’t always work. The more people coordinating the plan, the harder it becomes to maintain the surprise – especially if one or more of the schemers aren’t the most reliable at keeping it quiet.
Birthday surprises remind me of a thought experiment called the Byzantine Generals problem. It describes a situation in which multiple armies (presumably in the Byzantine Empire) are planning a coordinated surprise attack. Success depends on the generals of the armies being able to secretly coordinate the attack at a particular time. The problem is that the generals can’t trust messengers from coordinating armies since they could have been captured or corrupted by an opposing party. This is a lot like coordinating nodes in the peer-to-peer (P2P) networks I mentioned in a previous post where nodes (generals) communicate using unreliable channels (messengers).
This problem has been formally proven unsolvable in the general case (for example, when a third or more of the generals may be traitors), but several clever techniques allow decentralized architectures to reach agreement in practice using consensus algorithms. Consensus is fundamental to making decentralized systems work, and I’ll describe it further in this post.
Centralized vs Decentralized Architectures
To understand consensus, let’s first take a closer look at Centralized and Decentralized Architectures.
Traditional centralized applications for file sharing, data storage, communication and even virtual currencies have existed for a long time. Centralized architectures, however, place shared resources (files, databases or code) on a single server or a set of load-balanced clones, so the application is only as resilient as its central server(s).
Decentralized P2P architectures have proven useful to address resiliency problems associated with centralized applications. Although this reduces or eliminates reliance on centralized servers, it comes at a cost.
P2P nodes in decentralized applications such as BitTorrent, Gnutella, Skype and Bitcoin must assume responsibility for both managing and consuming resources among all peers.
These resources may include data, documents and code. A file sharing node in BitTorrent or Gnutella would share an index (data) of available files across the P2P network, serve files that it has and access those resources from other nodes.
Similarly, a blockchain node on Bitcoin is responsible for executing cryptographic code on transactions as well as maintaining a copy of the distributed ledger (data).
This brings us to the topic of consensus among participants in a P2P network.
Since application resources are distributed among peers in a decentralized system, it becomes important to put all the pieces together without losing anything. P2P nodes of a distributed system must be able to “agree” on the validity of their resources. This state of agreement is called consensus. It’s worth noting that the mechanics of how participating nodes reach agreement, and the consequences of failing to reach it, depend on the purpose of the application. For example, a microblogging service running a distributed database may be satisfied with the occasional dropped or eventually consistent post shared among nodes. A transaction between nodes on a cryptocurrency network will likely have more stringent consensus requirements.
Consensus is a core requirement in decentralized systems and is implemented using one of several consensus algorithms running on each node. These algorithms reach consensus when a majority of participating peers agree, so the failure or unreliability of some nodes can be compensated for by the others.
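As a toy illustration of the majority idea (not a real fault-tolerant protocol; the node names and values are made up), agreement among peers can be sketched like this:

```python
from collections import Counter

def majority_value(votes, total_nodes):
    """Return the value proposed by a strict majority of nodes, or None.

    `votes` maps node id -> proposed value; nodes missing from the map
    model failed or unreachable peers, which simply don't count toward
    the majority.
    """
    if not votes:
        return None
    value, count = Counter(votes.values()).most_common(1)[0]
    return value if count > total_nodes // 2 else None

# A 5-node network: one node is offline, one disagrees, three agree.
votes = {"n1": "tx-abc", "n2": "tx-abc", "n3": "tx-abc", "n4": "tx-xyz"}
print(majority_value(votes, total_nodes=5))  # tx-abc (3 of 5 agree)
```

Real consensus algorithms add much more on top of this (message exchange, retries, and defenses against lying nodes), but the core decision rule is the same: a value wins only when more than half the participants back it.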
Blockchain Consensus Algorithms
Bitcoin and other blockchain-based cryptocurrencies achieve consensus by requiring that nodes complete some computation that’s hard to do, but easy to verify. This is called Proof of Work and is part of the role of Bitcoin miners.
Proof of Work consensus has its roots in an earlier proposal called Hashcash that was intended to reduce email spam. Briefly, a sender would be required to find a cryptographic hash of the message headers with a particular property (a number of leading zero bits), which the receiving server could verify with a single hash computation before the email was delivered. Successful verification would be considered proof of the cryptographic computational work done in finding the hash. The work would be trivial for the average email user, but a real disincentive for the malicious bulk mailer.
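The “hard to do, easy to verify” asymmetry can be sketched in a few lines of Python. This is a Hashcash-style illustration, not the actual Hashcash header format; the message string and difficulty are made up:

```python
import hashlib
from itertools import count

def proof_of_work(message: str, difficulty_bits: int = 16) -> int:
    """Search for a nonce so that sha256(message:nonce) has at least
    `difficulty_bits` leading zero bits. Expected cost: ~2**difficulty_bits
    hash attempts."""
    target = 1 << (256 - difficulty_bits)  # hashes below this value qualify
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message: str, nonce: int, difficulty_bits: int = 16) -> bool:
    """Checking a claimed proof costs exactly one hash computation."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = proof_of_work("hello", difficulty_bits=16)  # tens of thousands of tries
assert verify("hello", nonce, difficulty_bits=16)   # a single hash to check
```

Raising `difficulty_bits` doubles the expected work per extra bit, which is essentially how Bitcoin tunes mining difficulty, while verification stays a single hash regardless.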
Cryptocurrency miners use a Hashcash-style proof of work to validate blocks of transactions so that they are accepted by participating nodes on the blockchain. Miners (nodes) are incentivized to validate transactions by the allocation of a “block reward” paid in the cryptocurrency being mined. But mining is designed to become more computationally intensive while the block reward is reduced over time. The Bitcoin block reward was halved from 25 BTC to 12.5 BTC on July 9th this year and will be cut in half again around 2020. This results in an arms race for a combination of faster computation (hash power) and cheaper electricity to run the equipment, which can be wasteful and environmentally unfriendly.
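The halving schedule itself is simple arithmetic: Bitcoin’s reward started at 50 BTC and halves every 210,000 blocks (roughly every four years). A sketch, simplified to ignore Bitcoin’s integer satoshi arithmetic and the reward’s eventual drop to zero:

```python
def block_reward(height: int) -> float:
    """Bitcoin block reward in BTC at a given block height:
    50 BTC, halved once per 210,000-block interval."""
    return 50.0 / (2 ** (height // 210_000))

print(block_reward(0))        # 50.0  (genesis era, 2009)
print(block_reward(210_000))  # 25.0  (first halving, 2012)
print(block_reward(420_000))  # 12.5  (second halving, July 2016)
```

Summing the schedule over all halvings is what caps the total supply near 21 million BTC.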
Some blockchains have responded by implementing non-proof-of-work consensus algorithms such as Proof of Stake or Practical Byzantine Fault Tolerance (PBFT). The Ethereum project is planning a move to Proof of Stake to address some of the problems with Proof of Work, including centralization trends and long(ish) block times. The Hyperledger project uses PBFT to reach consensus among blockchain nodes without mining.
In my next post, I’ll take a closer look at the architecture of Bitcoin and Ethereum blockchains and blockchain applications.
This post first appeared in Faisal’s blog, Musings About Tech.
See the related post: Decentralization – The Napster-Metallica connection.
Faisal Siddiqi — Distinguished Engineer
Faisal Siddiqi is an expert in mobile technologies, user experience (UX) design, performance engineering, Application Programming Interface (API) design and open source.
Much of his work focuses on the insurance industry. As architect for CSC Mobile Insurance, Faisal directs all design, training, learning and development activities and sets technology direction and mentoring for the team. He is establishing procedures to cross-train mobile with non-mobile team members, as mobile technologies can no longer be separated from mainstream product development.
See Faisal’s bio.