***
The Amsterdam Law & Technology Institute is inviting scholars to publish guest articles in the ALTI Forum.
Here is the latest contribution, authored by Chris Berg (RMIT University).
***
How we understand where something comes from shapes where we take it, and I’m now convinced we’re thinking about the origins of blockchain wrong.
The typical introduction to blockchain and crypto for beginners – particularly non-technical beginners – gives Bitcoin a sort of immaculate conception. Satoshi Nakamoto suddenly appears with a fully formed protocol and disappears almost as suddenly. More sophisticated introductions will observe that Bitcoin is an assemblage of already-existing technologies and mechanics – peer-to-peer networking, public-key cryptography, the principle of database immutability, the hashcash proof-of-work mechanism, some hand-wavy notion of game theory – put together in a novel way. Still more sophisticated introductions will walk readers through the excellent ‘Bitcoin’s academic pedigree’ paper by Arvind Narayanan and Jeremy Clark, which traces the scholarship that underpins those technologies.
This approach has many weaknesses. It makes it hard to explain proof-of-stake systems, for one. But what it really misses – what we fail to pass on to students and users of blockchain technology – is the sense of blockchain as a technology for social systems and economic coordination. Instead, it comes across much more like an example of clever engineering that gave us magic internet money. We cannot expect every new entrant or observer of the industry to be fully signed up to the vision of those who came before them. But it is our responsibility to explain that vision better.
Blockchains and crypto are the heirs of a long intellectual tradition of building fault-tolerant distributed systems using economic incentives. The problem this tradition seeks to solve is: how can we create reliable systems out of unreliable parts? In that simply stated form, this question serves not just as a mission statement for distributed systems engineering but for all of social science. In economics, for example, Peter Boettke and Peter Leeson have called for a ‘robust political economy’: the creation of a political-economic system robust to the problems of information and incentives. In blockchain we see computer engineering converge with the frontiers of political economy. The two fields are built on radically different assumptions but have come to the same answers.
So how can we tell an alternative origin story that takes beginners where they need to go? I see at least two historical strands, each of which takes us through key moments in the history of computing.
The first starts with the design of fault-tolerant systems shortly after the Second World War. Once electronic components and computers began to be deployed in environments with high reliability requirements (say, fly-by-wire aircraft or the Apollo program), researchers turned their minds to how to ensure that the failure of parts of a machine did not lead to critical failure of the whole. The answer was instinctively obvious: add backups (that is, multiple redundant components) and have what John von Neumann in 1956 called a ‘restoring organ’ combine their multiple outputs into a single output that can be used for decision-making.
But this creates a whole new problem: how should the restoring organ reconcile those components’ data if they start to diverge from each other? How will the restoring organ know which component failed? One solution was to have the restoring organ treat each component’s output as a ‘vote’ about the true state of the world. Here, already, we can see social science and computer science working in parallel: Duncan Black’s classic study of voting in democracies, The Theory of Committees and Elections, was published just two years after von Neumann’s presentation of the restoring organ tallying up the votes of its constituents.
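To make the idea concrete, here is a minimal sketch (in Python, with illustrative names of my own) of a restoring organ as a majority voter over redundant components:

```python
from collections import Counter

def restoring_organ(outputs):
    """Combine redundant components' outputs into one answer by majority
    vote, in the spirit of von Neumann's 'restoring organ'."""
    value, votes = Counter(outputs).most_common(1)[0]
    if votes * 2 > len(outputs):  # a strict majority masks a faulty minority
        return value
    raise RuntimeError("no majority: too many components disagree")

# Three redundant sensors, one faulty: the voter masks the fault.
print(restoring_organ([42, 42, 7]))  # -> 42
```

With three components this is the classic triple modular redundancy arrangement: any single faulty component is simply outvoted.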
The restoring organ was a single, central entity that collated the votes and produced an answer. But in the distributed systems that started to dominate fault-tolerance research through the 1970s and 1980s there could be no single restoring organ – the system would have to come to consensus as a whole. The famous 1982 paper ‘The Byzantine Generals Problem’ by Leslie Lamport, Robert Shostak and Marshall Pease (another of the half-taught and quarter-understood parts of the blockchain origins canon) addresses this research agenda by asking how many voting components are needed to reach consensus in the presence of faulty – even malicious – components. One of their insights was that cryptographically unforgeable signatures on the information being communicated (the ‘orders’) much simplify the problem.
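Their central bound is compact enough to state in code. A minimal sketch (the function name is mine): with unsigned ‘oral’ messages, agreement requires more than two-thirds of the components to be honest; unforgeable signed messages remove this bound entirely, which is why signatures so simplify the problem:

```python
def oral_agreement_possible(n: int, f: int) -> bool:
    """Lamport, Shostak and Pease (1982): with unsigned ('oral') messages,
    n components can reach agreement despite f Byzantine (arbitrarily
    faulty) components only if n >= 3f + 1."""
    return n >= 3 * f + 1

print(oral_agreement_possible(4, 1))  # True: four generals tolerate one traitor
print(oral_agreement_possible(3, 1))  # False: three generals cannot
```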
The generation of fault-tolerant distributed consensus algorithms built during the 1990s – most prominently Lamport’s Paxos and the later Raft, which tolerate crashed rather than malicious nodes – now underpins much of global internet and commerce infrastructure.
Satoshi’s innovation was to make the distributed agreement system permissionless – more precisely, joining the network as a message-passer or validator (miner) does not require the agreement of the other validators. To use the Byzantine generals’ metaphor: now anyone can become a general.
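A toy hashcash-style proof of work shows why: the only ‘credential’ needed to take part is computational effort. This is a simplified sketch, not Bitcoin’s actual block format:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 16) -> int:
    """Search for a nonce whose hash falls below a target. Anyone with a
    CPU can run this loop, so no permission is needed to participate."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof that work was done
        nonce += 1

print(mine(b"example block"))
```

The difficulty parameter is what makes attacks expensive: rewriting history means redoing all of the accumulated work.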
That permissionlessness gives it a resilience against attack that the Byzantine fault-tolerant systems of the 1990s and 2000s were never built for. Google’s distributed system is resilient against a natural disaster, but not against a state attack that targets the permissioning system that Google, as a corporate entity, oversees. Modern proof-of-stake systems such as Tendermint and Ethereum’s Casper are an evolutionary step that connects Bitcoin’s permissionlessness with decades of knowledge of fault-tolerant distributed systems.
This is only a partial story. We still need the second strand: the introduction of economics and markets into computer science and engineering.
Return to the history of computing’s earliest days: the institutions that hosted the large, expensive machines of the 1950s and 1960s needed to manage demand for those machines. Many institutions used sign-up sheets; some even had dedicated human dispatchers to coordinate and manage a queue. Timesharing systems tried to spread the load so that multiple users could work on a machine at the same time.
It was not long before some researchers realised that sharing time on a machine was fundamentally a resource allocation problem that could be tackled with relative prices. By the late 1960s Harvard University was using a daily auction to reserve space on its PDP-1 machine, denominated in a local funny money that was issued and reissued each day.
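A daily auction of this kind is easy to sketch. The code below is illustrative only – the names and mechanism details are my own, not a reconstruction of Harvard’s actual system:

```python
def daily_auction(bids: dict, slots: int) -> list:
    """Allocate scarce machine-time slots to the highest bidders.
    Bids are denominated in a 'funny money' reissued each day."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    return [user for user, _bid in ranked[:slots]]

bids = {"alice": 40, "bob": 25, "carol": 60}  # each user's daily allowance is 100
print(daily_auction(bids, slots=2))  # -> ['carol', 'alice']
```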
As the industry shifted from a many-users, one-computer structure to a many-users, many-distributed-computers structure, the computer science literature started to investigate the allocation of resources between machines. Researchers reached for the appropriate metaphor: were distributed systems like organisations? Or like separate entities tied together by contracts? Or like markets?
In the 1988 Agoric Open Systems papers, Mark S. Miller and K. Eric Drexler argued not simply for the use of prices in computational resource allocation but for reimagining distributed systems as a full-blown Hayekian catallaxy, where computational objects have ‘property rights’ and compensate each other for access to resources. (Full disclosure: I am an advisor to Agoric, Miller’s current project.) As they noted, one missing but necessary piece for the realisation of this vision was exchange infrastructure that could provide an accounting and currency layer without the need for a third party such as a bank. This, obviously, is what Bitcoin (and indeed its immediate predecessors) sought to provide.
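The agoric picture can be caricatured in a few lines of code. This sketch is entirely hypothetical – the actual papers describe far richer market mechanisms – but it shows a computational object charging callers for access to a resource it ‘owns’:

```python
class MeteredResource:
    """A computational object with a 'property right' over a resource:
    callers must compensate it for access."""
    def __init__(self, price: int):
        self.price = price
        self.earnings = 0

    def use(self, caller_balance: int) -> int:
        if caller_balance < self.price:
            raise ValueError("insufficient funds")
        self.earnings += self.price
        return caller_balance - self.price  # caller's new balance

storage = MeteredResource(price=3)
balance = storage.use(caller_balance=10)  # caller pays 3, keeps 7
```

What Miller and Drexler lacked, and what Bitcoin supplied, is a trustworthy way for the balances in that sketch to be real money rather than a variable in one machine’s memory.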
We sometimes call Bitcoin the first successful fully-native, fully-digital money, but skip over why that is important. Cryptocurrencies don’t just allow for censorship-free exchange. They radically expand the number of exchanges that can occur – not just between people but between machines. Every object in a distributed system, all the way up and down the technology stack, has an economic role and can form distinctly economic relationships. We see this vision in its maturity in the complex economics of resource allocation within blockchain networks.
Any origin story is necessarily simplified, and the origin story I have proposed here skips over many key sources of the technology that is now blockchain: cryptography, the history and pre-history of smart contracts, and of course the cypherpunk community from which Bitcoin itself emerged. But I believe this narrative places us on a much sounder footing to talk about the long-term social and economic relevance of blockchain.
As Sinclair Davidson, Jason Potts and I have argued elsewhere, blockchains are an institutional technology. They allow us to coordinate economic activity in radically different ways, taking advantage of the global-first, trust-minimised nature of this distributed system to create new types of contracts, exchanges, organisations, and communities. The scale of this vision is clearest when we compare it with what came before.
Consider, for instance, the use of prices for allocating computer time. The early uses of prices were either to recoup the cost of operating the machines or to serve as an alternative to queuing, allowing users to signal the highest-value use of scarce resources. But prices in real-world markets do a lot more than that. By concentrating dispersed information about preferences, they inspire creation – they incentivise people to bring more resources to market, and to invent new services and methods of production that might earn super-normal returns. Prices helped ration access to Harvard’s PDP-1, but they could not inspire the PDP-1 to grow itself more capacity.
The Austrian economist Ludwig von Mises wrote that “the capitalist system is not a managerial system; it is an entrepreneurial system”. The market that is blockchain is not simply a mechanism for allocating resources efficiently across a distributed system; it has propelled an explosion of entrepreneurial energy that is speculative and chaotic but above all innovative. The blockchain economy grows and contracts, shaping and reshaping itself just like a real economy. It is not simply a fixed network of nodes and connections. It is a market: it evolves.
We’ve of course seen evolving networks in computation before. The internet itself is a network – a web that is constantly changing. And you could argue that the ecosystem of open-source software, which allows developers to layer and combine small, shared software components into complex systems, looks a lot like an evolutionary system. But neither of these directly uses the price system for coordination, and they are poorer for it. The economics of internet growth have encouraged the development of a small number of concentrated firms, while the economic needs of open-source are chronically under-supplied. To realise the potential of distributed computational networks we need the tools of an economy: property rights and a native means of exchange.
Networks can fail for many reasons: nodes might crash, fail to send or receive messages correctly, respond more slowly than the network can tolerate, or report incorrect information to the rest of the network. Human social systems can fail when information is not available where and when it is needed, or when incentive structures favour anti-social rather than pro-social behaviours.
As a 1971 survey of fault-tolerant computing noted, “The discipline of fault-tolerant computing would be unnecessary if computer hardware and programs would always behave in perfect agreement with the designer’s or programmer’s intentions.” Blockchains make the shared mission of economics and computer science stark: how to build reliable systems out of unreliable parts.
Chris Berg
***
Citation: Chris Berg, Reliable Systems Out of Unreliable Parts, ALTI Forum, July 25, 2022
Invited by Thibault Schrepel