According to Dasgupta (2014), the development of software started in 1949 at a conference in the UK, when David Wheeler presented rules for the way in which hardware, in the form of a computer, can be programmed to execute certain tasks. The rules that Wheeler formulated for such computer programming were later referred to as software, Dasgupta explains. In Dasgupta’s view, the development of software led to a new symbiosis between “the physical computer built of electronic and electromechanical components and the liminal, quasi-autonomous ‘two-faced’ artifact called computer program (or later software) that would serve as the interface between the human user and the physical computer”.
In the 1970s, US computer scientist Leslie Lamport studied how distributed entities interconnected through software can collaborate without error through reliable mutual communication. This led Lamport to the following argument: “A distributed system consists of a collection of distinct processes which are spatially separated and which communicate with one another by exchanging messages. A network of interconnected computers, such as the ARPA system, is a distributed system” (1978, p. 558). A reliable communication process between distributed entities, which is necessary to reach consensus, is defined by Lamport as a set of events with a predefined structure, or as he phrases it: “We assume that sending a message is an event in a process” (1978, p. 559).
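Lamport’s idea that sending and receiving messages are events in a process is the basis of his logical clocks. A minimal sketch in Python (an assumption of this blog, not code from the article; the `Process` class and process names are illustrative) could look like this:

```python
# Minimal sketch of Lamport's logical clocks: sending and receiving
# a message are events in a process, and every event ticks the clock.

class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0  # logical clock, incremented on every event

    def send(self):
        # Sending a message is an event: tick, then attach the timestamp.
        self.clock += 1
        return self.clock

    def receive(self, timestamp):
        # On receipt, advance the clock past the sender's timestamp,
        # so the receive event is ordered after the send event.
        self.clock = max(self.clock, timestamp) + 1
        return self.clock

p, q = Process("P"), Process("Q")
t = p.send()    # event in P, timestamp 1
q.receive(t)    # event in Q, clock becomes max(0, 1) + 1 = 2
```

The `max(...) + 1` rule is what guarantees that a message is always received “after” it was sent in the resulting logical ordering, even though the processes share no physical clock.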
In 1998, Lamport published an article in which he presents a principle for reaching consensus between fault-tolerant, distributed systems. In this same article, Lamport notes that one of the most common problems distributed systems face is that they can never be sure which systems are available or still adequate for participation in the communication process required to reach mutual consensus. To solve this problem, Lamport proposes a system where “each entity maintained a ledger in which he recorded the numbered sequence of decrees that were passed” (1998, p. 2).
Each system is assigned a ledger in which the entity itself is supposed to record decisions using indelible ink, so as to ensure that these decisions cannot be changed or erased later. The entities always carry this ledger with them, allowing them to consult previously made decisions at any time. To take part in voting, the entities need to be physically present in the voting process and use messages that are sent and received between the entities.
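The ledger Lamport describes can be pictured as an append-only record of numbered decrees. A minimal sketch in Python (the class and the sample decree are illustrative assumptions, not taken from Lamport’s paper):

```python
# Minimal sketch of Lamport's ledger: an append-only record of
# numbered decrees. The "indelible ink" is modelled by refusing
# to overwrite an entry once it has been recorded.

class Ledger:
    def __init__(self):
        self._decrees = {}  # decree number -> recorded decision

    def record(self, number, decision):
        # Indelible ink: an existing decree can never be changed or erased.
        if number in self._decrees:
            raise ValueError(f"decree {number} already recorded")
        self._decrees[number] = decision

    def lookup(self, number):
        # Entities can always consult previously made decisions.
        return self._decrees.get(number)

ledger = Ledger()
ledger.record(1, "olive tax: 3 drachmas")
ledger.lookup(1)  # -> "olive tax: 3 drachmas"
```

The refusal to overwrite is the essential property: once a decree has a number and a value, every later consultation of the ledger returns the same decision.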
In 2008, a person going by the name Nakamoto published a paper online, which opened with the following observation: “Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments” (2008). Nakamoto assumed that the buying and selling of goods and services was, at the time, increasingly performed through transactions realised by communication between networked computers - such as those on the Internet - and their software. This led him to claim the following: “What is needed is an electronic payment system based on cryptographic proof instead of trust.” The electronic payment system called Bitcoin is, Nakamoto explains, intended to enable “any two willing parties to transact directly with each other without the need for a trusted third party”.
This payment system furthermore assumes that the random participants in the network, or nodes, can freely leave and rejoin the network, “accepting the proof-of-work chain as proof of what happened while they were gone”. For the validation of transactions, Nakamoto uses a consensus algorithm: “Any needed rules and incentives can be enforced with this consensus mechanism.” In 2018, a research proposal focused on a Blockchain of Things, which I have referenced in a previous article (2018), was submitted to the ITU. In this proposal, consensus is defined as “a broader term overarching the entire flow for a blockchain of things transaction, in which the entities involved in a BoT generate agreements and to confirm the correctness of the BoT transaction”.
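The proof-of-work idea behind that quote can be sketched in a few lines of Python (a simplified illustration, not Nakamoto’s implementation; the block contents and the difficulty value are assumptions for the example):

```python
import hashlib

# Minimal sketch of Nakamoto-style proof-of-work: search for a nonce
# such that the SHA-256 hash of the block data plus the nonce starts
# with a given number of zero hex digits. Finding the nonce is costly;
# verifying it takes a single hash.

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # proof found
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    # Any returning node can cheaply re-check the work that was done.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work("tx: A pays B 1 BTC")
verify("tx: A pays B 1 BTC", nonce)  # -> True
```

The asymmetry between producing and verifying the proof is what lets a node that rejoins the network accept the longest proof-of-work chain as evidence of what happened while it was gone.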
In 2018, two Rotterdam University of Applied Sciences students joined IT firm Centric in Gouda to work on their final-year research on blockchain technology. Florian van Herk (Computer Science) studied the realisation of the Paxos consensus using software nodes, while Dirk-Pieter Jens (Business Informatics & Management) did research on the available data that can potentially be used by software nodes in the blockchain pilot that he ran.
Florian started from a private or permissioned network that includes only identified nodes. He built the Paxos consensus algorithm in .NET to show how consensus procedures between software nodes work, and what this collaboration can mean. Initially, he split his research brief into two parts, beginning by building a network layer within which software nodes can function well, on which he states the following: “This network layer would allow nodes to communicate with each other in a peer-to-peer manner, which means that every node communicates which each other” (2018, p. 18). To be able to build these software nodes, he first analysed Lamport’s article and related articles in great depth. Based on the knowledge he thus acquired, he moved on as follows: “An implementation of the Paxos algorithm was written in .NET core. Besides, multiple tests have been written to understand the qualities and shortcomings of the Paxos algorithm” (2018, p. 3).
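To give a feel for what such an implementation involves, the core of the single-decree Synod protocol can be sketched as follows. This is a simplified Python illustration (not Florian’s .NET code; class names, ballot numbers and the sample decree are assumptions), with all messages modelled as direct method calls rather than network traffic:

```python
# Minimal single-decree Synod (Paxos) sketch: a proposer gathers
# promises from a majority of acceptors (phase 1), then asks them
# to accept a value (phase 2).

class Acceptor:
    def __init__(self):
        self.promised = -1    # highest ballot promised so far
        self.accepted = None  # (ballot, value) already accepted, or None

    def prepare(self, ballot):
        # Phase 1b: promise to ignore lower-numbered proposals.
        if ballot > self.promised:
            self.promised = ballot
            return ("promise", self.accepted)
        return ("nack", None)

    def accept(self, ballot, value):
        # Phase 2b: accept unless a higher ballot was promised meanwhile.
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return "accepted"
        return "nack"

def propose(acceptors, ballot, value):
    # Phase 1a: ask all acceptors for promises.
    replies = [a.prepare(ballot) for a in acceptors]
    promises = [prev for tag, prev in replies if tag == "promise"]
    if len(promises) <= len(acceptors) // 2:
        return None  # no majority of promises, proposal fails
    # Safety rule: if any acceptor already accepted a value, the
    # proposer must adopt the one with the highest ballot number.
    prior = [p for p in promises if p is not None]
    if prior:
        value = max(prior)[1]
    # Phase 2a: ask the acceptors to accept the (possibly adopted) value.
    votes = [a.accept(ballot, value) for a in acceptors]
    if votes.count("accepted") > len(acceptors) // 2:
        return value  # value is chosen by a majority
    return None

acceptors = [Acceptor() for _ in range(3)]
chosen = propose(acceptors, ballot=1, value="decree 1")  # -> "decree 1"
```

The majority requirement in both phases is what makes the protocol fault-tolerant: any two majorities overlap, so a chosen value can never be silently replaced by a later proposal.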
The software nodes he developed turned out to collaborate in the network more effectively than he had expected. Through a continuous voting process, they are autonomously able to reach consensus on the information transactions they have to perform jointly in an asynchronous communication environment, without any kind of third-party intervention. The software nodes record the data used in the procedure in a distributed manner, in a ledger that thereby becomes a distributed ledger. Based on his findings, Florian concludes as follows: “The protocol works well on the built network layer, that is, no abnormalities have been observed, and since .NET has great async support, writing asynchronous functions for an asynchronous environment poses no problem. Various test cases have been written based on the Synod Protocol, to showcase its functionality, and showcase its fault tolerant properties” (2018, p. 51).
The software nodes that Florian developed enabled Dirk-Pieter Jens to study whether it would be possible to create a reliable blockchain-based communication system between software nodes for the logistics industry, for instance for warehouse management systems, transport management systems or on-board computers for lorries. Strikingly, the analyses performed show that more standard data suitable for use in a consensus network is available in the logistics industry than expected. Combining the development of the software nodes with the analysis of the data available for use in the network proved an effective multidisciplinary collaboration, one that boosts research on the use of consensus mechanisms between existing software nodes.
Following the graduation of both students, the software is now being reused in PhD research focused on the reliable exchange and sharing of data and information between distributed software entities based on the Building Information Modelling process. Further research is being conducted on future possibilities for the exchange and sharing of data and information between distributed software entities within government networks. And finally, research is also ongoing on the application and use of blockchain technology within the realm of cybersecurity.
In essence, blockchain technology is software. Using this software, software nodes that operate in a distributed manner and are combined with hardware can be networked and achieve reliable intercommunication. This intercommunication can subsequently be used by software nodes as a resource that enables joint decision making without third-party, i.e. human, intervention. The idea that software nodes functioning in a distributed manner no longer need humans to make decisions with a potential impact on humans calls for reflection not only on the software itself, but also on the ethical aspects attached to these kinds of decision-making processes.
1. Lamport, L. (1978) Time, Clocks, and the Ordering of Events in a Distributed System. Communications of the ACM, Volume 21, Number 7 (July 1978), pp. 558-565
2. Lamport, L. (1998) The Part-Time Parliament. ACM Transactions on Computer Systems, Volume 16, Number 2 (May 1998), pp. 133-169
3. Nakamoto, S. (2008) Bitcoin: A Peer-to-Peer Electronic Cash System. www.bitcoin.org
4. Lier, B. van (2018) Blockchain-of-Things. https://www.centric.eu/NL/Default/Themas/Blogs/2018/05/16/Blockchain-of-Things
5. Herk, F. van (2018) Paxos Blockchain. A Private Blockchain Simulation Based on the Paxos Consensus Algorithm. Rotterdam University of Applied Sciences, June 2018
6. Jens, D. P. (2018) CO2 footprint op de blockchain. Een verkenning van de mogelijkheden voor een real time berekening van CO2 uitstoot met gebruikmaking van blockchain-technologie. [Blockchain CO2 footprint. An exploration of the possibilities for a real-time calculation of CO2 emissions using blockchain technology]. Rotterdam University of Applied Sciences, June 2018
Ben van Lier works at Centric as Director Strategy & Innovation and, in that function, is involved in research and analysis of developments in the areas of overlap between organisation and technology within the various market segments.
This blog appeared earlier on www.centric.eu.