Is anyone bothered about performance evaluation in blockchain systems?

I am, and I hope I am not the only one.


Yesterday, in one of the technical meetings of Alastria’s core technical team, we discussed the minimum performance requirements we should enforce for our infrastructure. Preliminarily, we decided that our main-net should easily support 1000 TPS (transactions per second) with no quality degradation. Is this ambitious? We don’t really know, as we haven’t measured a baseline yet. In fact, we don’t even have a clear idea of how to evaluate the performance of a blockchain network.

After the meeting, I started researching the topic, and I found a great introductory article from InfoSys explaining the three (for me, four) dimensions that could (and, IMHO, should) be tested in a blockchain system:

  • Smart Contract Testing: This seems obvious. Your smart contracts should be functionally tested to check that they do what you expect them to do (let’s avoid another DAO attack). Alastria itself has no specific use case, so, apart from the smart contracts related to Identity (or any other auxiliary smart contracts for the infrastructure), the core technical team is not responsible for these tests (thank goodness, we have enough work already :) ).
  • Peer/Node Testing: This testing dimension is less obvious and pretty challenging. We need to ensure that the consensus algorithm works correctly under normal conditions, under high throughput or congestion scenarios, and in the presence of misbehaving and malicious nodes. In short, under any scenario, the ledger should keep its consistency and stay synchronised across every node of the network.
  • Security Testing: Pretty clear, right? The consensus algorithm protects us from Byzantine faults but, are we completely sure we are not leaving any kind of backdoor in the P2P protocol, in the node configuration, or through bugs in smart contracts?
  • Performance Testing: Given the expected size of the network, the size of the transactions, the consensus protocol used, the required latencies, and the average number of transactions performed in the system, we need to check that the system keeps working with no degradation in the service.
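A minimal sketch of what a performance test in this last dimension could look like: fire a batch of transactions concurrently and measure the achieved throughput. Everything here is an assumption for illustration; in a real test, `send_transaction` would be an actual client call against a node (e.g. over JSON-RPC), not the simulated delay used below.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def send_transaction(payload):
    """Stand-in for a real client call against a node (hypothetical).

    Here it just simulates a small, fixed round-trip delay and
    reports success.
    """
    time.sleep(0.001)  # assumed 1 ms round trip, purely illustrative
    return True


def measure_tps(num_txs=500, workers=50):
    """Send num_txs transactions concurrently and return achieved TPS."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(send_transaction, range(num_txs)))
    elapsed = time.time() - start
    confirmed = sum(1 for ok in results if ok)
    return confirmed / elapsed


if __name__ == "__main__":
    print(f"Achieved throughput: {measure_tps():.0f} TPS")
```

The interesting part of a real harness is not this loop but the variables around it: batch size, transaction size, number of concurrent clients, and how long the network sustains the load without degrading.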

This last dimension is where the 1000 TPS requirement falls. I kept researching the topic: how could we test whether this performance requirement is fulfilled in Alastria? As we are currently using Quorum with IBFT as the consensus algorithm, I started looking for performance evaluations and testing tools for these two. And I found this awesome resource. There we can see some of the specifications of the functional tests that IBFT’s authors performed before releasing it in Quorum. These tests fall in the category of “Peer/Node Testing”, but they give us a good overview of how to approach tests over the system. Unfortunately, we don’t see any performance evaluation in these specifications…

Surfing through the docs, I found this page. At the end of it, we find the specification of a performance test over IBFT and the results obtained in the benchmark. The results show a maximum transaction throughput of 835 TPS in IBFT. This test, however, was performed on the consensus algorithm in isolation, not on its embedded version in Quorum. As we may see here, no performance tests of IBFT + Quorum have been performed (or published) yet.
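To compare a live network against a benchmark figure like 835 TPS, one simple approach is to derive throughput from block timestamps and transaction counts, which any node already exposes. A minimal sketch (the sample data below is made up, not a real measurement):

```python
def chain_throughput(blocks):
    """Average TPS over a window of observed blocks.

    blocks: list of (unix_timestamp, tx_count) tuples, ordered by height.
    """
    if len(blocks) < 2:
        raise ValueError("need at least two blocks to define a window")
    window = blocks[-1][0] - blocks[0][0]  # seconds elapsed
    # Transactions in the first block were confirmed before the window
    # opens, so count only the blocks after it.
    txs = sum(count for _, count in blocks[1:])
    return txs / window


# Hypothetical data: 5 blocks, one per second, 900 transactions each
sample = [(1000 + i, 900) for i in range(5)]
print(chain_throughput(sample))  # 900.0 TPS on this toy data
```

Measured this way against the full Quorum + IBFT stack (under sustained load), the number would be directly comparable with both our 1000 TPS target and the isolated IBFT benchmark.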

However, these resources give us a basis for approaching the performance tests we need in Alastria. Even more, that value of 835 TPS in IBFT’s performance benchmark is good news, as it means our preliminary performance requirement of 1000 TPS is not that far off. Our next step is to define an environmental setup to approach these tests in Alastria. I will keep researching this matter, and hopefully by next week I will have an environmental testing proposal. In the meantime, feel free to share your ideas, discuss them with me, or help us in any way.

Research at Protocol Labs | Avid reader seeking constant innovation.