Benchmarks

As part of launching the Doppio testnet, we are publishing benchmarks for the performance of this release of the Espresso Sequencer. The benchmarks show that a network of 1000 nodes attains throughput comparable to that of a 10-node network.

The benchmarks for the Doppio testnet are listed below.

Network Size    Committee Size    Average View Time (s)    Throughput (MB/s)
1000            10                1.10                     29.41
500             10                1.18                     28.74
100             10                1.10                     28.52
10              10                0.97                     25.25

Experimental Setup

Node Information

  • Nodes are run as ECS Fargate tasks in 3 availability zones in the AWS US East 2 (Ohio) region

    • AWS does not publish network bandwidth statistics for Fargate. According to a blog post that attempted to measure Fargate bandwidth empirically, bandwidth can vary during short-term runs but is stable over the long term (as expected for a cloud service provider).

    • All benchmarks are run on ECS tasks provisioned with 2 vCPUs and 16 GB of memory.

    • ECS tasks are connected to the CDN-like nodes using an AWS VPC (virtual private cloud). They are abstracted such that we see them as being in the same network; in reality they are hosted on AWS VMs that we have little insight into.

  • Each node runs the HotShot examples inside a Docker container

    • The example application pre-generates transactions before the run starts. The transactions are randomly generated, dummy transactions about 100 bytes in size. The examples pre-generate transactions so the RNG doesn’t affect performance. Each transaction is then padded with zeros so it is the desired size.

    • We preselect 10 dedicated nodes to submit transactions throughout the benchmark.
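The pre-generation and padding step described above can be sketched as follows. This is a minimal illustration, not code from the HotShot examples; the function name and sizes are assumptions.

```python
import os

def pregenerate_transactions(count, size_bytes, payload_bytes=100):
    """Pre-generate dummy transactions before the run starts, so that
    RNG cost does not affect measured performance. Each transaction is
    ~payload_bytes of random data, zero-padded up to size_bytes."""
    txns = []
    for _ in range(count):
        payload = os.urandom(payload_bytes)                      # random dummy payload
        txn = payload + b"\x00" * (size_bytes - payload_bytes)   # pad with zeros
        txns.append(txn)
    return txns

# e.g. four 1 MB transactions, generated up front
batch = pregenerate_transactions(count=4, size_bytes=1_000_000)
```

Generating the batch up front keeps the randomness off the critical path, so the measured run time reflects consensus and delivery rather than transaction construction.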

CDN Information

  • Two CDN-like servers were run to simulate optimistic conditions: one to handle consensus messages and the other to handle data availability messages (including raw transactions)

  • Each server was an m6a.xlarge EC2 instance with 4 vCPUs, 16 GB memory, and up to 12.5 Gigabit bandwidth

  • Each server ran 1 instance of the HotShot web server and 1 instance of Nginx

  • The consensus web server also ran the HotShot orchestrator, which orchestrates the start of a run with certain parameters

  • Nodes poll the CDNs and orchestrator at regular intervals using HTTP/1.1. This interval is configurable, but is always set to 100ms unless otherwise noted
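The polling behavior can be sketched roughly as below. `poll_until` and the stubbed `fetch` are hypothetical stand-ins for the HTTP/1.1 requests the nodes actually make; only the fixed 100 ms interval comes from the setup above.

```python
import time

POLL_INTERVAL_S = 0.1  # the default 100 ms polling interval

def poll_until(fetch, deadline_s=5.0):
    """Repeatedly poll an endpoint at a fixed interval until it returns
    a message or the deadline passes. `fetch` stands in for an HTTP GET
    against the CDN or orchestrator."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        msg = fetch()
        if msg is not None:
            return msg
        time.sleep(POLL_INTERVAL_S)
    return None

# stub fetch that returns a message on the third poll
responses = iter([None, None, "proposal"])
result = poll_until(lambda: next(responses))
```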

Benchmark Run Info

  • Benchmarks were run for 102 views (in order to reach 100 decide events)

  • At the end of each run, each node outputs its statistics: total run time, the number of transactions committed, and the number of views completed

  • The reported benchmarks are averaged across 3 runs of the same benchmark

Data Calculation

  • Total run time = average total run time from all nodes

  • Average view time = total run time / number of views completed

  • Total transactions submitted = total transactions submitted to the CDN-like node

  • Total transactions committed = total transactions committed by HotShot during the run

  • Transactions / sec = total transactions committed / total run time

    • The transactions in our benchmarks are configured to be quite large (on the order of 1 MB). We configure transactions to be large to more closely simulate the PBS scenario and to avoid extra tuning of parameters that could affect performance (e.g. the batch size of transactions that the CDN node returns per request)

  • Throughput = size per transaction * total transactions committed / total run time

    • Throughput uses 1000 as the divisor for KB, MB, etc. instead of 1024
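Putting the formulas above together, a minimal sketch of the metric computation. The sample numbers are illustrative only, chosen to be consistent with the 1000-node row of the table rather than taken from an actual run.

```python
def compute_metrics(total_run_time_s, views_completed,
                    txns_committed, txn_size_bytes):
    """Derive the reported metrics from per-run statistics.
    Throughput uses 1000 (not 1024) as the divisor for KB, MB, etc."""
    avg_view_time_s = total_run_time_s / views_completed
    txns_per_sec = txns_committed / total_run_time_s
    throughput_mb_s = txn_size_bytes * txns_committed / total_run_time_s / 1e6
    return avg_view_time_s, txns_per_sec, throughput_mb_s

# illustrative: 102 views over 112.2 s, 3300 committed 1 MB transactions
avg, tps, mbps = compute_metrics(112.2, 102, 3300, 1_000_000)
```

With 1 MB transactions, transactions per second and throughput in MB/s coincide numerically, which is why only throughput appears in the results table.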
