
Benchmarks

Performance metrics for HotShot consensus in Espresso's Doppio testnet release


Last updated 10 months ago

As part of launching the Doppio testnet, we are publishing benchmarks for the performance of this release of the Espresso Sequencer. The benchmarks show that a 1000-node network attains high throughput, comparable to that of a 10-node network.

The benchmarks for the Doppio testnet are listed below.

| Network Size | Committee Size | Average View Time (s) | Throughput (MB/s) |
| ------------ | -------------- | --------------------- | ----------------- |
| 1000         | 10             | 1.10                  | 29.41             |
| 500          | 10             | 1.18                  | 28.74             |
| 100          | 10             | 1.10                  | 28.52             |
| 10           | 10             | 0.97                  | 25.25             |

Experimental Setup

Node Information

  • Nodes are run as ECS Fargate tasks in 3 availability zones in the AWS US East 2 (Ohio) region

    • AWS does not publish network bandwidth statistics for Fargate. According to a blog post that attempted to measure Fargate bandwidth, bandwidth can vary during short-term runs but is stable over the long term (as expected for a cloud service provider).

    • All benchmarks are run on ECS tasks with 2 vCPUs and 16 GB of memory.

    • ECS tasks are connected to the CDN-like nodes through an AWS VPC (virtual private cloud). The VPC makes the tasks appear to be on the same network; in reality they are hosted on AWS VMs into which we have little insight.

  • Each node runs the HotShot examples inside a Docker container

    • The example application pre-generates transactions before the run starts, so RNG cost does not affect performance during the benchmark. The transactions are randomly generated dummy transactions about 100 bytes in size; each is then padded with zeros to the desired size.

    • We preselect 10 dedicated nodes to submit transactions throughout the benchmark.
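The pre-generation step described above can be sketched as follows. This is an illustrative helper, not the actual example application; the function name and parameters are assumptions.

```python
import os

def pregenerate_txs(count: int, target_size: int, payload_size: int = 100) -> list[bytes]:
    """Generate `count` random ~100-byte payloads before the run starts,
    then zero-pad each to `target_size` so RNG cost never overlaps with
    the benchmark itself."""
    txs = []
    for _ in range(count):
        payload = os.urandom(payload_size)               # random dummy transaction
        txs.append(payload.ljust(target_size, b"\x00"))  # pad with zeros to target size
    return txs

txs = pregenerate_txs(count=4, target_size=1_000_000)
print(len(txs), len(txs[0]))  # 4 1000000
```

The submitting nodes would then drain this pre-built list during the run instead of generating transactions on the fly.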

CDN Information

  • Two CDN-like servers were run to simulate optimistic conditions: one to handle consensus messages and the other to handle data availability messages (including raw transactions)

  • Nodes poll the CDNs and orchestrator at regular intervals using HTTP/1.1. This interval is configurable, but is always set to 100ms unless otherwise noted
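The fixed-interval polling described above can be sketched as a simple loop. The function and its arguments are illustrative; the real nodes issue HTTP/1.1 requests against the CDN and orchestrator endpoints, which the stub `fetch` callable stands in for here.

```python
import time

def poll(fetch, interval_s: float = 0.1, max_polls: int = 5) -> list:
    """Call `fetch()` once per `interval_s` seconds, mirroring the nodes'
    100 ms polling of the CDN-like servers and orchestrator."""
    results = []
    next_deadline = time.monotonic()
    for _ in range(max_polls):
        results.append(fetch())
        next_deadline += interval_s
        # Sleep until the next deadline so drift does not accumulate.
        time.sleep(max(0.0, next_deadline - time.monotonic()))
    return results

# Stub fetch standing in for an HTTP GET against the CDN:
counter = iter(range(100))
print(poll(lambda: next(counter), interval_s=0.01))  # [0, 1, 2, 3, 4]
```

Scheduling against an absolute deadline (rather than sleeping a flat 100 ms after each request) keeps the polling interval stable even when individual requests take varying time.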

Benchmark Run Info

  • Benchmarks were run for 102 views (in order to reach 100 decide events)

  • At the end of each run, each node outputs its statistics: total run time, transactions committed, and views completed

  • The reported benchmarks are averaged across 3 runs of the same benchmark

Data Calculation

  • Total run time = average total run time from all nodes

  • Average view time = total run time / number of views completed

  • Total transactions submitted = total transactions submitted to the CDN-like node

  • Total transactions committed = total transactions committed by HotShot during the run

  • Transactions / sec = total transactions committed / total run time

    • The transactions in our benchmarks are configured to be quite large (on the order of 1 MB). We configure transactions to be large to more closely simulate the PBS scenario and to avoid extra tuning of parameters that could affect performance (e.g. the batch size of transactions that the CDN node returns per request)

  • Throughput = size per transaction * total transactions committed / total run time

    • Throughput uses 1000 as the divisor for KB, MB, etc. instead of 1024
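The formulas above can be written out directly. This is a sketch with illustrative names and made-up inputs, not code from the benchmark harness; the numbers below are shaped like the Doppio runs (102 views, ~1 MB transactions) but are not the measured values.

```python
def average_view_time(total_run_time_s: float, views_completed: int) -> float:
    """Seconds per view: total run time divided by views completed."""
    return total_run_time_s / views_completed

def transactions_per_sec(total_committed: int, total_run_time_s: float) -> float:
    return total_committed / total_run_time_s

def throughput_mb_s(tx_size_bytes: int, total_committed: int, total_run_time_s: float) -> float:
    # Decimal units: 1 MB = 1,000,000 bytes (divisor of 1000, not 1024).
    return tx_size_bytes * total_committed / total_run_time_s / 1_000_000

# Example: 102 views over a 112.2 s run gives ~1.10 s per view.
print(round(average_view_time(112.2, 102), 2))  # 1.1
```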

  • Each CDN-like server was an m6a.xlarge EC2 instance with 4 vCPUs, 16 GB of memory, and up to 12.5 Gbps of network bandwidth

  • Each server ran 1 instance of the HotShot web server and 1 instance of Nginx

  • The consensus web server also ran the HotShot orchestrator, which orchestrates the start of a run with certain parameters