Quickstart with Arbitrum Nitro Rollups
TL;DR
The Espresso Network is a confirmation layer that provides chains with information about the state of their own chain and the states of other chains, which is important for cross-chain composability. Espresso confirmations can be used in addition to the soft confirmations from a centralized sequencer, are backed by the security of the Espresso Network, and are faster than waiting for Ethereum finality (12-15 minutes).
Overview
Purpose
This guide is designed to help you deploy your own rollup or Arbitrum Orbit chain integrated with the Espresso Network. It primarily focuses on trying out the integration in a testable environment, with a section at the end for deploying on the mainnet. Please note that the guide is based on certain trust assumptions and may not be suitable for production use. It provides instructions for running the rollup locally using Docker or on the cloud.
How It Works
In a regular chain, the transaction lifecycle will look something like this:
A user transacts on an Arbitrum chain.
The transaction is processed by the chain's sequencer, which provides a soft confirmation to the user, and the transactions are packaged into a block.
The sequencer collects these blocks, compresses them, and submits the transactions to the base layer.
If the base layer is Arbitrum One or Ethereum, the transaction will take at least 12-15 minutes to finalize, or longer, depending on how frequently the sequencer posts to the base layer.
In this transaction lifecycle, the user must trust that the chain's sequencer provided an honest soft-confirmation and will not act maliciously. There are limited ways to verify that the sequencer and batcher acted honestly or did not censor transactions.
This reliance on trust is a strong assumption, and it's where the Espresso Network provides significant benefits. When the chain is integrated with the Espresso Network, the following enhancements occur:
The sequencer provides a soft confirmation to the user, while the transactions are also sent to the Espresso Network to receive a stronger confirmation secured by Byzantine Fault Tolerance (BFT) consensus.
A software component of the sequencer, called the batch poster (henceforth referred to as "batcher"), operates inside a Trusted Execution Environment (TEE) and must honor the Espresso Network confirmation. It cannot change the ordering or equivocate.
This setup provides a strong guarantee that the transaction will ultimately be included and finalized by the base layer.
While the user must still trust that the chain's sequencer has provided an honest soft confirmation, the Espresso Network offers a stronger confirmation that holds the sequencer accountable and prevents it from equivocating or acting maliciously. The initial implementation of the batch poster is permissioned, and the user must trust that it will not reorder blocks produced by the sequencer.
For a comprehensive overview of how the Espresso Network integrates with your rollup—including details on the architecture, component interactions, and overall flow—please refer to this integration guide.
Running Your Own
Integrating with the Espresso Network requires minimal changes to Arbitrum Nitro's existing rollup design. The Espresso team has already made those changes, and in the following sections we provide a comprehensive guide to running your own instance and building on Espresso!
Components
We model the rollup as a collection of three components:
The sequencer
The batcher
The TEE contract (which we mock in this example)
Deploying the Arbitrum Orbit Chain in the Cloud
Please note that this guide is meant to facilitate the deployment of a testable instance of an Arbitrum Orbit chain with Espresso and should not be assumed to be production-ready infrastructure.
Note: This guide is based on deploying your own rollup on Arbitrum Sepolia. A dedicated section at the end of the guide outlines all the modifications needed for deployment on Arbitrum One.
0. Install Requisite Dependencies
Ensure you have Node.js 16, yarn, foundry, git and build-essential tools installed on your system before proceeding.
1. Deploy the Contracts
First, clone the contracts repository and set up the development environment:
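For example (the repository URL below is an assumption; use the Espresso nitro-contracts repository referenced by this guide):

```bash
# Clone the contracts repository and enter it
# (URL is an assumption; substitute the repository this guide points to)
git clone https://github.com/EspressoSystems/nitro-contracts.git
cd nitro-contracts
```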
Install the dependencies and build the project (this may take several minutes):
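A typical sequence looks like the following; the script names are assumptions, so check the repository's package.json if they differ:

```bash
# Install Node.js dependencies and compile the contracts (may take several minutes)
yarn install
yarn build
```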
Create a .env file with the following variables:
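A minimal sketch of the .env file; the variable names are illustrative assumptions, so match them to the names the deployment scripts and config.example.ts actually read:

```bash
# .env (variable names are assumptions; check the repository for the exact names)
ARBISCAN_API_KEY=your-arbiscan-api-key
PRIVATE_KEY=0xyour-rollup-owner-private-key
```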
You can get your Arbiscan API key from the Arbiscan website; you need an account in order to get one. As for the private key, choose any address you want to use as the owner of the rollup. The amount required to deploy all the rollup contracts is around 0.15 ETH.
The contract above is a mock TEE verifier that will be used to test the rollup; it always returns true for any input. In this guide, we are thus assuming that the batch poster will not act maliciously, because it is not operating inside a TEE.
2. Configure Deployment
There is a config.example.ts file in the scripts folder that shows what the config file should look like. There is also a config.template.ts file that you can use to create your own config file.
1. Rename config.template.ts to config.ts.
2. Update the following values in config.ts:
Important Notes:
chainId: Ensure that the chainId values in both configuration fields are identical and unique.
Validators/Stakers: The validators array only requires one address, though you may add more if needed. These addresses need a minimal amount of funds (approximately 0.00003 ETH) each time they stake.
Batch Poster: The batchPosterAddress and batchPosterManager can be the same, but they should differ from the validators. A very small amount of funds (approximately 0.00001 ETH) is required for posting batches.
3. Run Deployment
Execute the deployment script:
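As a sketch only, assuming a Hardhat-style deployment script; the script path is an assumption, so use the exact command from the repository's README:

```bash
# Deploy the rollup creator and supporting contracts to Arbitrum Sepolia
# (script path is an assumption; consult the repository README for the real command)
npx hardhat run scripts/deployment.ts --network arbSepolia
```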
Note: You can ignore the message "env var ESPRESSO_LIGHT_CLIENT_ADDRESS not set..." - this is only needed for RollupCreator deployment.
Add deployed rollup creator address to .env
The previous deployment script will output the address of the rollup creator. Add this address to your .env:
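For example (the variable name is an assumption; use whatever name the rollup-creation script expects):

```bash
# Append the rollup creator address printed by the previous step
ROLLUP_CREATOR_ADDRESS=0x...   # variable name is illustrative
```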
Deploy the Rollup Proxy Contract
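Again as a sketch, assuming a Hardhat-style script that calls the rollup creator deployed above; the script name is an assumption, so run the rollup-creation command documented in the repository:

```bash
# Create the rollup proxy contracts through the rollup creator
# (script path is an assumption; check the repository's scripts folder)
npx hardhat run scripts/createRollup.ts --network arbSepolia
```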
Keep the terminal open with the logged addresses, block number, and other values, as you will need them when configuring the chain in later sections of this guide. You can also find most of the contract addresses in espresso-deployments/arbSepolia.json. The upgrade executor contract address from this JSON file will be needed for the next section.
Configuring and Running the Chain
The docker configuration can be found in the espresso-build-something-real repository.
1. Clone and Configure the Repository
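For example (the repository URL is an assumption; use the espresso-build-something-real repository mentioned above):

```bash
# Clone the repository containing the Docker configuration and enter it
git clone https://github.com/EspressoSystems/espresso-build-something-real.git
cd espresso-build-something-real
```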
2. Update Configuration Files
You'll need to modify two configuration files with the deployment addresses, keys, IDs, and RPC URL from the previous steps:
Files to update:
config/full_node.json
config/l2_chain_info.json
Required updates:
In config/l2_chain_info.json:
Set chainId under chain-config to match your rollup's chain ID
Set InitialChainOwner to the address of the rollup owner
Set the rollup smart contract addresses from the previous deployment
Update deployed-at to the block number where the rollup proxy was created
In config/full_node.json:
Add the Arbitrum RPC URL with your API key to the url field
Set id under chain to match your rollup's chain ID
Update private-key for both the staker (validator) and batch poster addresses
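As a quick sanity check after editing, you can confirm that the two chain IDs match. The JSON paths below are assumptions based on the field names described above, so adjust them to the actual structure of your files (requires jq):

```bash
# Both commands should print the same chain ID (JSON paths are assumptions)
jq '.. | objects | select(has("chainId")) | .chainId' config/l2_chain_info.json
jq '.chain.id' config/full_node.json
```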
3. Run the Chain
For those seeking to evaluate their infrastructure and to get a clearer picture of what a "working" implementation looks like, we have made available a Docker Compose configuration for local development. The configuration included in this repository is ready to use as is – it will run your rollup locally. For cloud deployment details, see the Cloud Configuration section at the end of the guide.
Start the chain using Docker:
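From the root of the repository (assuming the compose file sits there):

```bash
# Start the sequencer, batch poster, and supporting services in the background
docker compose up -d

# Follow the logs to watch the startup process described below
docker compose logs -f
```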
Understanding the Startup Process
During startup, you'll see various logs and warnings. Here's what to expect and how to interpret them:
Initial Staker Warnings
This is normal; it means that the staker doesn't have any new nodes to stake on.
Batch Validation Process
These logs show the batch poster working to validate and process transactions.
Successful Batch Processing
When you see the following logs, it indicates successful batch processing:
Important Notes:
Batch posting can take anywhere from 1 to 30 minutes after a user has sent the transaction.
Verify successful operation by checking the sequencerInbox contract on the Arbitrum Sepolia explorer.
Occasional staker warnings occur when there are no new nodes to stake on.
4. Testing the Chain
To verify your chain is running correctly:
Check Confirmed Nodes by the Validator/Staker
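For example, using Foundry's cast against the rollup contract you deployed (the address placeholder is yours to fill in; latestConfirmed() is exposed by the Arbitrum rollup contract):

```bash
# Query the rollup contract on Arbitrum Sepolia for the latest confirmed node
cast call <ROLLUP_PROXY_ADDRESS> "latestConfirmed()(uint64)" \
  --rpc-url https://arbitrum-sepolia-rpc.publicnode.com
```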
Test bridge functionality:
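A minimal sketch using the inbox contract's depositEth function; the inbox address comes from your deployment output, and the amount is just an example:

```bash
# Bridge ETH from Arbitrum Sepolia into your rollup via the inbox contract
cast send <INBOX_ADDRESS> "depositEth()" \
  --value 0.01ether \
  --rpc-url https://arbitrum-sepolia-rpc.publicnode.com \
  --private-key <YOUR_PRIVATE_KEY>
```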
Note: Bridging transactions can take up to 15 minutes to finalize.
Verify your balance:
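For example, against the rollup's local RPC endpoint (the port is an assumption; use the one exposed by the Docker Compose file):

```bash
# Check your balance on the rollup (port 8547 is an assumed default)
cast balance <YOUR_ADDRESS> --rpc-url http://localhost:8547
```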
Test sending transactions:
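For example (addresses are placeholders and the port is the same assumption as above):

```bash
# Send 1 wei to a recipient on the rollup
cast send <RECIPIENT_ADDRESS> \
  --value 1 \
  --rpc-url http://localhost:8547 \
  --private-key <YOUR_PRIVATE_KEY>
```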
For a more consistent test, you can also continuously send transactions to the rollup. This approach simulates a more realistic environment by continually submitting transactions, allowing you to see how the system handles ongoing activity. (See the next section for details.)
Check recipient balance:
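Using the same assumed endpoint:

```bash
# The recipient's balance should now reflect the amount sent above
cast balance <RECIPIENT_ADDRESS> --rpc-url http://localhost:8547
```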
If successful, the recipient's balance should show 1 wei or the amount you sent if different.
Transaction Flow Generator
If you want to generate test transactions on your rollup, navigate to the tx-generator repository subfolder and follow the README instructions:
This script continuously generates transactions to help you evaluate your rollup and the Espresso Network.
Hotshot Query Tool
You can also use this project in conjunction with the transaction generator to verify that the transactions you generate are properly submitted to Hotshot. By inputting the correct chain ID in the config, the Hotshot Query Tool—a simple Go project—fetches and prints namespace transactions from the Hotshot query service. This tool sends HTTP requests and can be easily adapted for other API endpoints as needed.
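As a quick manual check, you can also hit the HotShot query service directly over HTTP; the endpoint path below is an assumption based on the public status API, so adjust it to the query-service URL configured for your environment:

```bash
# Fetch the current HotShot block height from the query service (endpoint path assumed)
curl https://query.main.net.espresso.network/v0/status/block-height
```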
Deploying Your Rollup on Mainnet
To deploy your rollup on Arbitrum mainnet, update your configuration files with the appropriate parameters and follow the guide. Below are the key changes you need to make, along with references to the relevant sections of this guide:
TEE Verifier Address: Set the mock Espresso TEE verifier address to:
Mainnet: 0xE68c322e548c3a43C528091A3059F3278e0274Ed
Testnet: 0x8354db765810dF8F24f1477B06e91E5b17a408bF
Refer to Deploy the Contracts.
Network Selection: Change the network in the nitro-contracts repository from arbSepolia to arb1 when running smart contract and deployment scripts. Refer to Run Deployment.
Batch Poster Settings: Update the batch-poster configuration in the config/full_node.json file by:
Changing the hotshot URL to https://query.main.net.espresso.network/v0
Setting the light-client address to 0x61f627c6785503b6d83e4d59611af7361210ae64
Refer to Update Configuration Files.
Parent Chain ID: In l2_chain_info.json, change the parent-chain-id from 421614 to 42161 and optionally adjust the chain-name. Refer to Update Configuration Files.
RPC Endpoint: In full_node.json, update the RPC URL to the mainnet endpoint. When testing, change the RPC URL from https://arbitrum-sepolia-rpc.publicnode.com to https://arbitrum-one-rpc.publicnode.com. Refer to both Update Configuration Files and Testing the Chain.
Apply these modifications to ensure your rollup is properly configured for mainnet deployment.
Cloud Configuration
Info: If you want to get up and running with this and you already know about cloud configuration, you can take the docker compose file and modify it as needed. If you aren't familiar with cloud environments, read on. (Note: This setup is for AWS, but it should work with any cloud.)
Booting A Chain on EC2
The first step is to launch an EC2 instance, which is a simple process. First, go into the AWS console and either search for EC2 or select it from the quick select if you've used it before.
From here, we can configure the EC2 launch configuration. You can leave everything default, but feel free to change the settings if you've done this before. Under Instance Type, you can select t3.medium or t3.large, but any cloud instance with at least 4 gigabytes of RAM and 2 CPU cores should be sufficient.
Info: Please note that in our testing t3.medium seems to meet the requirements, but if you encounter instability, you might need to upgrade to the larger instance.
From here, make sure you configure your key pair, otherwise you will not have SSH access to the machine. You can use the auto generated security group for the instance. Make sure you allow SSH traffic as well. The following image should mostly reflect your configuration:
Security Note: Keep your key pair secure and do not share it with others. Ensure that the permissions on your key file are set to be readable only by you (e.g., chmod 400 your-key.pem).
Preparing Your Environment
Move Key to .ssh Folder:
Move your key from the downloads folder, or any other folder, to the .ssh folder:
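For example (adjust the file name and source path to match your download):

```bash
# Move the downloaded key into ~/.ssh and restrict its permissions
mv ~/Downloads/your-key.pem ~/.ssh/
chmod 400 ~/.ssh/your-key.pem
```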
Test Your Connection:
Use ping to test your connection with the instance's IPv4 address, as shown below. If you encounter "Connection refused" or "Communication prohibited by filter," try using a different network, like mobile data.
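For example, with your instance's public IPv4 address:

```bash
# Replace with your instance's public IPv4 address
ping <instance-public-ipv4>
```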
Connect to Your EC2 Instance:
Run this command to connect to your EC2 instance using your key:
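For example (ec2-user is the default user on Amazon Linux; adjust it if you chose a different AMI):

```bash
# Connect to the instance with your key pair
ssh -i ~/.ssh/your-key.pem ec2-user@your-host
```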
Note: Replace your-host with either the public DNS or IP address of your EC2 instance.
Transferring Files to Your EC2 Instance
Create Config Folder:
On another terminal, run the following to create a config folder on your instance:
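For example, from your local machine:

```bash
# Create a config directory on the instance over SSH
ssh -i ~/.ssh/your-key.pem ec2-user@your-host "mkdir -p ~/config"
```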
Note: For this first step, you can also run the mkdir command from the instance terminal.
Move Config Files to Instance:
From the root of your git repo, transfer your config files to the instance's config folder:
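For example (the file names match the ones edited earlier in this guide):

```bash
# Copy the configuration files to the instance's config folder
scp -i ~/.ssh/your-key.pem config/full_node.json config/l2_chain_info.json \
  ec2-user@your-host:~/config/
```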
Move Docker File to Instance:
Similarly, transfer the Docker Compose file to your instance:
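For example (the compose file name is an assumption; use the name it has in the repository):

```bash
# Copy the Docker Compose file to the instance's home directory
scp -i ~/.ssh/your-key.pem docker-compose.yml ec2-user@your-host:~/
```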
Configuring Docker
From inside your instance, install Docker with the following steps:
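A sketch for Amazon Linux 2023 (the distribution assumed by the note below); other distributions use their own package managers:

```bash
# Install and start Docker
sudo dnf install -y docker
sudo systemctl enable --now docker

# Allow ec2-user to run Docker without sudo (takes effect after logging out and back in)
sudo usermod -aG docker ec2-user
```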
You can now log out and back in, and your user, ec2-user, should have Docker access without needing sudo. The last thing we need is Docker Compose. Unfortunately, at the time of writing, Amazon Linux 2023 has an older distribution of Docker, which does not yet support the compose subcommand. To access Docker Compose, you need to download it by executing the following steps:
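One way to do this, following Docker's documented manual install of the Compose CLI plugin (the pinned version is just an example; check the Compose releases page for the latest):

```bash
# Install the Docker Compose CLI plugin for the current user
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p "$DOCKER_CONFIG/cli-plugins"

curl -SL https://github.com/docker/compose/releases/download/v2.24.6/docker-compose-linux-x86_64 \
  -o "$DOCKER_CONFIG/cli-plugins/docker-compose"
chmod +x "$DOCKER_CONFIG/cli-plugins/docker-compose"

# Verify that the plugin is picked up
docker compose version
```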
Note: Before continuing, exit and reconnect to your instance by running exit in the terminal.
Running Docker Compose
Connect to EC2 Instance:
Return to the terminal connected to your EC2 instance, or reconnect if you have logged out. You can find the connection command in the Preparing Your Environment section.
Run Docker Compose:
Execute the following command to start your services:
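From the directory on the instance that contains the compose file:

```bash
# Start the services in the background
docker compose up -d
```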
Handle Permission Errors:
If you encounter a permission error when a container tries to write to one of its mounted directories:
Create the necessary folders and set permissions:
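A sketch only; the directory names and ownership below are assumptions, so match them to the volumes and user declared in your Docker Compose file:

```bash
# Create the host directories the containers mount (names are assumptions)
mkdir -p ~/config ~/database

# Give the container's user write access; 1000:1000 is a common container UID/GID,
# adjust it to whatever user the images in your compose file run as
sudo chown -R 1000:1000 ~/config ~/database
```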
This gives the Docker container's user permission to write to these directories.
Testing the Connection
You can now test the connection to your rollup by checking a balance:
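For example (the RPC port is the same assumption as in the local test; make sure the instance's security group allows inbound traffic on it):

```bash
# Query the rollup RPC exposed by the instance
cast balance <YOUR_ADDRESS> --rpc-url http://<instance-public-ipv4>:8547
```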