Overview

Span Batch is a new batch spec that minimizes the data posted to L1 by removing overhead and encoding a range of consecutive L2 blocks into a single batch.

We expect Span Batch to reduce maintenance costs for OP Chains by minimizing the amount of data posted to L1, especially for sparse or inactive chains.

Span Batch requires a hard fork since it is a consensus-breaking change. The spec and code should be carefully reviewed.

Kudos to Proto for the initial design 🍍

Background

Summary

  1. OP Chains are required to post at least 12,735 bytes every 590 seconds even when there are no transactions, which costs 0.45 ETH a day (reference).
  2. This overhead exists because the current batch spec requires posting every L2 block's information to L1.
  3. The new batch spec, Span Batch, reduces the overhead by up to 97% by encoding multiple consecutive blocks into a single batch when the chain is inactive.

An OP Chain is required to post 12,735 bytes every 590 seconds under the current spec even when there are no transactions, which costs 0.45 ETH a day at 13.7 gwei (reference). This is mainly because the current batch spec requires posting every L2 block's information to L1 as follows:

The current batching spec

A batch must be posted for every L2 block according to the scheme above, even when transaction_list is empty because the L2 block contains no transactions. This means there is a fixed-length overhead for every L2 block.
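The fixed overhead can be sketched as follows. The field names below follow the OP Stack batch spec (parent_hash, epoch_number, epoch_hash, timestamp, transaction_list), but the sizes are illustrative placeholders, not the exact RLP wire encoding:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SingularBatch:
    parent_hash: bytes        # 32 bytes: hash of the previous L2 block
    epoch_number: int         # L1 origin block number
    epoch_hash: bytes         # 32 bytes: L1 origin block hash
    timestamp: int            # L2 block timestamp
    transaction_list: List[bytes] = field(default_factory=list)

    def fixed_overhead(self) -> int:
        # Bytes posted even when transaction_list is empty:
        # two 32-byte hashes plus two 8-byte integers in this sketch.
        return 32 + 8 + 32 + 8

# An empty L2 block still pays the fixed overhead on L1.
empty = SingularBatch(parent_hash=b"\x00" * 32, epoch_number=100,
                      epoch_hash=b"\x00" * 32, timestamp=1_700_000_000)
print(empty.fixed_overhead())  # 80 bytes of overhead with no transactions
```

Because these fields are repeated once per L2 block, an inactive chain pays this cost over and over for blocks that carry no transactions at all.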

Proto wrote an initial spec for Span Batch to minimize this fixed overhead for sparse chains by encoding multiple consecutive blocks into a single batch.

Detailed Solutions

Encoding Consecutive Blocks into a Single Batch

Span Batch aims to reduce the data overhead incurred by every L2 block while preserving safe chain derivation. We achieve this by accumulating consecutive L2 blocks into a single batch, removing repeated and predictable information, and adding just enough information to derive the consecutive blocks.

For instance, we don’t have to post parent_hash for every L2 block when a batch represents consecutive L2 blocks, because the node only needs to verify that the first L2 block of the batch starts from its safe head.

We can save 3,200 bytes when a batch contains 101 blocks by changing the representation of the batch from a single block to consecutive blocks: parent_hash_size (32 bytes) * (101 blocks - 1 initial block).
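The arithmetic above can be checked with a minimal sketch. Only the parent_hash field is considered here; the 101-block span and 32-byte hash size come from the example in the text:

```python
# Savings from dropping repeated parent_hash fields when 101 consecutive
# L2 blocks are encoded as one span batch. Only the first block's
# parent_hash is needed to check the batch extends the safe head.
PARENT_HASH_SIZE = 32   # bytes
blocks_in_span = 101

# Per-block batches: every block carries its own parent_hash.
singular_cost = PARENT_HASH_SIZE * blocks_in_span

# Span batch: one parent_hash for the whole range.
span_cost = PARENT_HASH_SIZE * 1

saved = singular_cost - span_cost
print(saved)  # 3200 bytes saved: 32 * (101 - 1)
```

The same reasoning applies to other repeated or predictable fields, such as timestamps that increase by a fixed block time, which is where the larger overall savings come from.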

Please refer to the spec docs for the exact encoding and the rationale behind it.