Chapter 20
Nitro Batcher

Requirements: Nitro Integration Requirements


Contents



20.1 Overview
20.2 Message Format
20.2.1 Upgrades
20.3 Key Management
20.4 Startup
20.5 Task 1: Sequencer → Espresso
20.5.1 High Level Flow
Database
Transaction Submission Process
Finality Polling Process
20.5.2 Low Level Functions
SubmitTransactionsToEspresso Task
BuildHotShotPayloads
Finality Polling Process
CheckEspressoQueryNodesForTransaction
ResubmitTransactionIfPastDelay
SubmitType2Tx
20.6 Task 2: Espresso → L1
20.6.1 MaybePostSequencerBatch
20.6.2 Submission To Sequencer Inbox
Signing
Checkpoint
20.6.3 Resetting



20.1 Overview

This page documents the technical design of the batcher component of the Arbitrum Nitro integration.

The batcher consists of two tasks, which form a pipeline: Task 1 reads messages from the sequencer and sends them to Espresso. Task 2 reads messages from Espresso and sends them to L1.

Note that Task 1 need not be trusted for safety/integrity: neither the existing Nitro stack nor our added requirements make any guarantees about what happens prior to a valid confirmation being posted to Espresso (i.e. the sequencer is already untrusted).

We thus focus most of our attention on Task 2, which is designed to be stateless, self-contained, and maximally simple while upholding all the requirements save Liveness, which is instead left up to the rollup sequencer and Task 1.

20.2 Message Format

We have a few goals for the format in which messages are sent by Task 1 (Sequencer → Espresso) and decoded by Task 2 (Espresso → L1).

Flexibility

Support future versions of the format while retaining backwards compatibility

Large messages

Handle messages which are larger than Espresso’s maximum block size. Nitro has no explicit block size limit. There is an implicit limit based on the gas limit, but it is fairly large (at least 10 MB). Thus, we need to be able to handle the edge case where a message is broken into several Espresso transactions and then reconstructed when reading from Espresso.

The first property is solved with a type field. This allows us to handle different types of messages for different use cases, add new message types while remaining compatible with existing types, and add new format versions as new message types, deprecating old types.

The second property, handling of large messages, is dealt with using chunks. Each message confirmed by Espresso may reference other (earlier) Espresso transactions, whose data is concatenated with the data contained in the original message when reconstructing the message. This allows a large Nitro message to be spread out over multiple Espresso transactions and blocks.

Note: 

This mechanism makes it possible to commit a large Nitro message to Espresso; however, performance suffers. The message must be committed after all of its chunks, requiring double the latency to confirm first the chunks (in any order) and then the message. Thus, this mechanism is best suited to handling occasional short bursts of rollup demand, but is not a solution for rollups whose average throughput over a long period of time exceeds Espresso's capacity.
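To make reconstruction concrete, the following is a minimal sketch of how a reader (such as the Nitro Espresso streamer) might reassemble a chunked message, assuming Go types mirroring the tables below (Type2Message, ChunkMessage, ChunkRef) and a hypothetical fetchChunk helper that retrieves a chunk transaction by block height and index within the rollup's namespace; it is not the actual implementation.

// Sketch only: reassemble the original Nitro message bytes from a type 2
// message and its referenced chunks. The type names and the fetchChunk
// helper are assumptions of this example.
func reassembleMessage(ctx context.Context, msg Type2Message,
    fetchChunk func(ctx context.Context, block uint64, index uint64) (ChunkMessage, error)) ([]byte, error) {
    var data []byte
    // Chunk data is concatenated in the order of the chunk references...
    for _, ref := range msg.ChunkRefs {
        chunk, err := fetchChunk(ctx, ref.Block, ref.Index)
        if err != nil {
            return nil, err
        }
        data = append(data, chunk.Data...)
    }
    // ...followed by the payload data carried in the type 2 message itself.
    return append(data, msg.Data...), nil
}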




Field        Size (bytes)      Description
Type         1                 1 for a normal message
Position     8                 Nitro message position
Signature    32                Sequencer signature over the message
Length       8                 Bytes of payload data in this message
Data         Variable          Serialized message
Total        49 + Length

Table 20.1: Message type 1: Normal messages




Field        Size (bytes)                     Description
Type         1                                2 for a message with chunks
Position     8                                Nitro message position
Signature    32                               Sequencer signature over the message
NumChunks    1                                Number of chunks corresponding to this message
ChunkRefs    16 × NumChunks                   Chunk references corresponding to this message
PoW          8                                Proof of work witness, used to prevent spam
Length       8                                Bytes of payload data for this message in this Espresso transaction
Data         Variable                         Payload data comprising part of the message
Total        58 + (16 × NumChunks) + Length

Table 20.2: Message type 2: Large messages with chunks




Field        Size (bytes)      Description
Type         1                 3 for a data chunk
Data         Variable          Chunk data comprising part of a message
Total        Variable

Table 20.3: Message type 3: Chunks (ignored until referenced by a type 2 message)




Field        Size (bytes)      Description
Block        8                 Espresso block number containing the chunk
Index        8                 Index of chunk within its namespace
Total        16

Table 20.4: ChunkRef Format, Version 1
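To illustrate the layout in Table 20.1, a minimal type 1 serializer might look as follows. The Type1Message struct and the 49-byte overhead mirror the table; the use of big-endian integer encoding is an assumption of this sketch, not a statement about the actual wire format.

// Sketch only: serialize a type 1 (normal) message per Table 20.1.
// Big-endian integer encoding is an assumption of this example.
type Type1Message struct {
    Position  uint64   // Nitro message position
    Signature [32]byte // Sequencer signature over the message
    Data      []byte   // Serialized message
}

func serializeType1(msg Type1Message) []byte {
    buf := make([]byte, 0, 49+len(msg.Data))
    buf = append(buf, 1) // Type: 1 for a normal message
    buf = binary.BigEndian.AppendUint64(buf, msg.Position)
    buf = append(buf, msg.Signature[:]...)
    buf = binary.BigEndian.AppendUint64(buf, uint64(len(msg.Data))) // Length
    return append(buf, msg.Data...)
}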

20.2.1 Upgrades

This message format provides a path for upgrading the protocol for obtaining Espresso commitments and reading them back. New message types can always be added by:

1. First upgrading the Nitro Espresso streamer (which handles reading from Espresso on all nodes, including the batcher and Nitro caffeinated node) to handle the new type of message appropriately
2. Upgrading the batcher to produce the new type of message when sending to Espresso, after all relevant nodes have updated their Espresso streamer

Warning: 

Once the batcher starts sending the new message type, any node which has not upgraded their Espresso streamer may derive the wrong state of the rollup, by ignoring messages of the new type. Upgrades should be communicated publicly, and directly to all relevant parties well in advance of the batcher upgrade.

In addition, it is possible to deprecate or replace a message type. For example, suppose a security vulnerability was discovered in message type 1 (normal messages) which is impossible to resolve without a format change. A new message type (say 4) can be created which conveys normal messages without the vulnerability, and then all Espresso streamers can be upgraded to handle both the insecure message type 1 and the new message type 4, with a rule that after they have seen the first instance of message type 4, they no longer accept type 1. This creates a version of the streamer which is backwards compatible and capable of handling the upgrade automatically once it is triggered.

The upgrade is then triggered by upgrading the batcher to produce message type 4 instead of message type 1. Finally, all nodes can be upgraded to remove the insecure dead code for handling message type 1.
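As a sketch of how the streamer side of this hypothetical type 1 → type 4 upgrade could be implemented (the handler and field names below are illustrative, not the actual streamer API):

// Sketch only: per-message type dispatch in an Espresso streamer that
// supports the hypothetical upgrade from type 1 to type 4 described above.
func (s *EspressoStreamer) handleMessage(msgType byte, body []byte) error {
    switch msgType {
    case 1:
        if s.seenType4 {
            // After the first type 4 message, insecure type 1 messages are ignored.
            return nil
        }
        return s.handleType1(body)
    case 4:
        s.seenType4 = true
        return s.handleType4(body)
    default:
        // Unknown types are ignored, preserving forward compatibility.
        return nil
    }
}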

20.3 Key Management

The batcher maintains two sets of keys with different purposes:

Batcher Key

The key which is registered with the rollup chain config as the centralized batcher key. In this key is vested the authority to add batches to the L1, and thus the ultimate authority to determine the sequence of inputs processed by the rollup. Thus, this key also acts as a centralized sequencing key.

This key may exist outside of the TEE enclave running the batcher, although the private key will need to be passed into the enclave in order for it to function.

Ephemeral Key

A key generated inside the enclave which never leaves it. Thus, signatures from this key must originate inside the enclave. This is a way of proving some data originated from or was endorsed by the code running in the enclave. This is similar to producing a TEE attestation, but these signatures are cheaper to verify than the full TEE attestation.

The batcher must have both sets of keys in order to successfully post a batch; the former proves to the derivation pipeline that a batch is originating with the centralized sequencer, while the latter proves to the inbox contract that the batch is originating from within the TEE enclave.
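As an example of why these signatures are cheap to check, any party holding the attested ephemeral address can verify an enclave signature with geth's crypto package; the digest construction below is illustrative only, not the actual protocol.

// Sketch only: verify that a signature was produced by the enclave's ephemeral
// key. expectedAddress is the attested address registered with the inbox; the
// digest construction is an assumption of this example.
func verifyEnclaveSignature(expectedAddress common.Address, data []byte, sig []byte) (bool, error) {
    digest := crypto.Keccak256(data)
    pubKey, err := crypto.SigToPub(digest, sig)
    if err != nil {
        return false, err
    }
    return crypto.PubkeyToAddress(*pubKey) == expectedAddress, nil
}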

20.4 Startup

On startup the Nitro batcher will generate its ephemeral key, a random secp256k1 private and public key pair. The batcher will then send the public key of this keypair to the SequencerInbox contract, along with an attestation quote over the public key. This will be used by the sequencer inbox to skip validating the attestation quote with every batch.

Tech Debt: 

The assumption that fetching from a majority of the query nodes is a valid way to determine finality only holds if the majority of the query service nodes are honest; in the future we should verify the namespace and Merkle proofs here.

The batch poster's startup code will use the geth crypto library to generate this key like so:
Listing 20.1: Key creation process
privateKey, err := crypto.GenerateKey()         // GenerateKey returns (*ecdsa.PrivateKey, error)
publicKey := privateKey.Public()
ecdsaPubKey, ok := publicKey.(*ecdsa.PublicKey) // a type assertion yields a bool, not an error
publicKeyAddress := crypto.PubkeyToAddress(*ecdsaPubKey).Hex()

This key will then be sent to the Arbitrum Nitro sequencer inbox along with an attestation quote over the key. The sequencer inbox will verify the attestation to this public key and store it in an array of attested keys that may post batches. After this, posting a batch will require that the batcher include a signature over the batch data, produced with one of these keys, as detailed in the Signing section; otherwise posting the batch should revert.

20.5 Task 1: Sequencer → Espresso

This task has two sub-processes, which run in the current transaction streamer: one handles submitting transactions to Espresso, and the other polls submitted transactions for finality and resubmits any that have not been finalized by a resubmission deadline.

20.5.1 High Level Flow

The overall flow of a transaction through Task 1, from sequencing to confirmation on HotShot, is as follows:

1. The sequencer creates a message, signs it, and publishes it on the sequencer feed
2. The batcher's transaction streamer receives the message, verifies the signature, and stores both the message and the signature in its database
3. The submission task picks up the message from the database.
   (a) In the common case, the entire message fits within a type 1 transaction. The submission task creates a type 1 transaction including this message, submits the transaction to HotShot, and stores the pending transaction in the database.
   (b) If the message is very large, the submission task creates one or more type 3 chunk transactions containing pieces of the message, and a single type 2 transaction referencing the chunks. It submits the chunk transactions and stores all of the type 2 and 3 transactions as pending transactions in its database.
4. The resubmission task picks up the pending HotShot transaction from the database and queries for finality. If the transaction is not finalized quickly enough, the resubmission task will submit it to HotShot again.
5. If the transaction is finalized and is a chunk transaction, the resubmission task looks up the corresponding type 2 transaction from the database and adds a reference to the finalized chunk transaction. If this is the last chunk transaction corresponding to the type 2 transaction, the resubmission task sends the type 2 transaction to Espresso.


Figure 20.1: Transaction flow

Database

To facilitate communication between the Transaction Submission Process and the Finality Polling Process, and to enable the batcher to drive transaction submissions to completion even in case of restarts, three structures are added to the database to store information about pending transactions.

First, we store information about each pending transaction. This is used by the resubmission task to check the status of a transaction and resubmit it if necessary. The following data structure is stored as a list:

type SubmittedEspressoTx struct {
    Hash        string
    Payload     []byte
    SubmittedAt *time.Time
    Type2Tx     *MessageIndex
    ChunkIndex  *uint8
}

The Type2Tx field is a reference to the type 2 transaction corresponding to this transaction, if this is a chunk transaction. If present, ChunkIndex is also present, and indicates the offset of this chunk transaction in the list of chunk references belonging to the type 2 transaction.

These fields are used to populate the list of chunk references in the type 2 transaction as its various chunk transactions get confirmed. Type2Tx references the second new data structure, a set of chunk reference lists, indexed by message position:

type PendingChunkRefs struct {
    Position      MessageIndex
    ChunkRefs     []ChunkRef
    MissingChunks uint8
}

Finally, the third new data structure contains all the remaining information to combine with PendingChunkRefs to form a complete type 2 transaction, once all chunks have been confirmed. These are also indexed by position.

type PendingType2Tx struct {
    Position  MessageIndex
    Signature Signature
    Data      []byte
}

In addition to these new data structures, the database will also store a singleton variable tracking the latest submitted message position. This is used by the Transaction Submission Process to determine when new messages need to be submitted.
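A minimal sketch of the accessors for this singleton follows; the database key and the encoding are assumptions of this example, while the helper names match those used in the listings below.

// Sketch only: the key and the 8-byte big-endian encoding are illustrative.
var lastSubmittedTxnPosKey = []byte("espressoLastSubmittedTxnPos")

func (s *TransactionStreamer) getLastSubmittedTxnPos() (*MessageIndex, error) {
    data, err := s.db.Get(lastSubmittedTxnPosKey)
    if err != nil {
        // Treat "not found" as "nothing submitted yet"; real code would distinguish other errors.
        return nil, nil
    }
    pos := MessageIndex(binary.BigEndian.Uint64(data))
    return &pos, nil
}

func (s *TransactionStreamer) setLastSubmittedTxnPos(batch ethdb.Batch, pos MessageIndex) error {
    return batch.Put(lastSubmittedTxnPosKey, binary.BigEndian.AppendUint64(nil, uint64(pos)))
}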

Transaction Submission Process

This process will be responsible for consuming the backlog of message positions enqueued by the sequencer, crafting the payloads to be sent to HotShot, and then recording the messages as submitted in the database.

Finality Polling Process

This process will handle polling the transactions marked as submitted in the database for finality. In this implementation, finality only means that they can be fetched by hash from a majority of nodes on the query service. Once a transaction can be fetched in this way, we remove it from the submitted transactions queue; otherwise we resubmit it.

Once a transaction has been confirmed, this process is also tasked with updating the corresponding type 2 transaction (if this is a chunk transaction) and submitting it if all of its chunks have been confirmed.

20.5.2 Low Level Functions

SubmitTransactionsToEspresso Task

This task runs as a polled function that is started when the transaction streamer is started.

Listing 20.2: Start function of the Transaction Streamer
func (s *TransactionStreamer) Start(ctxIn context.Context) error {
    s.StopWaiter.Start(ctxIn, s)

    if s.lightClientReader != nil && s.espressoClient != nil {
        err := stopwaiter.CallIterativelyWith[struct{}](&s.StopWaiterSafe, s.submitTransactionsToEspresso, s.newSovereignTxNotifier)
        if err != nil {
            return err
        }
        err = stopwaiter.CallIterativelyWith[struct{}](&s.StopWaiterSafe, s.pollSubmittedTransactionForFinality, s.newSovereignTxNotifier)
        if err != nil {
            return err
        }
    }

    /* ... remainder of the existing Start function ... */
}

submitTransactionsToEspresso exists mainly as a wrapper function for submitEspressoTransactions that handles logging errors and returning a polling interval for the StopWaiter.

Listing 20.3: StopWaiter wrapper for submitEspressoTransactions
func (s *TransactionStreamer) submitTransactionsToEspresso(ctx context.Context, ignored struct{}) time.Duration {
    retryRate := s.espressoTxnsPollingInterval * 50
    err := s.submitEspressoTransactions(ctx)
    if err != nil {
        log.Error("failed to submit espresso transactions", "err", err)
        return retryRate
    }
    return s.espressoTxnsPollingInterval
}

The main logic for submission lives in the inner function submitEspressoTransactions. This function uses the previously stored position and iterates from that position + 1 to the current message count in the transaction streamer, submitting all messages to Espresso and then updating the latest submitted position in the database.

Listing 20.4: Main logic for transaction submission
func (s *TransactionStreamer) submitEspressoTransactions(ctx context.Context) error {
    /* unless otherwise noted all errors in this listing are propagated to the caller, but this code is omitted for brevity. */

    lastSubmittedTxnsPos, err := s.getLastSubmittedTxnPos()
    if lastSubmittedTxnsPos == nil && err == nil {
        // We are initializing and haven't recently submitted anything.
        // Initialize last submitted as the head of the database, and start submitting txns to Espresso.
        lastSubmittedTxnsPos = s.getMessageCount() - 1
    }
    from := lastSubmittedTxnsPos + 1
    to := s.getMessageCount() - 1
    espressoTxs, chunkRefs, type2Txs := s.buildHotShotPayloads(from, to)

    /* Submit transactions which don't depend on chunk references. */
    for i := range espressoTxs {
        tx := &espressoTxs[i]
        hash, err := s.espressoClient.SubmitTransaction(ctx, espressoTypes.Transaction{
            Payload:   tx.Payload,
            Namespace: s.chainConfig.ChainID.Uint64(),
        })
        tx.Hash = hash.String()
        now := time.Now()
        tx.SubmittedAt = &now
    }

    /* Update database. */
    s.espressoTxnsStateInsertionMutex.Lock()
    defer s.espressoTxnsStateInsertionMutex.Unlock()
    batch := s.db.NewBatch()

    submittedTxns, err := s.getEspressoSubmittedTxns()
    err = s.setEspressoSubmittedTxns(batch, append(submittedTxns, espressoTxs...))

    err = s.storePendingChunkRefs(batch, chunkRefs)
    err = s.storePendingType2Txs(batch, type2Txs)
    err = s.setLastSubmittedTxnPos(batch, to)
    err = batch.Write()

    return nil
}

BuildHotShotPayloads

buildHotShotPayloads constructs Espresso transactions for submitting a range of messages to HotShot. For each message starting at the given position, it reads from the database the message bytes and the sequencer signature. It then concatenates these for each message into a single buffer which forms the payload of the Espresso transaction. If necessary, it may split a large message into a number of Espresso transactions, some to confirm chunks and one defining the message and referencing the chunks.

It returns the list of Espresso transactions that are ready for immediate submission, the pending chunk reference lists, and the pending type 2 transactions.

Listing 20.5: Building the payload
func (s *TransactionStreamer) buildHotShotPayloads(from MessageIndex, to MessageIndex) ([]SubmittedEspressoTx, []PendingChunkRefs, []PendingType2Tx) {
    /* unless otherwise noted all errors in this listing are propagated to the caller, but this code is omitted for brevity. */

    var txs []SubmittedEspressoTx
    var currPayload []byte
    var chunkRefs []PendingChunkRefs
    var type2Txs []PendingType2Tx

    for p := from; p <= to; p++ {
        msgBytes, err := s.fetchMessage(p)
        sigBytes, err := s.fetchSequencerSignature(p)

        /* If this message on its own doesn't fit in a single Espresso payload, break it into chunks. */
        if 49+len(msgBytes) > s.espressoMaxTransactionSize {
            /* Create chunk transactions until the remaining data fits in the main type 2 transaction. */
            numChunks := 0
            offset := 0
            for 58+(16*numChunks)+(len(msgBytes)-offset) > s.espressoMaxTransactionSize {
                chunkEnd := min(offset+s.espressoMaxTransactionSize-1, len(msgBytes))
                payload := serialize(ChunkMessage{
                    Data: msgBytes[offset:chunkEnd],
                })
                pos := p // copy, so the pointer below does not alias the loop variable
                chunkIndex := uint8(numChunks)
                txs = append(txs, SubmittedEspressoTx{
                    Payload:    payload,
                    Type2Tx:    &pos,
                    ChunkIndex: &chunkIndex,
                })
                offset = chunkEnd
                numChunks++
            }

            /* Create a single type 2 transaction representing the overall message, and containing the remainder of the data. */
            type2Txs = append(type2Txs, PendingType2Tx{
                Position:  p,
                Signature: sigBytes,
                Data:      msgBytes[offset:],
            })

            /* Create a place to record the chunk references after the chunk transactions finalize. */
            chunkRefs = append(chunkRefs, PendingChunkRefs{
                Position:      p,
                ChunkRefs:     make([]ChunkRef, numChunks),
                MissingChunks: uint8(numChunks),
            })

            continue
        }

        /* If this message doesn't fit in the current payload, close the payload and start a new transaction. */
        if len(currPayload)+49+len(msgBytes) > s.espressoMaxTransactionSize {
            txs = append(txs, SubmittedEspressoTx{Payload: currPayload})
            currPayload = nil
        }

        /* Serialize the message. */
        serialized := serialize(Type1Message{
            Position:  p,
            Signature: sigBytes,
            Data:      msgBytes,
        })
        currPayload = append(currPayload, serialized...)
    }

    /* Make sure to include the final payload. */
    if currPayload != nil {
        txs = append(txs, SubmittedEspressoTx{Payload: currPayload})
    }
    return txs, chunkRefs, type2Txs
}

Finality Polling Process

The primary functionality resides in checkSubmittedTransactionForFinality. For each submitted transaction, this function checks whether the payload is in a finalized Espresso block using checkEspressoQueryNodesForTransaction.

For each transaction that is included in a block, we can remove the submitted transaction from our database, and, if it is a chunk transaction, check if its parent type 2 transaction is now ready for submission.

For each transaction that is not yet included in a block, if enough time has passed from when it was submitted, we update its submitted timestamp in the database and resubmit the transaction.

func (s *TransactionStreamer) checkSubmittedTransactionForFinality(ctx context.Context) error {
    /* unless otherwise noted all errors in this listing are propagated to the caller, but this code is omitted for brevity. */

    submittedTxns, err := s.getEspressoSubmittedTxns()
    var confirmedTxns []string

    for _, submittedTxn := range submittedTxns {
        hash := submittedTxn.Hash
        data, err := s.checkEspressoQueryNodesForTransaction(ctx, hash)
        if err != nil {
            /* If we are past the delay, resubmit the transaction in the lower level function and return a new SubmittedEspressoTx that we put in the resubmitted list. */
            s.resubmitTransactionIfPastDelay(ctx, submittedTxn)
            continue
        }

        confirmedTxns = append(confirmedTxns, hash)

        /* If this is a chunk transaction, update the parent type 2 transaction. */
        if submittedTxn.Type2Tx != nil {
            chunkRefs, err := s.getPendingChunkRefs(*submittedTxn.Type2Tx)
            chunkRefs.ChunkRefs[*submittedTxn.ChunkIndex] = ChunkRef{
                Block: data.Height,
                Index: data.Index,
            }
            chunkRefs.MissingChunks -= 1
            if chunkRefs.MissingChunks == 0 {
                /* The type 2 tx is ready to be submitted. */
                type2Tx, err := s.getPendingType2Tx(*submittedTxn.Type2Tx)
                err = s.submitType2Tx(chunkRefs, type2Tx)
                err = s.deletePendingType2Tx(*submittedTxn.Type2Tx)
            } else {
                err = s.updatePendingChunkRefs(chunkRefs)
            }
        }
    }

    // We have checked all transactions for finality and resubmitted transactions that needed to be resubmitted. Update the db batch.
    s.espressoTxnsStateInsertionMutex.Lock()
    defer s.espressoTxnsStateInsertionMutex.Unlock()

    batch := s.db.NewBatch()

    submittedTxns, err = s.getEspressoSubmittedTxns()
    var unconfirmedTxns []SubmittedEspressoTx
    for _, tx := range submittedTxns {
        if !slices.Contains(confirmedTxns, tx.Hash) {
            unconfirmedTxns = append(unconfirmedTxns, tx)
        }
    }
    err = s.setEspressoSubmittedTxns(batch, unconfirmedTxns)
    err = batch.Write()

    return nil
}

CheckEspressoQueryNodesForTransaction

func (s *TransactionStreamer) checkEspressoQueryNodesForTransaction(ctx context.Context, hash *types.TaggedBase64) (*espressoTypes.TransactionQueryData, error) {
    numNodesWithTransaction := 0
    var data *espressoTypes.TransactionQueryData
    for _, queryUrl := range s.queryUrls {
        // A client scoped to this query node (helper name illustrative).
        client := s.clientForQueryNode(queryUrl)
        resp, err := client.FetchTransactionByHash(ctx, hash)
        if err != nil {
            continue
        }
        data = &resp
        numNodesWithTransaction += 1
    }
    if numNodesWithTransaction > len(s.queryUrls)/2 {
        return data, nil
    }
    return nil, fmt.Errorf("wasn't able to fetch transaction from the majority of query nodes")
}

This function requires that we also add some state to the transaction streamer in the form of an array of query node URLs. This can be read in from the config when the transaction streamer struct is built.
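For example, the URLs could be carried in the transaction streamer configuration along these lines (the field and tag names are assumptions of this sketch):

// Sketch only: configuration carrying the Espresso query node URLs.
type TransactionStreamerConfig struct {
    // ... existing fields ...
    EspressoQueryUrls []string `koanf:"espresso-query-urls"`
}

// When the transaction streamer is constructed:
//     s.queryUrls = config.EspressoQueryUrls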

ResubmitTransactionIfPastDelay

This function will take in the previously submitted txn and resubmit it like so:

func (s *TransactionStreamer) resubmitTransactionIfPastDelay(ctx context.Context, submittedTxn SubmittedEspressoTx) *SubmittedEspressoTx {
    timeSinceSubmission := time.Since(*submittedTxn.SubmittedAt)
    if timeSinceSubmission > s.resubmitEspressoTxDeadline {
        hash, err := s.espressoClient.SubmitTransaction(ctx, espressoTypes.Transaction{
            Payload:   submittedTxn.Payload,
            Namespace: s.chainConfig.ChainID.Uint64(),
        })
        if err != nil {
            log.Warn("Failed to resubmit transaction with error", "error", err)
            return nil
        }
        // If we were successful in resubmitting, we build the new submitted tx object and return it to the caller.
        submittedAt := time.Now()
        resubmittedTxn := &SubmittedEspressoTx{
            Hash:        hash.String(),
            Payload:     submittedTxn.Payload,
            SubmittedAt: &submittedAt,
        }
        return resubmittedTxn
    }
    return nil
}

SubmitType2Tx

This function constructs and submits to Espresso a type 2 transaction after all of its chunk transactions have been confirmed.

func (s *TransactionStreamer) submitType2Tx(chunks PendingChunkRefs, tx PendingType2Tx) error

The type 2 transaction is constructed by combining the completed chunk reference list from PendingChunkRefs with the Position, Signature, and remaining Data stored in PendingType2Tx, following the format in Table 20.2.

Once constructed, this transaction is serialized and submitted to Espresso like any normal transaction, and a new SubmittedEspressoTx object is created and stored in the database so that the Finality Polling Process can monitor the submitted type 2 transaction for confirmation.
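A minimal sketch of this construction follows, assuming a Type2Message struct and serialize helper mirroring Table 20.2; the proof-of-work witness computation and the appendEspressoSubmittedTxn helper are placeholders, not part of the actual design.

// Sketch only: build and submit the type 2 transaction once all chunks are confirmed.
func (s *TransactionStreamer) submitType2Tx(chunks PendingChunkRefs, tx PendingType2Tx) error {
    // Placeholder: the proof-of-work witness scheme is not specified in this chapter.
    powWitness := computePoWWitness(tx, chunks)

    payload := serialize(Type2Message{
        Position:  tx.Position,
        Signature: tx.Signature,
        ChunkRefs: chunks.ChunkRefs,
        PoW:       powWitness,
        Data:      tx.Data,
    })

    ctx := context.TODO() // context plumbing elided for brevity
    hash, err := s.espressoClient.SubmitTransaction(ctx, espressoTypes.Transaction{
        Payload:   payload,
        Namespace: s.chainConfig.ChainID.Uint64(),
    })
    if err != nil {
        return err
    }

    now := time.Now()
    // Store the new pending transaction so the Finality Polling Process can track it.
    return s.appendEspressoSubmittedTxn(SubmittedEspressoTx{
        Hash:        hash.String(),
        Payload:     payload,
        SubmittedAt: &now,
    })
}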

20.6 Task 2: Espresso → L1

The batch poster will use the Nitro Espresso streamer, which is analogous to the upstream transaction streamer, to read an ordered stream of messages from Espresso. The task of relaying messages from HotShot to L1 then becomes a straightforward adaptation of maybePostSequencerBatch.

20.6.1 MaybePostSequencerBatch

maybePostSequencerBatch will use the EspressoStreamer instead of the TransactionStreamer.

In order to account for the batch cache being reset by any errors in maybePostSequencerBatch, we will check whether it is nil at the start of maybePostSequencerBatch. If it is, we will reset the Espresso streamer to the last checkpoint, so that our state constantly resyncs with L1.

Listing 20.6: Batch building loop utilizing espresso streamer
for {
    msg, err := b.espressoStreamer.Next()
    if err != nil {
        return false, fmt.Errorf("error getting message from streamer: %w", err)
    }
    success, err := b.building.segments.AddMessage(msg)
    if err != nil {
        // Clear our cache
        b.building = nil
        return false, fmt.Errorf("error adding message to batch: %w", err)
    }
    if !success {
        // this batch is full
        if !config.WaitForMaxDelay {
            forcePostBatch = true
        }
        b.building.haveUsefulMessage = true
        if b.building.firstUsefulMsg == nil {
            b.building.firstUsefulMsg = msg
        }
        break
    }
    if b.building.firstUsefulMsg is past the max delay {
        // the batch has a message older than the max delay, we should attempt to post
        break
    }
}

Here, b.building.segments.AddMessage() is the vanilla Nitro function that adds messages to the batch currently being built.

b.fetchBatchPositions will be a function that queries the inbox tracker's database to see what the latest message position was, and filters logs emitted by the sequencer inbox contract for events related to new HotShot heights it has seen.

When the batch is closed, we will send it to the SequencerInbox.

20.6.2 Submission To Sequencer Inbox

Signing

Batches will have an additional signature, produced with the secp256k1 ephemeral key generated by the batch poster process when it is started, as outlined in the startup section. This signature will be verified in the Nitro sequencer inbox as outlined in the Arbitrum Nitro sequencer inbox section.

To do so, we will use the crypto.Sign() function provided by the crypto package of geth. The digest provided to Sign() will be the result of calling keccak256() on the calldata and metadata provided to addSequencerL2Batch(). In the case of blobs, the digest will instead be the result of calling keccak256() on the blobHashes and metadata provided to addSequencerL2BatchFromBlobs().

Listing 20.7: Attested key signature
func (b *BatchPoster) SignBatch(data []byte) ([]byte, error) {
    digest := crypto.Keccak256(data)

    signature, err := crypto.Sign(digest, b.attestedPrivateKey)
    if err != nil {
        return nil, err
    }
    return signature, nil
}

Checkpoint

Along with the batch and signature, we also persist on L1 a checkpoint of the Espresso streamer state, allowing the batcher after a restart, or any other batcher running concurrently, to sync their own streamer with the state of the L1 after every batch post.

A checkpoint consists of the Nitro message position and an Espresso block number (specifically, the lowest block number associated with any message in the streamer’s buffer). The message position of the last message in the batch is already encoded in the batch and recorded by the Arbitrum Nitro sequencer inbox . We get the appropriate Espresso block number from the Espresso streamer.

The Espresso block number is prepended to the byte array containing the signature. This is to avoid having to change the ABI of the SequencerInbox.
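For illustration, the encoding could look like the following; the 8-byte big-endian prefix is an assumption of this sketch, and the actual width and byte order are whatever the SequencerInbox expects.

// Sketch only: prepend the Espresso block number to the signature bytes so the
// SequencerInbox ABI does not need to change. Width and endianness here are
// assumptions of this example.
func encodeSignatureWithCheckpoint(espressoBlock uint64, signature []byte) []byte {
    out := binary.BigEndian.AppendUint64(nil, espressoBlock)
    return append(out, signature...)
}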

20.6.3 Resetting

It is possible that the Nitro Espresso streamer becomes out of sync with the state of the inbox contract on L1. That is, the message position of the streamer may differ from the last message sent to L1. In this case, the batcher needs to resync its streamer state to a snapshot consistent with what is on L1. Notably, this is necessary every time the batcher starts up, but can also occur at runtime due to various exceptional cases.

This is possible, for example, if the batcher restarts, or if another batcher running concurrently posts a batch that this batcher has not yet observed.

All of these cases are handled by resetting the Nitro Espresso streamer to a checkpoint which was posted to L1 along with the last successful batch posted. This ensures that the streamer is now in sync with L1, and we can continue batch posting as normal from there.

Finding this checkpoint requires finding the last time a batch was committed to L1 via the Arbitrum Nitro sequencer inbox . This is done by scanning backwards from the latest L1 block looking for an event emitted from the inbox contract with the signature LastHotshotHeight(uint256). This event is emitted each time a new batch is committed, and it gives us the Espresso block height to use in constructing the Espresso streamer snapshot. The other piece of the snapshot is the Nitro message position, which can be read from the contract storage at the L1 block number where the event was emitted.
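A sketch of this backward scan using go-ethereum's log filtering follows; the scan window size and the helper name are illustrative, and types here refers to go-ethereum's core/types package.

// Sketch only: scan backwards from the latest L1 block for the most recent
// LastHotshotHeight(uint256) event emitted by the sequencer inbox.
var lastHotshotHeightTopic = crypto.Keccak256Hash([]byte("LastHotshotHeight(uint256)"))

func findLastCheckpointEvent(ctx context.Context, client *ethclient.Client, inbox common.Address, latest uint64) (*types.Log, error) {
    const window = uint64(1000) // scan in fixed-size chunks of L1 blocks (illustrative)
    for end := latest; ; {
        start := uint64(0)
        if end > window {
            start = end - window
        }
        logs, err := client.FilterLogs(ctx, ethereum.FilterQuery{
            FromBlock: new(big.Int).SetUint64(start),
            ToBlock:   new(big.Int).SetUint64(end),
            Addresses: []common.Address{inbox},
            Topics:    [][]common.Hash{{lastHotshotHeightTopic}},
        })
        if err != nil {
            return nil, err
        }
        if len(logs) > 0 {
            return &logs[len(logs)-1], nil // most recent matching event
        }
        if start == 0 {
            return nil, nil // no event found: fall back to the configured genesis snapshot
        }
        end = start - 1
    }
}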

It is possible that no such event will be found, if the batcher is starting for the first time since the rollup enabled the Espresso integration. In this case the batcher config contains the Espresso streamer snapshot from the "genesis" of the Espresso integration.
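For example, the configuration might carry that snapshot as two fields matching the checkpoint contents described above (the names are assumptions of this sketch):

// Sketch only: a "genesis" snapshot for the Espresso streamer, used when no
// LastHotshotHeight event has ever been emitted. Field names are illustrative.
type EspressoStreamerSnapshot struct {
    NitroMessagePos  uint64 // next Nitro message position to read
    EspressoBlockNum uint64 // Espresso block height to start reading from
}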