Emerald prover #822
Conversation
Walkthrough

Adds AltFee transaction support (0x7f) with encoding/signing, Emerald hardfork config, a queued batch execution service (/execute_batch + Executor), shadow-prove execution wiring, workspace Cargo reflow, deploy-config key updates, test fixtures and helper scripts, and small executor/verifier integrations.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant Server
    participant Queue as EXECUTE_QUEUE
    participant Executor
    participant RPC as L2_RPC
    participant Verifier as Rust_Prover
    Client->>Server: POST /execute_batch (ExecuteRequest)
    Server->>Server: validate request
    Server->>Queue: enqueue ExecuteRequest
    Server-->>Client: 202 STARTED
    loop Executor polling
        Executor->>Queue: lock & pop request
        alt request available
            Executor->>RPC: fetch block traces (start..end)
            RPC-->>Executor: BlockTrace[]
            Executor->>Executor: optionally save traces to disk
            Executor->>Verifier: run prover (no-proof mode) with traces
            Verifier-->>Executor: success / error
        else no request
            Executor->>Executor: sleep & retry
        end
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 0
🧹 Nitpick comments (3)
prover/crates/core/src/hardfork.rs (1)
17-51: Verify EMERALD hardfork activation block heights.

The EMERALD hardfork is set to block 0 for all networks (devnet, testnet, and mainnet). This enables it from genesis, which may not be the intended behavior for testnet and mainnet if this represents a future hardfork. Confirm that:
- Block 0 is appropriate for all three networks, or
- Different activation heights should be configured for testnet/mainnet
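For illustration, a minimal sketch of what distinct per-network activation heights could look like, assuming a `HARDFORK_HEIGHTS`-style lookup keyed by chain ID — every ID and height below is a hypothetical placeholder, not the crate's real configuration:

```rust
// Hypothetical sketch only: chain IDs and heights are placeholders.
const EMERALD_HEIGHTS: [(u64, u64); 3] = [
    (900_001, 0),         // devnet: active from genesis
    (900_002, 1_500_000), // testnet: future activation height
    (900_003, 2_000_000), // mainnet: future activation height
];

fn emerald_block(chain_id: u64) -> Option<u64> {
    EMERALD_HEIGHTS
        .iter()
        .find(|(id, _)| *id == chain_id)
        .map(|(_, height)| *height)
}

fn main() {
    assert_eq!(emerald_block(900_001), Some(0));
    assert_eq!(emerald_block(900_003), Some(2_000_000));
}
```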
Additionally, line 53's comment refers to "Scroll networks" but this appears to be Morph configuration.
```diff
-/// Hardfork configuration for Scroll networks.
+/// Hardfork configuration for Morph networks.
 #[derive(Debug, Default, Copy, Clone)]
 pub struct HardforkConfig {
```

prover/testdata/verify.sh (2)
7-52: Consider adding signal handling for cleanup.

The script uses `set -e` for error handling but doesn't trap signals for cleanup. If a user interrupts the script (CTRL+C), the temporary file may not be cleaned up. Consider adding:
```bash
#!/bin/bash
set -e

# Cleanup function
cleanup() {
    if [ -f "$TEMP_FILE" ]; then
        rm -f "$TEMP_FILE"
    fi
}

# Register cleanup on exit and interruption
trap cleanup EXIT INT TERM

# ... rest of script
```
115-139: Line 126 path is valid; specify `--bin prove` for clarity, though the default-run is already set.

The `./configs/4844_trusted_setup.txt` file exists and is properly committed (409 KB). The workspace has a `default-run = "prove"` configured in `prover/bin/host/Cargo.toml`, so `cargo run --release` will correctly execute the `prove` binary by default. However, explicitly specifying the binary improves clarity and maintainability:

```diff
-RUST_LOG=info TRUSTED_SETUP_4844=./configs/4844_trusted_setup.txt cargo run --release -- --block-path "$OUTPUT_FILE_FOR_RUST"
+RUST_LOG=info TRUSTED_SETUP_4844=./configs/4844_trusted_setup.txt cargo run --release --bin prove -- --block-path "$OUTPUT_FILE_FOR_RUST"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`prover/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- `prover/Cargo.toml` (2 hunks)
- `prover/crates/core/src/hardfork.rs` (6 hunks)
- `prover/crates/primitives/src/types/tx.rs` (1 hunks)
- `prover/testdata/verify.sh` (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build
- GitHub Check: Analyze (rust)
- GitHub Check: Analyze (go)
🔇 Additional comments (9)
prover/crates/primitives/src/types/tx.rs (1)
108-112: LGTM! Defensive deserialization for authorization_list.

The addition of `#[serde(default)]` complements the existing `DefaultOnNull` attribute, providing comprehensive handling for both missing and null authorization lists. This ensures backward compatibility when deserializing transaction traces that predate EIP-7702.
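A minimal standalone sketch of the behavior being praised — the field name is borrowed from the trace type, everything else is illustrative. `#[serde(default)]` covers a missing key, while a `DefaultOnNull`-style adapter is what additionally handles an explicit `null`:

```rust
use serde::Deserialize;

#[derive(Debug, Default, Deserialize)]
struct TxTraceLike {
    // Missing key -> Default::default(); an explicit null would still need
    // a DefaultOnNull-style adapter on top of this attribute.
    #[serde(default)]
    authorization_list: Vec<String>,
}

fn main() {
    let t: TxTraceLike = serde_json::from_str("{}").unwrap();
    assert!(t.authorization_list.is_empty());
}
```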
prover/crates/core/src/hardfork.rs (4)

54-61: LGTM! Emerald block field added consistently.

The `emerald_block` field follows the established pattern for hardfork block tracking.
65-81: LGTM! Emerald block initialization follows established pattern.

The `emerald_block` field is correctly initialized from `HARDFORK_HEIGHTS` using the same pattern as other hardfork blocks.
101-105: LGTM! Setter implementation is consistent.

The `set_emerald_block` setter follows the established builder pattern used by other hardfork setters.
108-117: EMERALD fork ordering is correct but depends on proper configuration.

The logic correctly positions EMERALD as the latest hardfork, returning it for all blocks >= `emerald_block`. However, with `emerald_block` set to 0 in the current configuration, this means all blocks will use the EMERALD spec, effectively skipping earlier fork logic.

Ensure that fork activation heights are properly configured before deployment, as noted in the earlier comment about `HARDFORK_HEIGHTS`.
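A sketch of the ordering concern — fork names and fields below are illustrative, not the crate's actual ones. Because the newest fork is checked first, an `emerald_block` of 0 shadows every earlier branch:

```rust
#[derive(Debug, PartialEq)]
enum Spec {
    PreEmerald,
    Emerald,
}

struct Config {
    emerald_block: u64,
}

impl Config {
    fn spec_at(&self, block: u64) -> Spec {
        // Newest fork checked first: with emerald_block == 0 this arm
        // matches every block, so earlier fork logic is never reached.
        if block >= self.emerald_block {
            Spec::Emerald
        } else {
            Spec::PreEmerald
        }
    }
}

fn main() {
    let cfg = Config { emerald_block: 0 };
    assert_eq!(cfg.spec_at(0), Spec::Emerald);
    assert_eq!(cfg.spec_at(1_000_000), Spec::Emerald);
}
```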
prover/testdata/verify.sh (3)

1-33: LGTM! Script setup and validation are solid.

The script properly validates dependencies, arguments, and block number constraints.
84-113: LGTM! JSON validation and summary are well-implemented.

The script properly validates the generated JSON, provides clear feedback, and handles cleanup appropriately.
54-82: Verify RPC method and JSON structure.

Line 62 uses the `morph_getBlockTraceByNumberOrHash` RPC method - confirm this method exists in the Morph RPC specification.

Lines 72-74 wrap each trace in a nested array, creating the structure `[[trace1], [trace2], ...]`. Verify this is the expected format for the Rust verifier. Typically, block traces would be flattened as `[trace1, trace2, ...]`.

```bash
#!/bin/bash
# Search for usage of this RPC method in the codebase
rg -n "morph_getBlockTraceByNumberOrHash" --type rust --type go --type js -C 2

# Look for documentation or tests that show expected JSON structure
fd -e md -e txt -e json . | xargs rg -l "getBlockTrace" | head -5
```

prover/Cargo.toml (1)
98-118: The emerald branch exists and is actively maintained.

Verification confirms that the `emerald` branch exists in the morph-l2/revm repository and has recent commits as of November 27, 2025, including protocol updates ("add osaka", "remove osaka spec") and maintenance work ("cargo fmt"), indicating ongoing active development and maintenance of this branch.
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
prover/testdata/mainnet_batch_traces.json (1)
255-274: Fix invalid JSON: missing comma after `authorizationList`

The new `authorizationList` field is not followed by a comma:

```json
"accessList": [],
"authorizationList":[]
"v": "0x0",
```

This makes the JSON invalid and will cause parsing to fail. You should add a comma after `authorizationList`:

```diff
-    "accessList": [],
-    "authorizationList":[]
-    "v": "0x0",
+    "accessList": [],
+    "authorizationList": [],
+    "v": "0x0",
```

prover/bin/host/src/main.rs (1)
13-26: Tighten error handling around default input path and `prove` invocation

- The default `block_path` (`./testdata/altfeetx/trace.json`) does not exist in the repository and will cause an immediate panic because both `File::open` and `serde_json::from_reader` are unwrapped in `load_trace`. Either provide a valid fixture file, document the required path structure, or return a proper error from `load_trace` instead of panicking.
- `prove(block_traces, args.prove).unwrap()` will abort without context on any proving error. Use a proper error handler:

```rust
if let Err(e) = prove(block_traces, args.prove) {
    eprintln!("prove failed: {e:?}");
    std::process::exit(1);
}
```

- Replace the debug `println!` with `log::info!()` since `env_logger` is already initialized.

prover/bin/shadow-prove/src/lib.rs (1)
15-21: Fix mismatch between env var name and panic message for `SHADOW_PROVING_PROVER_RPC`

The static uses:

```rust
Lazy::new(|| var("SHADOW_PROVING_PROVER_RPC").expect("Cannot detect PROVER_RPC env var"));
```

but the panic text refers to `PROVER_RPC`. This can confuse debugging. Suggest aligning the message with the actual env name, e.g.:

```diff
-Lazy::new(|| var("SHADOW_PROVING_PROVER_RPC").expect("Cannot detect PROVER_RPC env var"));
+Lazy::new(|| var("SHADOW_PROVING_PROVER_RPC")
+    .expect("Cannot detect SHADOW_PROVING_PROVER_RPC env var"));
```

Also applies to: 23-30
prover/crates/primitives/src/types/tx.rs (1)
617-632: Panic on decode failure—propagate the error instead.

The `unwrap()` on `TxEnvelope::decode_2718` will panic if decoding fails. Propagate the error using `?` or `map_err`.

```diff
 impl Decodable for TypedTransaction {
     fn decode(buf: &mut &[u8]) -> alloy::rlp::Result<Self> {
         if buf.is_empty() {
             return Err(alloy::rlp::Error::InputTooShort);
         }
         let tx_type = *buf.first().unwrap_or(&0u8);
         match tx_type {
             0x7f => {
-                return Ok(TypedTransaction::AltFee(
+                Ok(TypedTransaction::AltFee(
                     TxAltFee::decode_signed_fields(&mut &buf[1..])
                         .map_err(|_| alloy::rlp::Error::Custom("decode TxAltFee error"))?,
                 ))
             }
-            _ => return Ok(TypedTransaction::Enveloped(TxEnvelope::decode_2718(buf).unwrap())),
-        };
+            _ => Ok(TypedTransaction::Enveloped(
+                TxEnvelope::decode_2718(buf)
+                    .map_err(|_| alloy::rlp::Error::Custom("decode TxEnvelope error"))?,
+            )),
+        }
     }
 }
```

Also removed unnecessary `return` statements and trailing semicolon for cleaner match arms.
🧹 Nitpick comments (19)
contracts/src/deploy-config/qanetl1.ts (1)
17-17: Confirm new `programVkey` intent and static-scan handling

This updates `programVkey` to a new shared value used across other deploy configs. Please double‑check it matches the verifier/prover setup you intend to deploy with, and that the previous key is no longer referenced on-chain or in tooling.

Gitleaks flags this as a "Generic API Key"; if this is indeed a public verification key (as the name suggests), it's fine to keep in repo, but consider adding a short comment or adjusting your secret‑scanning ignore rules for this field/path to avoid future noise.
If you expect to rotate this key again, you might also consider centralizing it in a single shared config module that all networks import from to avoid divergence.
contracts/src/deploy-config/hoodi.ts (1)
20-20: Confirm Hoodi's verifier key and consider centralizing shared constant

Hoodi's `programVkey` now matches the key used for other networks. Please confirm this is the correct verifier key for the Hoodi deployment and that no components still rely on the previous value.

Given this constant is now duplicated across multiple configs, you might consider exporting it from a shared module and importing it here to reduce the risk of divergence later.
prover/crates/utils/src/macros.rs (1)
38-91: Dev logging macros are now complete no‑ops in dev/test builds

Commenting out the `$crate::tracing::...` calls leaves `dev_trace!`, `dev_info!`, `dev_error!`, `dev_debug!`, and `dev_warn!` as silent no‑ops even when `feature = "dev"` or `test` is enabled. If this is meant as a temporary measure to reduce log noise, that's fine; otherwise, it may surprise callers expecting logs in dev/test.

If the goal is to permanently disable these logs, consider either:
- Removing the macros entirely where unused, or
- Adding a clear comment (and possibly a feature flag) explaining that they intentionally do nothing now, so future readers don’t assume tracing still happens.
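If the second option is chosen, one sketch of making the intent explicit — assuming a `tracing` dependency, as the originals used `$crate::tracing`, with the gating shown here being illustrative — is a macro body that only expands under the `dev` feature or tests:

```rust
// Illustrative only: logs in dev/test builds, documented no-op otherwise.
#[macro_export]
macro_rules! dev_info {
    ($($arg:tt)*) => {
        #[cfg(any(feature = "dev", test))]
        {
            tracing::info!($($arg)*);
        }
    };
}
```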
prover/testdata/verify.sh (1)
35-41: Make output paths independent of the caller's working directory

`OUTPUT_DIR` and `OUTPUT_FILE` are defined relative to the current working directory, but later you `cd "$(dirname "$0")/.."` and then construct `OUTPUT_FILE_FOR_RUST="./testdata/$OUTPUT_FILE"`. This only resolves correctly if the script was originally run from `prover/testdata`; run from any other directory (e.g. repo root with `./prover/testdata/verify.sh`) and the generated JSON will live in one place while the prover looks in another.

To make this robust regardless of where the script is invoked from, anchor paths to the script directory, e.g.:
```bash
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
OUTPUT_DIR="${SCRIPT_DIR}/generated"
OUTPUT_FILE="${OUTPUT_DIR}/block_traces_${START_BLOCK}_${END_BLOCK}.json"
TEMP_FILE="${OUTPUT_DIR}/temp_traces.json"
# ...
cd "${SCRIPT_DIR}/.."
OUTPUT_FILE_FOR_RUST="testdata/generated/block_traces_${START_BLOCK}_${END_BLOCK}.json"
```

This guarantees both the JSON generation and the Rust prover use the same on-disk path.
Also applies to: 121-126
prover/bin/server/src/lib.rs (1)
4-6: Consider centralizing `read_env_var` instead of duplicating it

This `read_env_var` implementation is identical to the versions in `shadow-prove::util`, `challenge::util`, `morph-executor::utils`, and tests. Pulling it into a shared helper crate/module and reusing it here would reduce duplication and keep env parsing behavior consistent.

Also applies to: 24-28
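A sketch of the shared helper's shape, modeled on the duplicated copies — the exact signature in each module may differ slightly, so treat this as an assumption, not the crate's actual API:

```rust
use std::{env::var, fmt::Debug, str::FromStr};

/// Parse an env var, falling back to `default` when unset.
pub fn read_env_var<T: Clone + FromStr>(var_name: &str, default: T) -> T
where
    <T as FromStr>::Err: Debug,
{
    var(var_name)
        .map(|v| v.parse::<T>().expect("failed to parse env var"))
        .unwrap_or(default)
}
```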
prover/bin/shadow-prove/src/util.rs (1)
5-17: Avoid redundant string cloning in `call_prover`

You already own both `prover_rpc` and `param`, so the extra clones can be dropped:

```diff
     let prover_rpc = SHADOW_PROVING_PROVER_RPC.clone();

     let client = reqwest::blocking::Client::new();
-    let url = prover_rpc.to_owned() + function;
+    let url = prover_rpc + function;

-        .body(param.clone())
+        .body(param)
```
prover/crates/morph-executor/client/src/lib.rs (1)
9-13: Propagate all verifier/tx build errors instead of panicking withunwrap()Switching
EVMVerifier::verifyto?is good; howeverBlobVerifier::verifyandtx.try_build_typed_tx()still useunwrap(), so malformed blobs or tx traces will panic the client:let (versioned_hash, batch_data) = BlobVerifier::verify(&input.blob_info, num_blocks).unwrap(); … .flat_map(|tx| tx.try_build_typed_tx().unwrap().rlp())Consider propagating these as errors too (e.g. via
?and collecting aResult<Vec<u8>, _>), soverifyconsistently returnsErr(anyhow::Error)instead of aborting the process on invalid data.Also applies to: 29-35, 44-45
prover/crates/morph-executor/client/src/types/blob.rs (1)
78-83: Update the comment to include 0x7f.

The comment on line 78 still references only `0x01, 0x02, 0x04`, but now `0x7f` (AltFee) is also supported.

```diff
 } else {
-    // Support transaction types: 0x01, 0x02, 0x04
+    // Support transaction types: 0x01, 0x02, 0x04, 0x7f (AltFee)
     if first_byte != 0x01 && first_byte != 0x02 && first_byte != 0x04 && first_byte != 0x7f {
```

prover/bin/server/src/server.rs (1)
180-226: Implementation looks good, minor consideration for deduplication.

The handler correctly validates input and enqueues requests. Consider extracting common validation logic shared with `add_pending_req` (empty check, block range validation, RPC URL format, queue lock timeout) into a helper to reduce duplication.
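One possible shape for such a helper — the signature and error type are illustrative, not the handler's actual API, and the queue-lock-timeout check is omitted for brevity:

```rust
// Illustrative helper: centralizes the checks both handlers repeat.
fn validate_execute_request(start_block: u64, end_block: u64, rpc: &str) -> Result<(), String> {
    if start_block == 0 || end_block < start_block {
        return Err(format!("invalid block range: {start_block}..{end_block}"));
    }
    if !rpc.starts_with("http://") && !rpc.starts_with("https://") {
        return Err(format!("invalid rpc url: {rpc}"));
    }
    Ok(())
}
```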
prover/bin/shadow-prove/src/main.rs (1)

16-16: Typo in function name: `exceut_batch` should be `execute_batch`.

The function name `exceut_batch` appears to be a typo for `execute_batch`. This affects readability and maintainability.
execute.rsand updating all references.prover/bin/shadow-prove/src/execute.rs (1)
12-16: `ExecuteResult` is defined but unused.

This struct is defined but not used in the current implementation. The response from `call_prover` is not parsed. If you intend to validate the execution result, consider deserializing and checking the response.
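If the result should be validated, a hedged sketch of doing so — the response shape below is assumed, not confirmed by the server code, so align the fields with what the prover actually returns:

```rust
use serde::Deserialize;

// Assumed response shape; adjust to the prover server's real payload.
#[derive(Deserialize)]
struct ExecuteResult {
    success: bool,
    error: Option<String>,
}

fn check_response(body: &str) -> anyhow::Result<()> {
    let result: ExecuteResult = serde_json::from_str(body)?;
    if !result.success {
        anyhow::bail!("execution failed: {:?}", result.error);
    }
    Ok(())
}
```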
prover/bin/shadow-prove/src/shadow_rollup.rs (1)

171-174: Consider documenting why 3 logs are required.

The requirement for at least 3 `commit_batch` logs may be confusing to future maintainers. Consider adding a comment explaining the rationale (e.g., needing the previous, current, and next batch commits to extract the correct batch header).

prover/bin/server/src/execute.rs (2)
67-70: Comment says "spawn" but execution is sequential.

The comment mentions spawning an async task, but `execute_batch` is awaited inline, meaning batches are processed sequentially. If parallel processing is intended, use `tokio::spawn`. Otherwise, update the comment to reflect the sequential design.

```diff
-    // Spawn async task to handle the execution
-    let provider = self.provider.clone();
-    execute_batch(req, provider).await;
+    // Execute batch sequentially
+    execute_batch(req, self.provider.clone()).await;
```
112-112: Prefer inclusive range syntax for clarity.

Using `start_block..=end_block` is more idiomatic than `start_block..end_block + 1`.

```diff
-    for block_num in start_block..end_block + 1 {
+    for block_num in start_block..=end_block {
```

prover/crates/core/src/executor/mod.rs (2)
357-368: Consider named constants for test expectations.

The magic numbers in assertions (e.g., `999999999999999000`, `199916000`, `84000`) are hard to trace. Consider defining named constants or computing expected values inline to clarify the test logic.

```diff
+    // Expected: initial 1 ETH (10^18 wei) minus transferred 1000 wei
+    let expected_eth_balance = 1_000_000_000_000_000_000u64 - 1000;
+    // Expected: initial 200M tokens minus gas fee (84000 tokens)
+    let expected_erc20_balance = 200_000_000u64 - 84_000;
+    let expected_gas_fee = 84_000u64;
+
     assert!(
-        account_from_balance.to::<u64>() == 999999999999999000,
+        account_from_balance.to::<u64>() == expected_eth_balance,
         "Only 1000wei must have been transferred."
     );
-
-    assert!(erc20_value.to::<u64>() == 199916000, "Gas fees should use: 84,000");
-
-    assert!(erc20_value_vault.to::<u64>() == 84000, "Gas fees should use: 84,000");
+    assert!(erc20_value.to::<u64>() == expected_erc20_balance, "Gas fees should use: 84,000");
+    assert!(erc20_value_vault.to::<u64>() == expected_gas_fee, "Gas fees should use: 84,000");
```
328-331: `pub` visibility unnecessary for test-only constants.

These constants are defined within a `#[cfg(test)]` module and don't need `pub` visibility unless used by other test modules.

```diff
-    pub const L2_FEE_VAULT: Address = address!("0e87cd091e091562F25CB1cf4641065dA2C049F5");
-    pub const L2_TOKEN_REGISTRY_ADDRESS: Address =
+    const L2_FEE_VAULT: Address = address!("0e87cd091e091562F25CB1cf4641065dA2C049F5");
+    const L2_TOKEN_REGISTRY_ADDRESS: Address =
         address!("5300000000000000000000000000000000000021");
```

prover/crates/primitives/src/types/tx.rs (1)
202-204: Redundant `.to()` call on `fee_limit`.

The `fee_limit` field is `Option<U256>`, and `unwrap_or_default()` already returns `U256`. The `.to()` call is unnecessary.

```diff
 fn fee_limit(&self) -> U256 {
-    self.fee_limit.unwrap_or_default().to()
+    self.fee_limit.unwrap_or_default()
 }
```

prover/crates/primitives/src/types/tx_alt_fee.rs (2)
341-347: Remove commented-out code.

These commented methods appear to be development leftovers. Remove them to keep the codebase clean.

```diff
     fn authorization_list(&self) -> Option<&[SignedAuthorization]> {
         None
     }
-    // fn fee_token_id(&self) -> u16 {
-    //     0x7f
-    // }
-    // fn fee_limit(&self) -> u64 {
-    //     0x7f
-    // }
 }
```
407-417: Missing payload length validation after decoding.

The `Decodable` implementation checks that the buffer has enough bytes before decoding, but doesn't verify that exactly `payload_length` bytes were consumed. This could accept malformed inputs with trailing data.

```diff
 impl Decodable for TxAltFee {
     fn decode(data: &mut &[u8]) -> alloy::rlp::Result<Self> {
         let header = Header::decode(data)?;
         let remaining_len = data.len();
         if header.payload_length > remaining_len {
             return Err(alloy::rlp::Error::InputTooShort);
         }
-        Self::decode_fields(data)
+        let tx = Self::decode_fields(data)?;
+        let consumed = remaining_len - data.len();
+        if consumed != header.payload_length {
+            return Err(alloy::rlp::Error::ListLengthMismatch {
+                expected: header.payload_length,
+                got: consumed,
+            });
+        }
+        Ok(tx)
     }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`prover/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (41)
- `contracts/src/deploy-config/holesky.ts` (1 hunks)
- `contracts/src/deploy-config/hoodi.ts` (1 hunks)
- `contracts/src/deploy-config/l1.ts` (1 hunks)
- `contracts/src/deploy-config/qanetl1.ts` (1 hunks)
- `contracts/src/deploy-config/sepolia.ts` (1 hunks)
- `contracts/src/deploy-config/testnetl1.ts` (1 hunks)
- `go-ethereum` (1 hunks)
- `prover/.gitignore` (1 hunks)
- `prover/Cargo.toml` (5 hunks)
- `prover/bin/host/build.rs` (0 hunks)
- `prover/bin/host/src/main.rs` (2 hunks)
- `prover/bin/server/src/execute.rs` (1 hunks)
- `prover/bin/server/src/lib.rs` (1 hunks)
- `prover/bin/server/src/server.rs` (5 hunks)
- `prover/bin/shadow-prove/src/execute.rs` (1 hunks)
- `prover/bin/shadow-prove/src/lib.rs` (2 hunks)
- `prover/bin/shadow-prove/src/main.rs` (4 hunks)
- `prover/bin/shadow-prove/src/shadow_rollup.rs` (4 hunks)
- `prover/bin/shadow-prove/src/util.rs` (2 hunks)
- `prover/crates/core/src/executor/mod.rs` (2 hunks)
- `prover/crates/morph-executor/client/src/lib.rs` (1 hunks)
- `prover/crates/morph-executor/client/src/types/blob.rs` (1 hunks)
- `prover/crates/morph-executor/client/src/verifier/blob_verifier.rs` (1 hunks)
- `prover/crates/morph-executor/client/src/verifier/evm_verifier.rs` (2 hunks)
- `prover/crates/primitives/src/lib.rs` (4 hunks)
- `prover/crates/primitives/src/types/mod.rs` (1 hunks)
- `prover/crates/primitives/src/types/tx.rs` (12 hunks)
- `prover/crates/primitives/src/types/tx_alt_fee.rs` (1 hunks)
- `prover/crates/utils/src/macros.rs` (5 hunks)
- `prover/testdata/altfeetx/block_traces_329475_329475.json` (1 hunks)
- `prover/testdata/altfeetx/block_traces_329791_329791.json` (1 hunks)
- `prover/testdata/altfeetx/trace_call.json` (1 hunks)
- `prover/testdata/altfeetx/trace_create.json` (1 hunks)
- `prover/testdata/altfeetx/trace_slot.json` (1 hunks)
- `prover/testdata/altfeetx/trace_slot_another.json` (1 hunks)
- `prover/testdata/mainnet_batch_traces.json` (1 hunks)
- `prover/testdata/verify.sh` (1 hunks)
- `prover/tests/algebra/host/build.rs` (0 hunks)
- `prover/tests/bls12381/host/build.rs` (0 hunks)
- `prover/tests/keccak256/host/build.rs` (0 hunks)
- `prover/tests/zstd/host/build.rs` (0 hunks)
💤 Files with no reviewable changes (5)
- prover/tests/keccak256/host/build.rs
- prover/bin/host/build.rs
- prover/tests/bls12381/host/build.rs
- prover/tests/algebra/host/build.rs
- prover/tests/zstd/host/build.rs
✅ Files skipped from review due to trivial changes (7)
- prover/crates/morph-executor/client/src/verifier/blob_verifier.rs
- contracts/src/deploy-config/holesky.ts
- prover/testdata/altfeetx/trace_slot.json
- prover/testdata/altfeetx/block_traces_329475_329475.json
- prover/testdata/altfeetx/trace_slot_another.json
- prover/testdata/altfeetx/block_traces_329791_329791.json
- go-ethereum
🧰 Additional context used
🧬 Code graph analysis (11)
prover/bin/server/src/lib.rs (1)
- prover/crates/morph-executor/client/src/verifier/evm_verifier.rs (1): `execute` (23-70)

prover/crates/morph-executor/client/src/lib.rs (1)
- prover/crates/morph-executor/client/src/verifier/evm_verifier.rs (1): `verify` (17-20)

prover/bin/host/src/main.rs (1)
- prover/bin/host/src/lib.rs (1): `prove` (17-112)

prover/bin/server/src/server.rs (1)
- prover/bin/server/src/execute.rs (1): `new` (35-41)

prover/bin/shadow-prove/src/lib.rs (5)
- prover/bin/server/src/lib.rs (1): `read_env_var` (24-28)
- prover/bin/shadow-prove/src/util.rs (1): `read_env_var` (37-41)
- prover/bin/challenge/src/util.rs (1): `read_env_var` (35-39)
- prover/crates/morph-executor/utils/src/lib.rs (1): `read_env_var` (3-7)
- prover/tests/algebra/host/src/main.rs (1): `read_env_var` (60-64)

prover/bin/server/src/execute.rs (2)
- prover/bin/server/src/lib.rs (1): `read_env_var` (24-28)
- prover/bin/shadow-prove/src/util.rs (1): `read_env_var` (37-41)

prover/bin/shadow-prove/src/execute.rs (1)
- prover/bin/shadow-prove/src/util.rs (1): `call_prover` (5-35)

prover/crates/primitives/src/lib.rs (2)
- prover/crates/primitives/src/types/tx.rs (10): `fee_token_id` (198-200, 279-281, 598-604), `fee_limit` (202-204, 283-285, 607-613), `chain_id` (174-176, 249-251, 289-295, 426-428)
- prover/crates/primitives/src/types/tx_alt_fee.rs (1): `chain_id` (282-284)

prover/bin/shadow-prove/src/main.rs (4)
- prover/bin/shadow-prove/src/execute.rs (1): `exceut_batch` (18-36)
- prover/bin/shadow-prove/src/util.rs (2): `read_env_var` (37-41), `read_parse_env` (43-50)
- oracle/oracle/batch.go (1): `BatchInfo` (32-39)
- node/derivation/batch_info.go (1): `BatchInfo` (48-63)

prover/bin/shadow-prove/src/shadow_rollup.rs (3)
- oracle/oracle/batch.go (1): `BatchInfo` (32-39)
- node/derivation/batch_info.go (1): `BatchInfo` (48-63)
- bindings/hardhat/types.go (1): `Log` (49-58)

prover/crates/primitives/src/types/tx_alt_fee.rs (3)
- prover/crates/primitives/src/types/tx.rs (22): `rlp` (558-566), `decode` (618-632), `signature` (194-196, 275-277), `encode_2718` (508-513), `chain_id` (174-176, 249-251, 289-295, 426-428), `gas_limit` (145-147, 220-222, 305-311, 434-436), `to` (165-172, 240-247, 353-359, 458-460), `ty` (137-139, 212-214, 377-383, 470-472), `encode` (488-495)
- prover/crates/primitives/src/lib.rs (15): `signature` (190-190, 423-425), `chain_id` (52-52, 175-175, 313-315, 403-405), `gas_limit` (61-61, 128-128, 153-153, 325-327, 379-381), `to` (172-172, 399-401), `ty` (147-147, 371-373)
- prover/crates/primitives/src/types/mod.rs (4): `chain_id` (194-196, 260-262), `gas_limit` (204-206, 272-274)
🪛 Gitleaks (8.30.0)
contracts/src/deploy-config/hoodi.ts
[high] 20-20: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
contracts/src/deploy-config/qanetl1.ts
[high] 17-17: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
contracts/src/deploy-config/testnetl1.ts
[high] 16-16: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
contracts/src/deploy-config/sepolia.ts
[high] 21-21: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
contracts/src/deploy-config/l1.ts
[high] 20-20: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test
- GitHub Check: build
- GitHub Check: Analyze (go)
- GitHub Check: Analyze (rust)
🔇 Additional comments (19)
contracts/src/deploy-config/sepolia.ts (1)
21-21: Keep `programVkey` consistent with verifier deployment & tame Gitleaks noise

This `programVkey` matches the new value used in other network configs, which is good for consistency. Please just confirm it lines up with the verifier actually deployed on Sepolia and that there's no remaining dependency on the old key.

As with the other configs, Gitleaks' "Generic API Key" finding here is probably benign if this is a public verification key, but you may want to whitelist this field or path to keep secret‑scanning signal high.
contracts/src/deploy-config/l1.ts (1)
20-20: Verify local L1 config uses the intended `programVkey`

The local L1 config now shares the same `programVkey` as other environments. Please confirm this matches the proving/verifier stack used in local tooling/tests so you don't get subtle mismatches between dev and remote networks.

Same note as elsewhere: Gitleaks' "Generic API Key" hit looks expected for a public verification key, but you may want to explicitly ignore this field/path in your secret‑scanning config.
contracts/src/deploy-config/testnetl1.ts (1)
16-16: Align Testnet L1 `programVkey` with deployed verifier

Testnet L1 is now wired to the same `programVkey` as the other configs. Please double‑check that the on‑chain verifier and any proving infra for this testnet are already using (or will use) this key, and that no tests still assume the old value.

Gitleaks' "Generic API Key" finding should be safe to ignore if this is a public verification key, but consider updating your secret‑scanning rules so this field doesn't generate recurring false positives.
prover/.gitignore (1)
27-29: LGTM!

The addition of `lib` and `proof` entries appropriately extends the ignore list to exclude generated build artifacts and prover outputs from version control, consistent with the existing pattern.

prover/Cargo.toml (4)
13-13: Workspace membership restructured appropriately.

The addition of test pattern membership and explicit excludes for `shadow-prove` and `challenge` binaries allows tests to benefit from workspace dependencies. This is a reasonable restructuring.

Also applies to: 15-15
148-149: Remove or clarify the misleading alloy-core/alloy-primitives patch rationale.

Lines 148-149 patch `alloy-core` and `alloy-primitives` from `scroll-tech/alloy-core` (v0.8.0 branch), not the main alloy package. The workspace still patches the main `alloy` to v0.3.0, while `prover/bin/shadow-prove/Cargo.toml` explicitly requires `alloy = { version = "0.8", ... }`. Verify whether this version mismatch is intentional (e.g., shadow-prove is excluded or has a workaround) or if the workspace should be updated to alloy v0.8.

Likely an incorrect or invalid review comment.
47-51: Verify custom sp1 fork branch and new feature flag.

The sp1 crates have been pinned to the custom `morph-1.0` branch from morph-l2/sp1, and `sp1-sdk` enables the `native-gnark` feature. Both the branch and feature are confirmed to exist on this fork.
43-43: Verify c-kzg 1.0.3 has the "no-threads" feature.

The review requests verification of features across scroll-tech dependencies:

- ✓ `poseidon-bn254` master branch: confirmed to have `bn254` feature
- ✓ `zktrie` main branch: confirmed to have `rs_zktrie` feature
- ? `c-kzg` 1.0.3: `no-threads` feature could not be verified due to network timeouts

Manual verification needed for the c-kzg "no-threads" feature to confirm it exists in version 1.0.3.
prover/testdata/altfeetx/trace_create.json (1)
40-61: AltFee trace fixture structure looks consistent with BlockTrace/TxTrace schema

The transaction entry and executionResults include `feeTokenID`/`feeLimit` and the usual block/header/storage fields, so this JSON should exercise the new AltFee path without obvious shape issues.

Also applies to: 508-515
prover/crates/primitives/src/types/mod.rs (1)
9-10: Exporting `tx_alt_fee` module is aligned with the new AltFee type usage

The doc comment and `pub mod tx_alt_fee;` cleanly expose the new AltFee transaction definitions to the rest of the crate.

prover/crates/morph-executor/client/src/verifier/evm_verifier.rs (1)
28-39: Explicitly executing the first block makes the verifier behavior clearer

Building the executor from `traces[0]` and then calling `executor.handle_block(&traces[0])?` before looping over `traces[1..]` makes it explicit that all traces (including the first) are executed. This matches the intent of the root comparison logic and doesn't introduce new edge‑case behavior (an empty slice already panicked in `build(&traces[0])`).

prover/bin/shadow-prove/src/main.rs (1)
302-303: Test updated correctly with `total_txn` field.

prover/bin/shadow-prove/src/shadow_rollup.rs (1)
357-358: Test correctly updated for new API.

prover/bin/server/src/execute.rs (1)
20-27: The `rpc` field in `ExecuteRequest` is unused.

The `rpc` field is defined but never used—the `Executor` always uses `PROVER_L2_RPC` from the environment. If per-request RPC configuration is intended, the field should be used when fetching block traces. Otherwise, consider removing it to avoid confusion.

prover/crates/core/src/executor/mod.rs (1)
123-124: LGTM!

The new `fee_token_id` and `fee_limit` fields are correctly populated from the transaction's accessors, aligning with the AltFee feature integration.

prover/crates/primitives/src/types/tx.rs (2)
26-27: LGTM!

The new `AltFee` variant is cleanly integrated into `TypedTransaction`, maintaining consistency with the existing enum structure.
288-408: LGTM!

The `Transaction` trait implementation correctly delegates all methods to the inner `TxAltFee` via `tx.tx()`, maintaining consistency with the existing pattern for `Enveloped` and `L1Msg` variants.

prover/crates/primitives/src/types/tx_alt_fee.rs (2)
1-72: LGTM!

The `TxAltFee` struct is well-documented and follows the established patterns for typed transactions. Field order in encoding/decoding is consistent, and the struct cleanly extends EIP-1559-style transactions with `fee_token_id` and `fee_limit`.
74-91: The `effective_tip_per_gas` method is provided by the `Transaction` trait from the alloy library, which `TxAltFee` implements (line 281). This method is inherited via the trait and does not need to be explicitly defined in the `TxAltFee` impl block. The code at line 549 in `tx.rs` will compile correctly.

Likely an incorrect or invalid review comment.
prover/bin/server/src/execute.rs

```rust
loop {
    thread::sleep(Duration::from_millis(12000));
```
Blocking thread::sleep in async context will block the runtime.
Using thread::sleep inside an async function blocks the entire executor thread, preventing other async tasks from making progress. Use tokio::time::sleep instead.
```diff
-        thread::sleep(Duration::from_millis(12000));
+        tokio::time::sleep(Duration::from_millis(12000)).await;
```

You can also remove the `thread` import from line 5 if no longer needed.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
loop {
    tokio::time::sleep(Duration::from_millis(12000)).await;
```
🤖 Prompt for AI Agents
In prover/bin/server/src/execute.rs around lines 46 to 47, the code uses
thread::sleep(Duration::from_millis(12000)) inside an async context which blocks
the Tokio executor; replace it with
tokio::time::sleep(Duration::from_millis(12000)).await so the wait is async and
non-blocking, and remove the unused thread import on line 5 if it’s no longer
needed.
prover/bin/server/src/execute.rs

```rust
#[allow(dead_code)]
fn save_trace(batch_index: u64, batch_traces: &Vec<BlockTrace>) {
    let path = PROVER_PROOF_DIR.to_string() + format!("/batch_{}", batch_index).as_str();
    fs::create_dir_all(path.clone()).unwrap();
    let file = File::create(format!("{}/block_traces.json", path.as_str())).unwrap();
    let writer = BufWriter::new(file);

    serde_json::to_writer_pretty(writer, &batch_traces).unwrap();
    log::info!("batch_traces of batch_index = {:#?} saved", batch_index);
```
Unhandled errors in save_trace can panic the server.
The unwrap() calls on file operations will panic if directory creation or file writing fails. For a server application, log the error and continue gracefully.
-#[allow(dead_code)]
-fn save_trace(batch_index: u64, batch_traces: &Vec<BlockTrace>) {
+fn save_trace(batch_index: u64, batch_traces: &[BlockTrace]) {
let path = PROVER_PROOF_DIR.to_string() + format!("/batch_{}", batch_index).as_str();
- fs::create_dir_all(path.clone()).unwrap();
- let file = File::create(format!("{}/block_traces.json", path.as_str())).unwrap();
+ if let Err(e) = fs::create_dir_all(&path) {
+ log::error!("Failed to create directory {}: {}", path, e);
+ return;
+ }
+ let file = match File::create(format!("{}/block_traces.json", path)) {
+ Ok(f) => f,
+ Err(e) => {
+ log::error!("Failed to create trace file for batch {}: {}", batch_index, e);
+ return;
+ }
+ };
let writer = BufWriter::new(file);
-
- serde_json::to_writer_pretty(writer, &batch_traces).unwrap();
+ if let Err(e) = serde_json::to_writer_pretty(writer, batch_traces) {
+ log::error!("Failed to serialize traces for batch {}: {}", batch_index, e);
+ return;
+ }
log::info!("batch_traces of batch_index = {:#?} saved", batch_index);
 }

Also changed `&Vec<BlockTrace>` to `&[BlockTrace]` per Rust conventions.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
fn save_trace(batch_index: u64, batch_traces: &[BlockTrace]) {
    let path = PROVER_PROOF_DIR.to_string() + format!("/batch_{}", batch_index).as_str();
    if let Err(e) = fs::create_dir_all(&path) {
        log::error!("Failed to create directory {}: {}", path, e);
        return;
    }
    let file = match File::create(format!("{}/block_traces.json", path)) {
        Ok(f) => f,
        Err(e) => {
            log::error!("Failed to create trace file for batch {}: {}", batch_index, e);
            return;
        }
    };
    let writer = BufWriter::new(file);
    if let Err(e) = serde_json::to_writer_pretty(writer, batch_traces) {
        log::error!("Failed to serialize traces for batch {}: {}", batch_index, e);
        return;
    }
    log::info!("batch_traces of batch_index = {:#?} saved", batch_index);
}
```
🤖 Prompt for AI Agents
In prover/bin/server/src/execute.rs around lines 133 to 141, replace the
panicking unwraps in save_trace with non-panicking error handling and change the
signature to accept a slice; specifically: change fn save_trace(batch_index:
u64, batch_traces: &Vec<BlockTrace>) to fn save_trace(batch_index: u64,
batch_traces: &[BlockTrace]) and avoid unwrap() by checking results from
fs::create_dir_all, File::create, and serde_json::to_writer_pretty; on any Err,
log a descriptive error via log::error including the error details and return
early (or return Result and propagate), ensuring the server does not panic when
IO/serialization fails.
prover/bin/shadow-prove/src/execute.rs

```rust
pub async fn exceut_batch(batch: &BatchInfo) -> Result<(), anyhow::Error> {
    // Request the proverServer to prove.
    let request = ExecuteRequest {
        batch_index: batch.batch_index,
        start_block: batch.start_block,
        end_block: batch.end_block,
        rpc: "http://localhost:8545".to_owned(),
    };
    let rt = tokio::task::spawn_blocking(move || {
        util::call_prover(serde_json::to_string(&request).unwrap(), "/prove_batch")
    })
    .await;
```
Hardcoded RPC URL and incorrect endpoint.
Two issues:
1. Line 24: The RPC URL `"http://localhost:8545"` is hardcoded. This should likely use a configuration constant or environment variable for flexibility across environments.
2. Line 27: The function calls `/prove_batch`, but according to `server.rs` there's a separate `/execute_batch` endpoint. If this is intended as an execute operation, it should call `/execute_batch`.
+use crate::SHADOW_PROVING_PROVER_RPC;
+
pub async fn exceut_batch(batch: &BatchInfo) -> Result<(), anyhow::Error> {
// Request the proverServer to prove.
let request = ExecuteRequest {
batch_index: batch.batch_index,
start_block: batch.start_block,
end_block: batch.end_block,
- rpc: "http://localhost:8545".to_owned(),
+ rpc: SHADOW_PROVING_PROVER_RPC.clone(),
};
let rt = tokio::task::spawn_blocking(move || {
- util::call_prover(serde_json::to_string(&request).unwrap(), "/prove_batch")
+ util::call_prover(serde_json::to_string(&request).unwrap(), "/execute_batch")
})
.await;

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In prover/bin/shadow-prove/src/execute.rs around lines 18 to 29, replace the
hardcoded RPC URL and the wrong endpoint: read the RPC URL from configuration or
an environment variable (e.g., std::env::var("RPC_URL") with a sensible default)
instead of using "http://localhost:8545", and change the call path from
"/prove_batch" to the correct "/execute_batch" so the request targets the
execute endpoint.
prover/bin/shadow-prove/src/main.rs (Outdated)
```diff
         // Execute batch
         match exceut_batch(&batch_info).await {
             Ok(_) => (),
             Err(e) => {
                 log::error!("exceut_batch error: {:?}", e);
                 continue
             }
         }

         // Sync & Prove
-        let result = match batch_syncer.sync_batch().await {
+        if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
+            log::warn!("Too many blocks in the latest batch to shadow prove");
+            continue;
+        }
+
+        if batch_info.total_txn > *SHADOW_PROVING_MAX_TXN {
+            log::warn!("Too many txn in the latest batch to shadow prove");
+            continue;
+        }
```
Validation should happen before calling exceut_batch.
The block/txn limit validation (lines 101-109) occurs after exceut_batch is called (lines 91-98). This means execution requests are sent to the prover server even for batches that will be skipped due to exceeding limits.
Move the validation before the execute call to avoid unnecessary work:
+ // Validate limits before execution
+ if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
+ log::warn!("Too many blocks in the latest batch to shadow prove");
+ continue;
+ }
+
+ if batch_info.total_txn > *SHADOW_PROVING_MAX_TXN {
+ log::warn!("Too many txn in the latest batch to shadow prove");
+ continue;
+ }
+
// Execute batch
match exceut_batch(&batch_info).await {
Ok(_) => (),
Err(e) => {
log::error!("exceut_batch error: {:?}", e);
continue
}
}
// Sync & Prove
- if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
- log::warn!("Too many blocks in the latest batch to shadow prove");
- continue;
- }
-
- if batch_info.total_txn > *SHADOW_PROVING_MAX_TXN {
- log::warn!("Too many txn in the latest batch to shadow prove");
- continue;
- }
     let result = match batch_syncer.sync_batch(batch_info, batch_header).await {

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// Validate limits before execution
if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
    log::warn!("Too many blocks in the latest batch to shadow prove");
    continue;
}

if batch_info.total_txn > *SHADOW_PROVING_MAX_TXN {
    log::warn!("Too many txn in the latest batch to shadow prove");
    continue;
}

// Execute batch
match exceut_batch(&batch_info).await {
    Ok(_) => (),
    Err(e) => {
        log::error!("exceut_batch error: {:?}", e);
        continue
    }
}

// Sync & Prove
```
🤖 Prompt for AI Agents
In prover/bin/shadow-prove/src/main.rs around lines 91 to 109, the block/txn
limit validation (lines ~101-109) runs after exceut_batch is invoked, causing
execution requests to be sent for batches that will immediately be skipped; move
the two validations (check that end_block - start_block + 1 <=
SHADOW_PROVING_MAX_BLOCK and total_txn <= SHADOW_PROVING_MAX_TXN) to before the
call to exceut_batch, keeping the same log.warn messages and continue behavior
so that the function returns/continues early without calling exceut_batch for
oversized batches.
prover/bin/shadow-prove/src/shadow_rollup.rs

```rust
let mut total_tx_count: u64 = 0;
for i in prev_bn + 1..current_bn + 1 {
    total_tx_count += self
        .l2_provider
        .get_block_transaction_count_by_number(i.into())
        .await
        .unwrap_or_default()
        .unwrap_or_default();
}
```
Silent error handling may lead to incorrect transaction counts.
Using .unwrap_or_default() means RPC failures are silently treated as 0 transactions. This could result in total_tx_count being undercounted, potentially allowing batches that exceed SHADOW_PROVING_MAX_TXN to pass validation.
Consider logging a warning or returning an error when the RPC call fails.
for i in prev_bn + 1..current_bn + 1 {
- total_tx_count += self
+ match self
.l2_provider
.get_block_transaction_count_by_number(i.into())
.await
- .unwrap_or_default()
- .unwrap_or_default();
+ {
+ Ok(Some(count)) => total_tx_count += count,
+ Ok(None) => log::warn!("No tx count for block {}", i),
+ Err(e) => log::warn!("Failed to get tx count for block {}: {:?}", i, e),
+ }
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let mut total_tx_count: u64 = 0;
for i in prev_bn + 1..current_bn + 1 {
    match self
        .l2_provider
        .get_block_transaction_count_by_number(i.into())
        .await
    {
        Ok(Some(count)) => total_tx_count += count,
        Ok(None) => log::warn!("No tx count for block {}", i),
        Err(e) => log::warn!("Failed to get tx count for block {}: {:?}", i, e),
    }
}
```
🤖 Prompt for AI Agents
In prover/bin/shadow-prove/src/shadow_rollup.rs around lines 240 to 248, the
loop silently treats RPC failures as zero by using unwrap_or_default(), risking
undercounted total_tx_count; change the RPC call to propagate or handle errors
explicitly (e.g., use ? to return an error from the function or match the Result
and log a warning with context including the block number and error) so failures
do not get treated as zero transactions and the caller can fail or take
corrective action.
prover/Cargo.toml

```diff
 [workspace.dependencies.revm]
 git = "https://github.com/morph-l2/revm"
-branch = "release/v42"
+branch = "emerald"
 default-features = false
 features = [
     "morph-default-handler",
     "morph-poseidon-codehash",
     "std",
-    "optional_no_base_fee"
+    "optional_no_base_fee",
 ]

 [workspace.dependencies.revm-primitives]
 git = "https://github.com/morph-l2/revm"
-branch = "release/v42"
+branch = "emerald"
 default-features = false
 features = [
     "morph-default-handler",
     "morph-poseidon-codehash",
     "std",
-    "optional_no_base_fee"
+    "optional_no_base_fee",
 ]
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if emerald branch exists in morph-l2/revm
echo "=== Checking emerald branch existence ==="
git ls-remote --heads https://github.com/morph-l2/revm emerald
# Verify the feature exists in Cargo.toml of the revm repository on emerald branch
echo -e "\n=== Checking features in revm crates/revm/Cargo.toml ==="
curl -s https://raw.githubusercontent.com/morph-l2/revm/emerald/crates/revm/Cargo.toml | grep -A 50 '\[features\]' | head -60
echo -e "\n=== Checking features in revm crates/primitives/Cargo.toml ==="
curl -s https://raw.githubusercontent.com/morph-l2/revm/emerald/crates/primitives/Cargo.toml | grep -A 50 '\[features\]' | head -60

Repository: morph-l2/morph
Length of output: 3131
Fix incorrect feature flag: optional_no_base_fee does not exist as a standalone feature.
The emerald branch exists and is maintained in morph-l2/revm, which resolves the fork stability concern. However, the feature flag optional_no_base_fee is not available as a direct feature in either revm or revm-primitives on the emerald branch. The flag only exists within the dev feature group in primitives and is not exposed as a standalone feature. This will cause a build failure.
Remove optional_no_base_fee from the feature list or verify that the feature has been added to the emerald branch's Cargo.toml files.
🤖 Prompt for AI Agents
In prover/Cargo.toml around lines 106 to 126, the feature flag
optional_no_base_fee is not a standalone feature on the emerald branch and will
break the build; remove optional_no_base_fee from the features arrays for both
[workspace.dependencies.revm] and [workspace.dependencies.revm-primitives], or
if you actually need that behavior enable the correct feature group (e.g., the
dev feature group) that exposes it on the emerald branch—ensure the chosen
feature name matches what exists in the repo before committing.
prover/crates/primitives/src/lib.rs

```rust
/// Get `fee_token_id`.
fn fee_token_id(&self) -> u16;

/// Get `fee_limit`.
fn fee_limit(&self) -> U256;
```
AltFee TxTrace wiring looks good; drop the stray println!
The new fee_token_id / fee_limit accessors on TxTrace, the 0x7f arm constructing TxAltFee, and the forwarding impl for &T are all consistent with the new TypedTransaction::AltFee support.
One thing to clean up is the debug print:
println!("tx.self.fee_token_id: {:?}", self.fee_token_id());This will spam stdout from a primitives crate in normal operation. Please remove it or switch to whatever structured logging macro you use elsewhere (and gate it appropriately).
Also applies to: 279-295, 427-433
🤖 Prompt for AI Agents
In prover/crates/primitives/src/lib.rs around lines 197-202 (and also
remove/update identical occurrences at 279-295 and 427-433), there's a stray
println!("tx.self.fee_token_id: {:?}", self.fee_token_id()) that will spam
stdout; delete this println or replace it with the project's structured logging
macro (e.g. debug! or trace!) and ensure it's gated behind the appropriate log
level so it only emits when debugging is enabled.
```rust
/// Calculates a heuristic for the in-memory size of the [TxEip1559] transaction.
#[inline]
pub fn size(&self) -> usize {
    mem::size_of::<ChainId>() + // chain_id
    mem::size_of::<u64>() + // nonce
    mem::size_of::<u64>() + // gas_limit
    mem::size_of::<u128>() + // max_fee_per_gas
    mem::size_of::<u128>() + // max_priority_fee_per_gas
    self.to.size() + // to
    mem::size_of::<U256>() + // value
    self.access_list.size() + // access_list
    self.input.len() + // input
    mem::size_of::<u16>() + // fee_token_id
    mem::size_of::<u64>() // fee_limit
}
```
Incorrect field sizes in size() calculation.
The size() method uses incorrect sizes for several fields:
- `gas_limit` is `u128` but sized as `u64` (line 269)
- `fee_limit` is `U256` but sized as `u64` (line 277)
pub fn size(&self) -> usize {
mem::size_of::<ChainId>() + // chain_id
mem::size_of::<u64>() + // nonce
- mem::size_of::<u64>() + // gas_limit
+ mem::size_of::<u128>() + // gas_limit
mem::size_of::<u128>() + // max_fee_per_gas
mem::size_of::<u128>() + // max_priority_fee_per_gas
self.to.size() + // to
mem::size_of::<U256>() + // value
self.access_list.size() + // access_list
self.input.len() + // input
mem::size_of::<u16>()+ // fee_token_id
- mem::size_of::<u64>() // fee_limit
+ mem::size_of::<U256>() // fee_limit
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
/// Calculates a heuristic for the in-memory size of the [TxEip1559] transaction.
#[inline]
pub fn size(&self) -> usize {
    mem::size_of::<ChainId>() + // chain_id
    mem::size_of::<u64>() + // nonce
    mem::size_of::<u128>() + // gas_limit
    mem::size_of::<u128>() + // max_fee_per_gas
    mem::size_of::<u128>() + // max_priority_fee_per_gas
    self.to.size() + // to
    mem::size_of::<U256>() + // value
    self.access_list.size() + // access_list
    self.input.len() + // input
    mem::size_of::<u16>() + // fee_token_id
    mem::size_of::<U256>() // fee_limit
}
```
prover/testdata/verify.sh

```bash
set -e
```
🧩 Analysis chain
🏁 Script executed:
cat -n prover/testdata/verify.sh | head -150

Repository: morph-l2/morph
Length of output: 5002
🏁 Script executed:
cat > /tmp/test_set_e.sh << 'EOF'
#!/bin/bash
set -e
echo "Test 1: Command substitution with failing command"
VAR=$(false)
echo "After failed command substitution, VAR='$VAR', \$?=$?"
echo "Script did not abort!"
echo ""
echo "Test 2: Direct failed command"
false
echo "This should not print"
EOF
bash /tmp/test_set_e.sh
echo "Exit code: $?"Repository: morph-l2/morph
Length of output: 120
🏁 Script executed:
cat > /tmp/test_set_e_detailed.sh << 'EOF'
#!/bin/bash
set -e
echo "Starting test with set -e"
RESULT=$(false)
echo "After failed command substitution in assignment"
echo "RESULT='$RESULT'"
echo "Still running!"
EOF
bash /tmp/test_set_e_detailed.sh 2>&1
echo "Script exit code: $?"Repository: morph-l2/morph
Length of output: 103
🏁 Script executed:
# Check if there are any related issues, test results, or comments
rg -i "verify\.sh|cast rpc|morph_getBlockTrace" --max-count=20 -B2 -A2

Repository: morph-l2/morph
Length of output: 1486
🌐 Web query:
cast rpc foundry exit code behavior error handling
💡 Result:
Short answer
- cast rpc: it performs a raw JSON-RPC call and prints the provider response; on transport/JSON errors it will print the error and the process will exit non‑zero. [1][2]
- Caveat: some cast subcommands historically did not surface missing/invalid RPC configuration as a non‑zero failure (e.g. cast logs returned empty output instead of an error) — this has been filed/handled as a Foundry bug in the repo. So behavior can vary by subcommand/version. [3]
What to do if you need reliable exit codes
- Use a recent Foundry release (or check changelog) — many cast fixes land in point releases. [4]
- Inspect stdout/stderr and the process exit status in your wrapper script; treat empty output as a potential RPC/configuration failure if you rely on it.
- For scripting, prefer explicit RPC flags (--rpc-url/ETH_RPC_URL) and test the endpoint first (e.g., cast rpc web3_clientVersion). [2]
Sources
- cast rpc reference (per‑command docs showing rpc usage/options). [2]
- Foundry cast reference / getfoundry docs (shows rpc options and typical behavior). [4]
- GitHub issue: "cast logs doesn't fail when the RPC URL is not set" (example where a subcommand silently returned empty output). [3]
🏁 Script executed:
cat > /tmp/test_if_set_e.sh << 'EOF'
#!/bin/bash
set -e
echo "Test: if statement with failing command in condition"
if RESULT=$(false); then
echo "Condition succeeded"
else
echo "Condition failed"
fi
echo "Script continued after if statement!"
EOF
bash /tmp/test_if_set_e.sh 2>&1
echo "Exit code: $?"Repository: morph-l2/morph
Length of output: 177
🏁 Script executed:
cat > /tmp/simulate_verify.sh << 'EOF'
#!/bin/bash
set -e
echo "Simulating current verify.sh pattern"
echo ""
SUCCESS_COUNT=0
FAIL_COUNT=0
for ((block=1; block<=3; block++)); do
echo -n "Block $block: "
# Simulate cast rpc - fails on block 2
if [ $block -eq 2 ]; then
TRACE=$(false 2>/dev/null) # Simulating cast rpc failure
else
TRACE="success_data"
fi
if [ $? -eq 0 ] && [ -n "$TRACE" ]; then
echo "✓ Success"
SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
else
echo "✗ Failed"
FAIL_COUNT=$((FAIL_COUNT + 1))
fi
done
echo ""
echo "Results: SUCCESS=$SUCCESS_COUNT, FAIL=$FAIL_COUNT"
EOF
bash /tmp/simulate_verify.sh 2>&1
echo "Exit code: $?"Repository: morph-l2/morph
Length of output: 136
set -e causes script to abort on first RPC failure instead of continuing with error tracking
When `cast rpc` fails and returns a non-zero exit code, the assignment `TRACE=$(cast rpc ...)` on line 63 triggers the `set -e` exit immediately, terminating the loop before the `if [ $? -eq 0 ]` check on line 65 is reached. This prevents the `FAIL_COUNT` logic from executing and stops processing of remaining blocks.

Wrap the assignment in an `if` statement to suppress `set -e` for that command and allow the loop to continue:
```bash
if TRACE=$(cast rpc morph_getBlockTraceByNumberOrHash "$BLOCK_HEX" --rpc-url "$RPC_URL" 2>/dev/null); then
    if [ -n "$TRACE" ] && [ "$TRACE" != "null" ]; then
        if [ "$FIRST" = false ]; then
            echo "," >> "$TEMP_FILE"
        fi
        FIRST=false
        echo -n " $TRACE" >> "$TEMP_FILE"
        echo "✓ Success"
        SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
    else
        echo "✗ Failed (empty or null response)"
        FAIL_COUNT=$((FAIL_COUNT + 1))
    fi
else
    echo "✗ Failed (RPC error)"
    FAIL_COUNT=$((FAIL_COUNT + 1))
fi
```

Also applies to: 56-81
🤖 Prompt for AI Agents
In prover/testdata/verify.sh around lines 7 and 56-81, the global set -e causes
the script to exit immediately when assigning TRACE=$(cast rpc ...), preventing
the subsequent if/FAIL_COUNT logic from running; change the code to wrap the
cast rpc assignment in an if statement (if TRACE=$(cast rpc ...); then ... else
... fi) so the command's failure is handled locally, increment FAIL_COUNT on RPC
errors, and preserve existing handling for empty/null responses so the loop
continues for remaining blocks.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
prover/crates/primitives/src/types/tx.rs (1)
616-632: Fix RLP decode semantics for AltFee and remove unsafe `unwrap()` on `TxEnvelope::decode_2718`

In `Decodable for TypedTransaction`, the AltFee branch does not advance the original `buf` (it decodes from a temporary slice `&buf[1..]`), and the envelope branch uses `unwrap()`, which can panic on invalid input:

- The AltFee path violates the `Decodable` contract, where `buf` must be consumed. Callers expecting sequential decoding will see `buf` still pointing at the transaction start.
- The `unwrap()` on `TxEnvelope::decode_2718` can panic on malformed RLP instead of returning an error, creating a security risk for untrusted input.

Fix by creating a separate `body` slice for AltFee decoding, then advancing `buf` with the remaining bytes, and by propagating the envelope error via `?`:

```diff
 impl Decodable for TypedTransaction {
     fn decode(buf: &mut &[u8]) -> alloy::rlp::Result<Self> {
         if buf.is_empty() {
             return Err(alloy::rlp::Error::InputTooShort);
         }
-        let tx_type = *buf.first().unwrap_or(&0u8);
-        match tx_type {
-            0x7f => {
-                return Ok(TypedTransaction::AltFee(
-                    TxAltFee::decode_signed_fields(&mut &buf[1..])
-                        .map_err(|_| alloy::rlp::Error::Custom("decode TxAltFee error"))?,
-                ))
-            }
-            _ => return Ok(TypedTransaction::Enveloped(TxEnvelope::decode_2718(buf).unwrap())),
-        };
+        let tx_type = buf[0];
+        match tx_type {
+            0x7f => {
+                let mut body = &buf[1..];
+                let signed = TxAltFee::decode_signed_fields(&mut body)
+                    .map_err(|_| alloy::rlp::Error::Custom("decode TxAltFee error"))?;
+                *buf = body;
+                Ok(TypedTransaction::AltFee(signed))
+            }
+            _ => {
+                let envelope = TxEnvelope::decode_2718(buf)?;
+                Ok(TypedTransaction::Enveloped(envelope))
+            }
+        }
     }
 }
```
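As a side note, the buffer-advancing contract this fix restores can be demonstrated with a tiny self-contained sketch; the length-prefixed payload format here is invented purely for illustration and has nothing to do with real RLP:

```rust
// A decoder that honors the "consume the buffer" contract: read from a
// sub-slice, then rebind the caller's &mut &[u8] to the unread remainder.
fn decode_prefixed(buf: &mut &[u8]) -> Result<(u8, Vec<u8>), &'static str> {
    if buf.is_empty() {
        return Err("input too short");
    }
    let tag = buf[0];
    let mut body = &buf[1..];
    // Pretend the payload is one length-prefixed chunk.
    let len = *body.first().ok_or("missing length")? as usize;
    body = &body[1..];
    if body.len() < len {
        return Err("truncated payload");
    }
    let payload = body[..len].to_vec();
    // Advance the caller's view past everything we consumed.
    *buf = &body[len..];
    Ok((tag, payload))
}

fn main() {
    let data = [0x7f, 3, 1, 2, 3, 0xaa];
    let mut view: &[u8] = &data;
    let (tag, payload) = decode_prefixed(&mut view).unwrap();
    assert_eq!(tag, 0x7f);
    assert_eq!(payload, vec![1, 2, 3]);
    assert_eq!(view, &[0xaa]); // remainder left for the next decoder
}
```

The key move is the final `*buf = ...` rebind, which is exactly what the original AltFee branch omitted.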
♻️ Duplicate comments (4)
prover/bin/server/src/server.rs (1)
51-63: `start_executor()` will never run while `start_prover()` blocks; run them concurrently

Because `start_prover().await` waits on `prove_for_queue()` (which is effectively a long-running loop), execution never reaches `start_executor().await`. This is the same issue noted in the previous review and still needs fixing; both long-lived tasks should run concurrently and keep `start()` alive.

You can make them run together via `tokio::join!`:

```diff
-    // Step3. start prover
-    start_prover().await;
-
-    // Step4. start executor
-    start_executor().await;
+    // Step3/4. start prover and executor concurrently
+    tokio::join!(start_prover(), start_executor());
```

This preserves the "run forever" behavior while ensuring both queues are serviced.
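For anyone unfamiliar with the pattern, here is a minimal runnable illustration of `tokio::join!` driving two long-lived loops, assuming tokio with the `macros`, `rt-multi-thread`, and `time` features; the loop bodies are stand-ins for `prove_for_queue()` / `execute_for_queue()`, which loop forever in the real service:

```rust
use std::time::Duration;

async fn run_loop(name: &str) {
    // Stand-in for a queue-polling loop; bounded here so the demo exits.
    for i in 0..3 {
        println!("{name}: tick {i}");
        tokio::time::sleep(Duration::from_millis(50)).await;
    }
}

#[tokio::main]
async fn main() {
    // join! drives both futures concurrently on the same task, so neither
    // starves the other, and main() only returns when both complete.
    tokio::join!(run_loop("prover"), run_loop("executor"));
}
```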
prover/bin/shadow-prove/src/shadow_rollup.rs (2)
195-197: Fix inverted block range validation logic.

The condition `blocks.0 >= blocks.1` incorrectly rejects valid single-block batches. When `start_block == end_block`, the range contains exactly one block (e.g., blocks 100-100 represents block 100), which is valid. The validation should only error when `start_block > end_block`.

Apply this diff to fix the validation:

```diff
-    if blocks.0 >= blocks.1 {
+    if blocks.0 > blocks.1 {
         return Err(String::from("blocks is empty"));
     }
```
240-248: Avoid silent error handling for transaction count retrieval.

Using chained `.unwrap_or_default()` calls means RPC failures are silently treated as 0 transactions. This could result in `total_tx_count` being undercounted, potentially allowing batches that exceed `SHADOW_PROVING_MAX_TXN` to pass validation.

Apply this diff to handle errors explicitly:

```diff
 let mut total_tx_count: u64 = 0;
 for i in prev_bn + 1..current_bn + 1 {
-    total_tx_count += self
+    match self
         .l2_provider
         .get_block_transaction_count_by_number(i.into())
         .await
-        .unwrap_or_default()
-        .unwrap_or_default();
+    {
+        Ok(Some(count)) => total_tx_count += count,
+        Ok(None) => {
+            log::warn!("No tx count for block {}", i);
+        }
+        Err(e) => {
+            log::error!("Failed to get tx count for block {}: {:?}", i, e);
+            return None;
+        }
+    }
 }
```

prover/bin/shadow-prove/src/main.rs (1)
91-109: Move validation before batch execution to avoid unnecessary work.

The block/txn limit validation (lines 101-109) occurs after `execute_batch` is called (lines 91-98). This means execution requests are sent to the prover server even for batches that will be skipped due to exceeding limits.

Apply this diff to move the validation before execution:

```diff
+        // Validate limits before execution
+        if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
+            log::warn!("Too many blocks in the latest batch to shadow prove");
+            continue;
+        }
+
+        if batch_info.total_txn > *SHADOW_PROVING_MAX_TXN {
+            log::warn!("Too many txn in the latest batch to shadow prove");
+            continue;
+        }
+
         // Execute batch
         match execute_batch(&batch_info).await {
             Ok(_) => (),
             Err(e) => {
                 log::error!("execute_batch error: {:?}", e);
                 continue
             }
         }

         // Sync & Prove
-        if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
-            log::warn!("Too many blocks in the latest batch to shadow prove");
-            continue;
-        }
-
-        if batch_info.total_txn > *SHADOW_PROVING_MAX_TXN {
-            log::warn!("Too many txn in the latest batch to shadow prove");
-            continue;
-        }

         let result = match batch_syncer.sync_batch(batch_info, batch_header).await {
```
🧹 Nitpick comments (7)
prover/bin/server/src/server.rs (2)
39-44: Consider handling `Executor::new` failures instead of panicking with `unwrap()`

`EXECUTE_QUEUE` and `start_executor()` mirror the existing prover setup, which is good for consistency. However, `Executor::new(Arc::clone(&EXECUTE_QUEUE)).unwrap()` will bring down the whole process on configuration errors (e.g., bad `PROVER_L2_RPC`), and now this risk exists in two places (prover and executor).

If you want a more robust startup, consider logging and failing the task gracefully instead of panicking:

```diff
-async fn start_executor() {
-    let mut executor = Executor::new(Arc::clone(&EXECUTE_QUEUE)).unwrap();
-    executor.execute_for_queue().await;
-}
+async fn start_executor() {
+    let mut executor = match Executor::new(Arc::clone(&EXECUTE_QUEUE)) {
+        Ok(executor) => executor,
+        Err(e) => {
+            log::error!("failed to initialize executor: {e:#}");
+            return;
+        }
+    };
+    executor.execute_for_queue().await;
+}
```

You could apply a similar pattern to `start_prover()` if desired for symmetry.

Also applies to: 96-104
65-79: `/execute_batch` wiring and validation mirror prove handler; consider deduping shared logic

The new `/execute_batch` route and `add_execute_req` handler look correct and closely parallel `add_pending_req`:

- Empty body check.
- JSON deserialization into `ExecuteRequest`.
- `end_block >= start_block` and block-count ≤ `MAX_PROVE_BLOCKS`.
- Basic RPC URL scheme validation.
- Timed `EXECUTE_QUEUE` lock with small queue-size cap.

This symmetry is good for predictability. Two optional refinements you might consider:

- Extract shared validation (block range + RPC URL + queue-length checks) into a helper used by both `add_pending_req` and `add_execute_req` to reduce duplication.
- Double‑check that the client expectations around timeout errors match `"queue is busy"` here vs. `task_status::PROVING` in the prove path; if they're consumed by the same system, aligning these responses might simplify handling.

Functionally this looks solid as-is.
Also applies to: 180-226
prover/bin/shadow-prove/src/shadow_rollup.rs (1)
143-220: Consider edge cases in get_committed_batch logic.

The method retrieves the second-to-last commit batch log (line 177: `logs.len() - 2`), which assumes there are always at least 3 logs. While there's a check at line 171-174, consider these edge cases:

- What happens if logs are out of order before sorting?
- Should the method handle the case where the `batch_index` retrieved is already proven or in progress?
- The 600-block lookback window (line 153) might miss batches if commit frequency is low.

Consider adding:

- Additional validation after sorting to ensure logs are sequential
- A check against `is_prove_success` before returning the batch
- Configuration for the lookback window size
prover/crates/core/src/executor/mod.rs (2)
341-376: Clarify the fee calculation logic and document magic numbers.

The test uses several magic numbers (31000 gas limit, 84000 expected gas fee, 250000000 price ratio) without clear explanation:

- The comment states "1 ETH = 4000 USDT" but the relationship to `price_ratio_value = 250000000` is unclear.
- The expected gas consumption of 84,000 tokens seems high for a simple transfer (base cost 21,000 + overhead).
- How is the gas fee calculated: `gas_used * gas_price * price_ratio / scale`?

Consider adding inline comments explaining the calculation:

```diff
 fn alt_fee_normal() {
     let account_from = address!("f39Fd6e51aad88F6F4ce6aB8827279cffFb92266");
     let account_to = address!("70997970C51812dc3A010C7d01b50e0d17dc79C8");
-    // use erc20 gas token for txn.
+    // Use ERC20 gas token for transaction with token_id = 1
+    // Price ratio: 250000000 represents 1 ETH = 4000 USDT (scale_value = 1)
+    // Expected gas: 21000 (base) + overhead = ~31000 gas
+    // Token fee: gas_used * gas_price * price_ratio / (10^decimals * scale) = 84000
     let tx = TxEnv {
```
399-399: Remove commented-out code.

The commented line `// cache_db.insert_contract(token_account);` should be removed if it's no longer needed, or uncommented with an explanation if it's required.

```diff
-    // cache_db.insert_contract(token_account);
     cache_db.insert_account_info(token_account, token_account_info.clone());
```

prover/crates/primitives/src/types/tx.rs (2)
118-123: Clarify default semantics for AltFee fee fields and simplify Option handling

The new `fee_token_id` / `fee_limit` fields and TxTrace accessors look reasonable, but a couple of nits:

- Both `TransactionTrace` and `ArchivedTransactionTrace` default `fee_token_id` / `fee_limit` to `0` / `U256::ZERO` when the option is `None`. If `0`/`0` is a sentinel meaning "no AltFee / native token", it's worth making that explicit in a comment to avoid confusion with a real token ID of `0`.
- In `TransactionTrace::fee_limit` (Line 202), `unwrap_or_default().to()` is redundant given the return type is already `U256`. You can simplify and align with the archived variant:

```diff
-    fn fee_token_id(&self) -> u16 {
-        self.fee_token_id.unwrap_or_default()
-    }
-
-    fn fee_limit(&self) -> U256 {
-        self.fee_limit.unwrap_or_default().to()
-    }
+    fn fee_token_id(&self) -> u16 {
+        self.fee_token_id.unwrap_or_default()
+    }
+
+    fn fee_limit(&self) -> U256 {
+        self.fee_limit.unwrap_or_default()
+    }
```

Similarly, `ArchivedTransactionTrace::fee_token_id` / `fee_limit` can use `unwrap_or_default()` for symmetry if you prefer.

Also applies to: 198-204, 279-285
516-525: Align AltFee encoding with existing 2718 encoding patterns and reduce manual plumbing

AltFee support in the helper methods looks good overall (`tx_hash`, signer recovery, `effective_gas_price`, `signature_hash`, `data`). One area that could be simplified is the RLP/2718 encoding path:

```rust
match self {
    TypedTransaction::Enveloped(tx) => tx.encode_2718(&mut bytes),
    TypedTransaction::L1Msg(tx) => tx.encode_2718(&mut bytes),
    TypedTransaction::AltFee(tx) => tx.tx().encode_2718(tx.signature(), &mut bytes),
}
```

If `Signed<TxAltFee>` implements `Encodable2718` (it usually can via `SignableTransaction`), you can keep the API symmetric with the other variants and avoid manually threading the signature:

```diff
-    TypedTransaction::AltFee(tx) => tx.tx().encode_2718(tx.signature(), &mut bytes),
+    TypedTransaction::AltFee(tx) => tx.encode_2718(&mut bytes),
```

If that impl does not exist yet, adding it in the AltFee module would keep all 2718 encoding logic concentrated there and make `TypedTransaction::rlp` a simple dispatcher.

Please verify against the `alloy` 0.3.0 `Signed<T>` / `Encodable2718` / `SignableTransaction` APIs that this refactor is valid and that `encode_2718` for AltFee indeed emits the `0x7f` type byte followed by the correct RLP payload.

Also applies to: 533-535, 548-551, 563-564, 573-574, 588-589
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`prover/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (5)
- `prover/bin/server/src/server.rs` (5 hunks)
- `prover/bin/shadow-prove/src/main.rs` (4 hunks)
- `prover/bin/shadow-prove/src/shadow_rollup.rs` (5 hunks)
- `prover/crates/core/src/executor/mod.rs` (2 hunks)
- `prover/crates/primitives/src/types/tx.rs` (12 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
prover/bin/shadow-prove/src/shadow_rollup.rs (2)
oracle/oracle/batch.go (1)
`BatchInfo` (32-39)
bindings/hardhat/types.go (1)
Log(49-58)
prover/bin/server/src/server.rs (3)
prover/bin/shadow-prove/src/shadow_rollup.rs (1)
`new` (34-45)
prover/bin/server/src/queue.rs (1)
`new` (38-44)
prover/bin/server/src/execute.rs (1)
new(35-41)
prover/bin/shadow-prove/src/main.rs (1)
prover/bin/shadow-prove/src/execute.rs (1)
exceut_batch(18-36)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test
- GitHub Check: build
- GitHub Check: Analyze (rust)
- GitHub Check: Analyze (go)
🔇 Additional comments (10)
prover/bin/server/src/server.rs (1)
1-5: New `execute` imports are consistent with downstream usage

Importing `ExecuteRequest` and `Executor` from `crate::execute` aligns with their use in `EXECUTE_QUEUE`, `start_executor`, and `add_execute_req`. No issues here.

prover/bin/shadow-prove/src/shadow_rollup.rs (2)
222-260: Good implementation of batch_blocks_inspect.

The method correctly calculates the block range and transaction count for a batch. The logic properly:
- Retrieves block numbers from both the previous and current batch
- Iterates through the inclusive range (prev_bn + 1 to current_bn)
- Updates metrics
- Returns structured data
Note: The transaction counting issue flagged separately should be addressed.
262-276: Good defensive programming in is_prove_success.

The method correctly handles the RPC call error by returning `None` instead of panicking, allowing the caller to decide how to handle missing data. The logging provides useful context for debugging.

prover/bin/shadow-prove/src/main.rs (1)
33-35: Configuration constants are properly initialized with appropriate defaults.

The constants `SHADOW_PROVING_MAX_BLOCK` (default 600), `SHADOW_PROVING_MAX_TXN` (default 200), and `SHADOW_PROVING_PROVER_RPC` are defined in `lib.rs` and initialized from environment variables using `Lazy<T>`. The `PROVER_RPC` intentionally requires explicit configuration and will panic if not set, which is appropriate for a critical runtime parameter. The logging at lines 33-35 correctly dereferences these values.

prover/crates/core/src/executor/mod.rs (4)
458-469: The review comment claims an inconsistency between the struct layout comment and storage packing, but the code is correct. The struct layout comment at line 439 correctly describes the field order: "isActive (bool, 1 byte) + decimals (uint8, 1 byte)". The code implementation at lines 462-463 correctly places `isActive` at byte 31 and `decimals` at byte 30, which is the expected behavior for Solidity packed storage, where the first struct field is placed at the rightmost (least significant) byte position. The implementation comment at lines 461-462 further clarifies this relationship. No changes are needed.

Likely an incorrect or invalid review comment.
129-130: No action needed: fee_token_id and fee_limit fields are properly supported in the custom revm fork.

The `TxEnv` struct in the morph-l2/revm (release/v42) fork already includes `fee_token_id` and `fee_limit` fields, as confirmed by their use in both the main executor code and the test suite. The fields are successfully assigned at lines 129-130 and also appear in the test infrastructure at lines 355-356.
523-528: The method ID is correct.

The method ID `[0x70, 0xa0, 0x82, 0x31]` correctly corresponds to `balanceOf(address)` in the ERC20 standard. This matches the function selector derived from `keccak256("balanceOf(address)")` = `0x70a08231`.
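This is easy to re-derive locally; a small sketch using the `sha3` crate (the crate choice is an assumption, any Keccak-256 implementation works):

```rust
use sha3::{Digest, Keccak256};

fn main() {
    // The 4-byte method id is the first 4 bytes of keccak256 over the
    // canonical signature string.
    let hash = Keccak256::digest(b"balanceOf(address)");
    assert_eq!(hash[..4], [0x70, 0xa0, 0x82, 0x31]);
    println!("selector: 0x{:02x}{:02x}{:02x}{:02x}", hash[0], hash[1], hash[2], hash[3]);
}
```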
330-339: Verify storage slot constants match the L2TokenRegistry contract layout.

The constants TOKEN_REGISTRY_SLOT (151) and PRICE_RATIO_SLOT (153) are used in keccak256-based mapping key calculation and must remain synchronized with the actual L2TokenRegistry contract storage layout. If the contract is ever upgraded, these values will need updating; otherwise storage key calculations will silently produce incorrect results. The L2_TOKEN_REGISTRY_ADDRESS (0x5300000000000000000000000000000000000021) is correctly configured as a predeploy system address used throughout the codebase.
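For context, here is a sketch of the keccak256-based mapping-slot derivation these constants feed into; the key value and output formatting are illustrative, and the real layout must still be verified against the deployed contract:

```rust
use sha3::{Digest, Keccak256};

/// slot(mapping[key]) = keccak256(abi.encode(key, base_slot)), with both
/// operands left-padded to 32 bytes, per the Solidity storage layout rules.
fn mapping_slot(key: [u8; 32], base_slot: u64) -> [u8; 32] {
    let mut slot = [0u8; 32];
    slot[24..].copy_from_slice(&base_slot.to_be_bytes());

    let mut hasher = Keccak256::new();
    hasher.update(key);
    hasher.update(slot);

    let mut out = [0u8; 32];
    out.copy_from_slice(&hasher.finalize());
    out
}

fn main() {
    // e.g., a u16 token id left-padded to 32 bytes, against base slot 151.
    let mut key = [0u8; 32];
    key[30..].copy_from_slice(&1u16.to_be_bytes());
    let storage_key = mapping_slot(key, 151);
    for b in storage_key {
        print!("{b:02x}");
    }
    println!();
}
```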
prover/crates/primitives/src/types/tx.rs (2)
293-305: AltFee wiring in `Transaction for TypedTransaction` looks consistent

The new `AltFee` match arms for `chain_id`, `nonce`, `gas_limit`, `gas_price`, `max_fee_per_gas`, `max_priority_fee_per_gas`, `max_fee_per_blob_gas`, `priority_fee_or_price`, `to`, `value`, `input`, `ty`, `access_list`, and `blob_versioned_hashes` all consistently delegate to `tx.tx()`, matching the behavior of `Enveloped` and `L1Msg`.

Assuming `TxAltFee`'s `Transaction` impl exposes the same semantics as the other transaction types, this integration looks correct and complete.

Please double‑check against the `alloy::consensus::Transaction` and AltFee specs that:

- `TxAltFee::gas_price()` / `max_fee_per_gas` / `max_priority_fee_per_gas` behave as expected for AltFee,
- and that `TxAltFee::ty()` indeed returns `0x7f`, so downstream consumers see the correct type.

Also applies to: 313-337, 345-401
597-613: AltFee convenience accessors on `TypedTransaction` look good

The new `fee_token_id` and `fee_limit` helpers correctly scope these fields to the `AltFee` variant and return `None` for other transaction types. This keeps the public API clean and avoids overloading the meaning of `0`/`0` for non‑AltFee transactions.
prover/bin/shadow-prove/src/main.rs
```rust
use log::Record;
use prometheus::{Encoder, TextEncoder};
use shadow_proving::{
    execute::exceut_batch,
```
Critical: Fix typo in function name.
The function is named `exceut_batch` (a misspelling) instead of `execute_batch`. This typo appears in the import statement and is used throughout the code (line 92). The function is also misspelled in prover/bin/shadow-prove/src/execute.rs (line 17).
Apply this diff to fix the typo:
```diff
-    execute::exceut_batch,
+    execute::execute_batch,
```

Additionally, ensure the function definition in prover/bin/shadow-prove/src/execute.rs is also corrected:

```diff
-pub async fn exceut_batch(batch: &BatchInfo) -> Result<(), anyhow::Error> {
+pub async fn execute_batch(batch: &BatchInfo) -> Result<(), anyhow::Error> {
```

And update the call site at line 92:

```diff
-    match exceut_batch(&batch_info).await {
+    match execute_batch(&batch_info).await {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
    execute::execute_batch,
```
🤖 Prompt for AI Agents
In prover/bin/shadow-prove/src/main.rs around line 16 the imported function is
misspelled as `exceut_batch`; update the import to `execute::execute_batch`.
Also fix the function name in prover/bin/shadow-prove/src/execute.rs (around
line 17) from `exceut_batch` to `execute_batch`, and update the call site in
prover/bin/shadow-prove/src/main.rs at line 92 to call `execute_batch(...)`
instead of `exceut_batch(...)`; ensure all references, docs, and tests in this
crate use the corrected `execute_batch` identifier to keep names consistent.
Actionable comments posted: 0
🧹 Nitpick comments (2)
contracts/src/deploy-config/qanetl1.ts (1)
17-17: programVkey updated to new verifier key – looks consistent

The new `programVkey` is well‑formed (0x + 64 hex chars) and matches the value used in other deploy‑configs in this PR, so this config change looks correct and consistent with the rest of the system.

Re the Gitleaks "generic-api-key" warning here: this appears to be a public verifier/program key, not a credential. You may want to add this pattern/path to your Gitleaks allowlist to avoid future false positives.

contracts/src/deploy-config/sepolia.ts (1)
contracts/src/deploy-config/sepolia.ts (1)
21-21: Sepolia programVkey rotated to new verifier key – LGTM

The new `programVkey` matches the other deploy‑configs and is correctly formatted, so this rotation looks safe and consistent.

The Gitleaks "generic-api-key" finding here is almost certainly a false positive for this public verifier key; consider allowlisting this field or file to cut down on noisy alerts.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- `contracts/src/deploy-config/holesky.ts` (1 hunks)
- `contracts/src/deploy-config/hoodi.ts` (1 hunks)
- `contracts/src/deploy-config/l1.ts` (1 hunks)
- `contracts/src/deploy-config/qanetl1.ts` (1 hunks)
- `contracts/src/deploy-config/sepolia.ts` (1 hunks)
- `contracts/src/deploy-config/testnetl1.ts` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- contracts/src/deploy-config/hoodi.ts
- contracts/src/deploy-config/holesky.ts
- contracts/src/deploy-config/l1.ts
🧰 Additional context used
🪛 Gitleaks (8.30.0)
contracts/src/deploy-config/qanetl1.ts
[high] 17-17: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
contracts/src/deploy-config/sepolia.ts
[high] 21-21: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: build
- GitHub Check: test
- GitHub Check: Analyze (rust)
- GitHub Check: Analyze (go)
🔇 Additional comments (1)
contracts/src/deploy-config/testnetl1.ts (1)
16-16: Aligns testnet programVkey with new canonical value

The updated `programVkey` matches the new value used in other environments and keeps the config shape unchanged, so this looks good and should keep verifier behavior consistent across networks.
Actionable comments posted: 0
♻️ Duplicate comments (2)
prover/bin/shadow-prove/src/main.rs (1)
81-110: Reconsider running `execute_batch` before limit checks

Right now the flow is:

- `get_committed_batch`
- Always call `execute_batch(&batch_info)`
- Then check `SHADOW_PROVING_MAX_BLOCK` / `SHADOW_PROVING_MAX_TXN`
- Only if within limits, call `sync_batch` + `prove`

This means you may execute batches (including potentially heavy trace fetching and EVM execution on the prover side) that you immediately skip for proving because they exceed your configured limits.

If the intent is to avoid any heavy work (execute + prove) for oversized batches, consider moving the two limit checks ahead of `execute_batch`, e.g.:

```diff
-        // Execute batch
-        match execute_batch(&batch_info).await {
-            Ok(_) => (),
-            Err(e) => {
-                log::error!("execute_batch error: {:?}", e);
-                continue
-            }
-        }
-
-        // Sync & Prove
-        if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
+        // Skip oversized batches before any execution/proving work.
+        if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
             log::warn!("Too many blocks in the latest batch to shadow prove");
             continue;
         }

         if batch_info.total_txn > *SHADOW_PROVING_MAX_TXN {
             log::warn!("Too many txn in the latest batch to shadow prove");
             continue;
         }
+
+        // Execute batch
+        match execute_batch(&batch_info).await {
+            Ok(_) => (),
+            Err(e) => {
+                log::error!("execute_batch error: {:?}", e);
+                continue
+            }
+        }
+
+        // Sync & Prove
         let result = match batch_syncer.sync_batch(batch_info, batch_header).await {
```

If, on the other hand, you intentionally want to execute large batches but only skip proving them, leaving the current order is fine, but it would be good to document that behavior explicitly so future readers don't assume the limits gate all heavy work.
prover/bin/shadow-prove/src/execute.rs (1)
20-35: Fix execute endpoint, avoid `unwrap`, and make RPC configurable

A few issues in `execute_batch`:

1. Endpoint mismatch
   The server exposes an `/execute_batch` route, but this client still calls `"/prove_batch"`. That means the "execute-only" path never hits the new endpoint and will likely run the full proof flow instead of just execution.

2. Hard‑coded RPC in the request body
   `rpc: "http://localhost:8545".to_owned()` makes this binary effectively dev-only. Given you already configure RPCs via env in `main.rs`, it would be better to either:

   - Plumb the relevant RPC string into `execute_batch` and set it here, or
   - Read it from an env var using your existing helpers (e.g., `read_env_var` / `read_parse_env`).

   This keeps the execute path consistent across environments.

3. Avoid `unwrap` on serialization
   `serde_json::to_string(&request).unwrap()` can panic, and there's no strong reason not to surface a proper error via `anyhow`.

You can address (1) and (3) with a local change like:

```diff
 pub async fn execute_batch(batch: &BatchInfo) -> Result<(), anyhow::Error> {
     // Request the proverServer to prove.
-    let request = ExecuteRequest {
-        batch_index: batch.batch_index,
-        start_block: batch.start_block,
-        end_block: batch.end_block,
-        rpc: "http://localhost:8545".to_owned(),
-    };
-    let rt = tokio::task::spawn_blocking(move || {
-        util::call_prover(serde_json::to_string(&request).unwrap(), "/prove_batch")
-    })
-    .await;
+    let request = ExecuteRequest {
+        batch_index: batch.batch_index,
+        start_block: batch.start_block,
+        end_block: batch.end_block,
+        // TODO: inject the actual RPC endpoint from configuration instead of hard-coding.
+        rpc: "http://localhost:8545".to_owned(),
+    };
+
+    let param = serde_json::to_string(&request)
+        .map_err(|e| anyhow!("failed to serialize ExecuteRequest: {e:?}"))?;
+
+    let rt = tokio::task::spawn_blocking(move || {
+        util::call_prover(param, "/execute_batch")
+    })
+    .await;
```

For (2), I'd recommend wiring the concrete RPC source from your existing config/env rather than keeping the literal in this function.
🧹 Nitpick comments (1)
prover/bin/shadow-prove/src/execute.rs (1)
12-16: Consider removing or actively using `ExecuteResult`

`ExecuteResult` is defined but never used in this module. If the client is not deserializing the prover response yet, you can either:

- Remove this type for now, or
- Start parsing the `String` returned by `call_prover` into `ExecuteResult` and act on `error_code` / `error_msg`.

Keeping unused types around makes it harder to see what is actually part of the wire protocol versus future plans.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `prover/bin/shadow-prove/src/execute.rs` (1 hunks)
- `prover/bin/shadow-prove/src/main.rs` (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
prover/bin/shadow-prove/src/main.rs (2)
prover/bin/shadow-prove/src/execute.rs (1)
`execute_batch` (18-36)
prover/bin/shadow-prove/src/util.rs (2)
`read_env_var` (37-41)
`read_parse_env` (43-50)
prover/bin/shadow-prove/src/execute.rs (2)
prover/bin/server/src/execute.rs (1)
`execute_batch` (75-102)
prover/bin/shadow-prove/src/util.rs (1)
call_prover(5-35)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test
- GitHub Check: build
- GitHub Check: Analyze (go)
- GitHub Check: Analyze (rust)
🔇 Additional comments (1)
prover/bin/shadow-prove/src/main.rs (1)
33-35: Startup logging of SHADOW_PROVING_* config looks good

Logging `SHADOW_PROVING_MAX_BLOCK`, `SHADOW_PROVING_MAX_TXN`, and `SHADOW_PROVING_PROVER_RPC` at startup is useful for debugging misconfigurations. This seems reasonable as long as the RPC URL doesn't contain credentials.
Actionable comments posted: 1
🧹 Nitpick comments (2)
prover/bin/server/src/server.rs (2)
58-62: Consider symmetric task spawning for executor and prover.

While `start_executor()` now spawns a background task and returns immediately, `start_prover()` still blocks the main thread. This asymmetry is inconsistent and could confuse maintainers. Consider treating both services uniformly.

Apply this diff to spawn both symmetrically:

```diff
-    // Step3. start executor
-    start_executor().await;
-
-    // Step4. start prover
-    start_prover().await;
+    // Step3. start executor and prover concurrently
+    tokio::join!(
+        tokio::spawn(async {
+            let mut executor = Executor::new(Arc::clone(&EXECUTE_QUEUE)).unwrap();
+            executor.execute_for_queue().await;
+        }),
+        tokio::spawn(async {
+            let mut prover = Prover::new(Arc::clone(&PROVE_QUEUE)).unwrap();
+            prover.prove_for_queue().await;
+        })
+    );
```

Alternatively, keep the separate functions but have `start_prover()` also spawn a task like `start_executor()`.
182-228: Consider extracting shared validation logic to reduce duplication.

The validation logic in `add_execute_req` (lines 184-214) is nearly identical to `add_pending_req` (lines 128-162): empty checks, deserialization, block range validation, MAX_PROVE_BLOCKS enforcement, and RPC URL validation. This duplication increases maintenance burden and the risk of inconsistent updates.

Consider extracting common validation into a generic helper:

```rust
async fn validate_batch_request<T>(param: &str) -> Result<T, String>
where
    T: serde::de::DeserializeOwned + BatchRequest,
{
    if param.is_empty() {
        return Err("request is empty".to_string());
    }

    let request: T =
        serde_json::from_str(param).map_err(|_| "deserialize request failed".to_string())?;

    if request.end_block() < request.start_block() {
        return Err("blocks index error".to_string());
    }

    let blocks_len = request.end_block() - request.start_block() + 1;
    if blocks_len as usize > *MAX_PROVE_BLOCKS {
        return Err(format!(
            "blocks len = {:?} exceeds MAX_PROVE_BLOCKS = {:?}",
            blocks_len, MAX_PROVE_BLOCKS
        ));
    }

    if !request.rpc().starts_with("http://") && !request.rpc().starts_with("https://") {
        return Err("invalid rpc url".to_string());
    }

    Ok(request)
}
```

Define a trait `BatchRequest` with methods `start_block()`, `end_block()`, and `rpc()`, then implement it for both `ProveRequest` and `ExecuteRequest`; a sketch of such a trait follows below.

Note: `add_execute_req` does not call `check_batch_status()` (present in `add_pending_req` at line 164). Verify this omission is intentional for execution requests.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `prover/bin/server/src/server.rs` (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
prover/bin/server/src/server.rs (2)
prover/bin/server/src/queue.rs (1)
`new` (38-44)
prover/bin/server/src/execute.rs (1)
new(35-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: test
- GitHub Check: Analyze (rust)
🔇 Additional comments (3)
prover/bin/server/src/server.rs (3)
2-2: LGTM!

The import additions for `ExecuteRequest` and `Executor` are necessary for the new execution functionality and follow the existing pattern.
42-43: LGTM!

The `EXECUTE_QUEUE` static follows the same pattern as `PROVE_QUEUE` and appropriately uses `Arc<Mutex<>>` for thread-safe shared access.
69-69: LGTM!

The `/execute_batch` route follows the established pattern and integrates cleanly into the prover management service.
```rust
async fn start_executor() {
    tokio::spawn(async {
        let mut executor = Executor::new(Arc::clone(&EXECUTE_QUEUE)).unwrap();
        executor.execute_for_queue().await;
    });
}
```
Add error handling for Executor initialization failure.
The `unwrap()` on Line 103 will panic if `Executor::new()` fails (e.g., invalid `PROVER_L2_RPC` URL). Since this runs in a spawned task, the panic terminates the executor silently without propagating to the main thread, leaving the service in a degraded state.
Apply this diff to log the error and prevent silent failure:
```diff
 async fn start_executor() {
     tokio::spawn(async {
-        let mut executor = Executor::new(Arc::clone(&EXECUTE_QUEUE)).unwrap();
-        executor.execute_for_queue().await;
+        match Executor::new(Arc::clone(&EXECUTE_QUEUE)) {
+            Ok(mut executor) => {
+                executor.execute_for_queue().await;
+            }
+            Err(e) => {
+                log::error!("Failed to initialize executor: {:#?}", e);
+                panic!("Executor initialization failed: {:#?}", e);
+            }
+        }
     });
 }
```

Alternatively, consider checking the initialization before spawning and returning a `Result` so the caller can decide how to handle failure.
🤖 Prompt for AI Agents
In prover/bin/server/src/server.rs around lines 101 to 106, the code calls
Executor::new(...).unwrap() which will panic inside the spawned task and
silently terminate the executor on initialization failure; replace the unwrap
with proper error handling: attempt to construct the Executor with match or if
let Err(e) and log the error (including context and the error e) and avoid
spawning the task when initialization fails; alternatively change start_executor
to return a Result so the caller can handle or propagate initialization errors
before spawning the async task. Ensure no panic occurs and failures are logged.
Actionable comments posted: 1
♻️ Duplicate comments (3)
prover/bin/shadow-prove/src/main.rs (1)
79-125: `latest_processed_batch` update and limit checks can cause missed re‑tries and unnecessary execute calls.

Two related issues in this loop:

Batches may never be re‑proven after a failure.

- You update `latest_processed_batch` immediately after a successful `execute_batch` (Line 105), before `sync_batch` + `prove` succeed.
- If `sync_batch` or `shadow_prover.prove` fails, the next iteration still sees `batch_info.batch_index <= latest_processed_batch` and skips the batch entirely, so it is never retried.

Block/txn limit checks still run after execution.

- `execute_batch` is invoked before checking `SHADOW_PROVING_MAX_BLOCK` and `SHADOW_PROVING_MAX_TXN`.
- Oversized batches will still be sent to `/execute_batch`, even though they're immediately skipped for proving. This mirrors the concern raised in the earlier review.

A safer control flow would be:

- First validate `{end_block - start_block + 1, total_txn}` against the limits and `continue` early when they are exceeded.
- Only after a full successful pipeline (execute if `SHADOW_EXECUTE`, then `sync_batch`, then `prove`) should `latest_processed_batch` be advanced.

Sketch:

```diff
-            // Check if batch has already been processed
-            if batch_info.batch_index <= latest_processed_batch {
-                log::info!("Batch {} has already been processed, skipping", batch_info.batch_index);
-                continue;
-            }
-
-            if *SHADOW_EXECUTE {
-                log::info!(">Start shadow execute batch: {:#?}", batch_info.batch_index);
-                // Execute batch
-                match execute_batch(&batch_info).await {
-                    Ok(_) => {
-                        // Update the latest processed batch index
-                        latest_processed_batch = batch_info.batch_index;
-                    }
-                    Err(e) => {
-                        log::error!("execute_batch error: {:?}", e);
-                        continue
-                    }
-                }
-            }
-
-            // Sync & Prove
-            if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
+            // Check if batch has already been fully processed
+            if batch_info.batch_index <= latest_processed_batch {
+                log::info!("Batch {} has already been processed, skipping", batch_info.batch_index);
+                continue;
+            }
+
+            // Enforce block/txn limits before doing any heavy work
+            if batch_info.end_block - batch_info.start_block + 1 > *SHADOW_PROVING_MAX_BLOCK {
                 log::warn!("Too many blocks in the latest batch to shadow prove");
                 continue;
             }

             if batch_info.total_txn > *SHADOW_PROVING_MAX_TXN {
                 log::warn!("Too many txn in the latest batch to shadow prove");
                 continue;
             }

-            let result = match batch_syncer.sync_batch(batch_info, batch_header).await {
+            // Execute first (optional), then sync & prove
+            if *SHADOW_EXECUTE {
+                log::info!(">Start shadow execute batch: {:#?}", batch_info.batch_index);
+                if let Err(e) = execute_batch(&batch_info).await {
+                    log::error!("execute_batch error: {:?}", e);
+                    continue;
+                }
+            }
+
+            let result = match batch_syncer.sync_batch(batch_info, batch_header).await {
                 Ok(Some(batch)) => shadow_prover.prove(batch).await,
                 Ok(None) => Ok(()),
                 Err(e) => Err(e),
             };
+
+            if result.is_ok() {
+                latest_processed_batch = batch_info.batch_index;
+            }
```

This keeps retries possible when any stage fails and avoids unnecessary execute calls for batches that are going to be skipped on size limits.
prover/bin/shadow-prove/src/shadow_rollup.rs (1)
247-265: RPC failures silently treated as zero transactions (risk undercounting).

This loop:

```rust
for i in start_block..end_block + 1 {
    total_tx_count += self
        .l2_provider
        .get_block_transaction_count_by_number(i.into())
        .await
        .unwrap_or_default()
        .unwrap_or_default();
}
```

treats any RPC error or `None` result as `0` via two `unwrap_or_default` calls. That can undercount `total_tx_count`, letting batches that exceed `SHADOW_PROVING_MAX_TXN` slip through the limit checks in `main.rs`.

Prefer handling these cases explicitly (and at least logging) instead of silently defaulting:

```diff
 let mut total_tx_count: u64 = 0;
 for i in start_block..end_block + 1 {
-    total_tx_count += self
-        .l2_provider
-        .get_block_transaction_count_by_number(i.into())
-        .await
-        .unwrap_or_default()
-        .unwrap_or_default();
+    match self
+        .l2_provider
+        .get_block_transaction_count_by_number(i.into())
+        .await
+    {
+        Ok(Some(count)) => total_tx_count += count,
+        Ok(None) => {
+            log::warn!("No tx count for block {}", i);
+        }
+        Err(e) => {
+            log::warn!("Failed to get tx count for block {}: {:?}", i, e);
+            // Optionally: return None here to force the caller to skip this batch.
+        }
+    }
 }
```

This keeps the metrics honest and makes it easier to detect RPC issues.
prover/bin/shadow-prove/src/execute.rs (1)
18-35: Avoid hardcoding L2 RPC URL in ExecuteRequest.

`rpc` is currently fixed to `"http://localhost:8545"`, which makes `execute_batch` non-configurable and diverges from the rest of the binary that reads RPCs from env (e.g., `SHADOW_PROVING_L2_RPC` in `main.rs`). This will break in non-local setups or when the L2 endpoint changes.

Also, the comment still refers to "prove" even though this path is for execute.

Consider something along these lines for consistency:

```diff
-use crate::{util, BatchInfo};
+use crate::{util, BatchInfo};
+use crate::util::read_parse_env; // or a dedicated Lazy<String> similar to SHADOW_PROVING_PROVER_RPC

 pub async fn execute_batch(batch: &BatchInfo) -> Result<(), anyhow::Error> {
-    // Request the proverServer to prove.
+    // Request the prover server to execute the batch (no proof).
     let request = ExecuteRequest {
         batch_index: batch.batch_index,
         start_block: batch.start_block,
         end_block: batch.end_block,
-        rpc: "http://localhost:8545".to_owned(),
+        rpc: read_parse_env("SHADOW_PROVING_L2_RPC"),
     };
```

(If you prefer not to call `read_parse_env` here, an env-backed `Lazy<String>` for the L2 RPC, similar to `SHADOW_PROVING_PROVER_RPC`, would also work; see the sketch below.)
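A sketch of that env-backed alternative, modeled on the existing `SHADOW_PROVING_PROVER_RPC` definition; the `SHADOW_PROVING_L2_RPC` variable name is an assumption:

```rust
use once_cell::sync::Lazy;
use std::env;

// Panics at first use if the variable is unset, matching the existing
// SHADOW_PROVING_PROVER_RPC behavior.
static SHADOW_PROVING_L2_RPC: Lazy<String> = Lazy::new(|| {
    env::var("SHADOW_PROVING_L2_RPC").expect("Cannot detect SHADOW_PROVING_L2_RPC env var")
});

fn main() {
    println!("L2 RPC: {}", *SHADOW_PROVING_L2_RPC);
}
```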
🧹 Nitpick comments (3)
prover/bin/shadow-prove/src/lib.rs (1)
20-31: Tighten SHADOW_PROVING_PROVER_RPC error messaging for clarity.

The env var name and panic message are slightly inconsistent (`"SHADOW_PROVING_PROVER_RPC"` vs `"PROVER_RPC"` in the expect message). Consider aligning them to make misconfiguration easier to diagnose and avoid confusion when this panics at startup.

```diff
-pub static SHADOW_PROVING_PROVER_RPC: Lazy<String> =
-    Lazy::new(|| var("SHADOW_PROVING_PROVER_RPC").expect("Cannot detect PROVER_RPC env var"));
+pub static SHADOW_PROVING_PROVER_RPC: Lazy<String> = Lazy::new(|| {
+    var("SHADOW_PROVING_PROVER_RPC")
+        .expect("Cannot detect SHADOW_PROVING_PROVER_RPC env var")
+});
```

prover/bin/shadow-prove/src/main.rs (1)
33-35: Be cautious logging full prover RPC URL (possible secret leakage).
`SHADOW_PROVING_PROVER_RPC` may embed credentials or API keys in the URL (e.g., Infura-style endpoints). Logging it verbatim can leak secrets into logs.

If that's a concern in your deployment, consider either omitting this log or redacting sensitive parts before printing.
prover/bin/shadow-prove/src/shadow_rollup.rs (1)
57-58: Error in `is_prove_success` currently causes batches to be treated as already proved.

In `sync_batch`:

```rust
if self.is_prove_success(batch_info.batch_index).await.unwrap_or(true) {
    log::info!("batch of {:?} already prove state successful", batch_info.batch_index);
    return Ok(None);
};
```

and in `is_prove_success`:

```rust
Err(e) => {
    log::info!(/* ... */);
    return None;
}
```

If the `isProveSuccess` call fails (RPC error, contract issue, etc.), `is_prove_success` returns `None`, and `unwrap_or(true)` interprets that as "already proved" and skips syncing. That makes transient errors indistinguishable from "already proved" and can cause batches to be skipped indefinitely.

A more conservative handling would be to treat errors as "not proved yet" or to bubble them up:

```diff
-    if self.is_prove_success(batch_info.batch_index).await.unwrap_or(true) {
+    match self.is_prove_success(batch_info.batch_index).await {
+        Some(true) => {
+            log::info!(
+                "batch of {:?} already prove state successful",
+                batch_info.batch_index
+            );
+            return Ok(None);
+        }
+        Some(false) => {}
+        None => {
+            // Either log and continue trying, or return Err to let the caller decide.
+            log::warn!(
+                "Failed to query prove state for batch {:?}, will retry later",
+                batch_info.batch_index
+            );
+            return Ok(None);
+        }
+    }
```

and optionally adjust `is_prove_success` to log at `warn` / `error` level on failures.

Also applies to: 269-283
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `prover/bin/shadow-prove/src/execute.rs` (1 hunks)
- `prover/bin/shadow-prove/src/lib.rs` (2 hunks)
- `prover/bin/shadow-prove/src/main.rs` (3 hunks)
- `prover/bin/shadow-prove/src/shadow_rollup.rs` (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
prover/bin/shadow-prove/src/lib.rs (2)
prover/bin/shadow-prove/src/util.rs (1)
`read_env_var` (37-41)
prover/bin/server/src/lib.rs (1)
read_env_var(24-28)
prover/bin/shadow-prove/src/execute.rs (2)
prover/bin/server/src/execute.rs (1)
`execute_batch` (75-102)
prover/bin/shadow-prove/src/util.rs (1)
call_prover(5-35)
prover/bin/shadow-prove/src/main.rs (2)
prover/bin/shadow-prove/src/execute.rs (1)
`execute_batch` (18-36)
prover/bin/server/src/execute.rs (1)
execute_batch(75-102)
prover/bin/shadow-prove/src/shadow_rollup.rs (4)
oracle/oracle/batch.go (1)
`BatchInfo` (32-39)
node/derivation/batch_info.go (1)
`BatchInfo` (48-63)
bindings/hardhat/types.go (1)
`Log` (49-58)
tx-submitter/iface/client.go (1)
Client(14-23)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test
- GitHub Check: build
- GitHub Check: Analyze (rust)
- GitHub Check: Analyze (go)
```rust
let (blocks, total_txn_count) =
    match self.batch_blocks_inspect(prev_tx_hash, batch_index_hash.1).await {
        Some(block_txn) => block_txn,
        None => return Err(String::from("batch_blocks_inspect none")),
    };

if blocks.0 >= blocks.1 {
    return Err(String::from("blocks is empty"));
}

if batch_index == 0 {
    return Err(String::from("batch_index is 0"));
}
let (blocks, total_txn_count) =
    match batch_blocks_inspect(l1_rollup, l2_provider, batch_index).await {
        Some(block_txn) => block_txn,
        None => return Err(String::from("batch_blocks_inspect none")),
    };

let batch_info: BatchInfo = BatchInfo {
    batch_index: batch_index_hash.0,
    start_block: blocks.0,
    end_block: blocks.1,
    total_txn: total_txn_count,
};
```
Block range validation still rejects valid single‑block batches.
Here:
```rust
let (blocks, total_txn_count) = /* ... */;
if blocks.0 >= blocks.1 {
    return Err(String::from("blocks is empty"));
}
```

Given `blocks` is `(start_block, end_block)` and `batch_blocks_inspect` iterates `for i in start_block..end_block + 1`, a range with `start_block == end_block` clearly represents a single block and should be valid. With `>=`, single‑block ranges are treated as "empty" and cause `get_committed_batch` to return an error.
This is effectively the same inversion that was flagged before; the check should reject only ranges where start_block > end_block:
- if blocks.0 >= blocks.1 {
+ if blocks.0 > blocks.1 {
return Err(String::from("blocks is empty"));
}🤖 Prompt for AI Agents
In prover/bin/shadow-prove/src/shadow_rollup.rs around lines 198 to 213 the
current validation uses `if blocks.0 >= blocks.1 { return Err("blocks is empty")
}`, which incorrectly rejects valid single-block batches; change the condition
to `if blocks.0 > blocks.1` so only ranges where start_block is greater than
end_block are rejected, and update the error message to something like "invalid
block range: start_block > end_block" (or similar) to reflect the actual failure
case.
Actionable comments posted: 1
♻️ Duplicate comments (2)
prover/bin/shadow-prove/src/shadow_rollup.rs (2)
204-206: Fix the block range validation to allow single-block batches.

The condition `blocks.0 >= blocks.1` incorrectly rejects valid single-block batches where `start_block == end_block`. Given that line 248 iterates `for i in start_block..end_block + 1`, a range with `start_block == end_block` represents a single block and should be valid.

Apply this diff to fix the validation:

```diff
-    if blocks.0 >= blocks.1 {
-        return Err(String::from("blocks is empty"));
+    if blocks.0 > blocks.1 {
+        return Err(String::from("invalid block range: start_block > end_block"));
     }
```
247-255: Fix silent error handling for transaction count retrieval.

Using `.unwrap_or_default().unwrap_or_default()` silently treats RPC failures as 0 transactions, which could result in `total_tx_count` being undercounted. This may allow batches exceeding `SHADOW_PROVING_MAX_TXN` to incorrectly pass validation.

Apply this diff to handle errors explicitly:

```diff
 let mut total_tx_count: u64 = 0;
 for i in start_block..end_block + 1 {
-    total_tx_count += self
-        .l2_provider
-        .get_block_transaction_count_by_number(i.into())
-        .await
-        .unwrap_or_default()
-        .unwrap_or_default();
+    match self
+        .l2_provider
+        .get_block_transaction_count_by_number(i.into())
+        .await
+    {
+        Ok(Some(count)) => total_tx_count += count,
+        Ok(None) => {
+            log::error!("No tx count for block {}", i);
+            return None;
+        }
+        Err(e) => {
+            log::error!("Failed to get tx count for block {}: {:?}", i, e);
+            return None;
+        }
+    }
 }
```
🧹 Nitpick comments (1)
prover/bin/shadow-prove/src/shadow_rollup.rs (1)
82-109: Consider validating batch header length.

The code uses `.unwrap_or_default()` extensively when slicing `batch_header`. If `batch_header` is malformed or too short, fields will silently default to zero values, potentially causing downstream issues.

Add an early length check:

```diff
+    if batch_header.len() < 249 {
+        log::error!("batch_header too short: {} bytes", batch_header.len());
+        return Err(anyhow::anyhow!("Invalid batch header length"));
+    }
+
     let batch_store = ShadowRollup::BatchStore {
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `prover/bin/shadow-prove/src/shadow_rollup.rs` (6 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: build
- GitHub Check: test
- GitHub Check: Analyze (rust)
- GitHub Check: Analyze (go)
🔇 Additional comments (4)
prover/bin/shadow-prove/src/shadow_rollup.rs (4)
50-54: LGTM! Good refactoring.

Making `sync_batch` accept explicit `batch_info` and `batch_header` parameters improves testability and makes the function's dependencies clear.
269-283: LGTM!

The `is_prove_success` function correctly returns `Option<bool>` and logs errors when the RPC call fails. The error handling issue is in the caller at line 57, not here.
286-320: LGTM!

The `batch_input_inspect` function has proper error handling, logging, and returns `None` on failures. The signature change to return `(Bytes, u64)` aligns with the new batch inspection workflow.
364-366: LGTM!

Test updates correctly use the new API flow: first fetching the committed batch via `get_committed_batch()`, then passing the batch info explicitly to `sync_batch()`.
```rust
if self.is_prove_success(batch_info.batch_index).await.unwrap_or(true) {
    log::info!("batch of {:?} already prove state successful", batch_info.batch_index);
    return Ok(None);
};
```
Fix the fallback logic for RPC failures.
Using `.unwrap_or(true)` means that if the RPC call to check prove state fails, the function assumes the batch is already proven and skips it. This is backwards: RPC failures should not prevent valid batches from being proven.
Apply this diff to treat RPC failures as "not proven" so the batch is processed:
```diff
-    if self.is_prove_success(batch_info.batch_index).await.unwrap_or(true) {
+    if self.is_prove_success(batch_info.batch_index).await.unwrap_or(false) {
         log::info!("batch of {:?} already prove state successful", batch_info.batch_index);
         return Ok(None);
     };
```

Alternatively, handle the None case explicitly by returning an error:
```diff
-    if self.is_prove_success(batch_info.batch_index).await.unwrap_or(true) {
-        log::info!("batch of {:?} already prove state successful", batch_info.batch_index);
-        return Ok(None);
-    };
+    match self.is_prove_success(batch_info.batch_index).await {
+        Some(true) => {
+            log::info!("batch of {:?} already prove state successful", batch_info.batch_index);
+            return Ok(None);
+        }
+        Some(false) => {
+            // Continue processing
+        }
+        None => {
+            log::error!("Failed to query prove state for batch {:?}", batch_info.batch_index);
+            return Err(anyhow::anyhow!("Failed to query prove state"));
+        }
+    }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
if self.is_prove_success(batch_info.batch_index).await.unwrap_or(false) {
    log::info!("batch of {:?} already prove state successful", batch_info.batch_index);
    return Ok(None);
};
```
🤖 Prompt for AI Agents
In prover/bin/shadow-prove/src/shadow_rollup.rs around lines 57 to 60, the RPC
failure fallback uses .unwrap_or(true) which incorrectly treats RPC errors as
"already proven" and skips processing; change the logic to treat RPC failures as
"not proven" by replacing .unwrap_or(true) with .unwrap_or(false) so the batch
will be processed when the RPC check fails, or alternatively propagate the error
by mapping the Option/Result to an Err and returning it instead of silently
defaulting.
Summary by CodeRabbit
New Features
Tests / Fixtures
Chores