
feat: merge-train/spartan #22980

Open

AztecBot wants to merge 25 commits into next from merge-train/spartan

Conversation


@AztecBot AztecBot commented May 6, 2026

BEGIN_COMMIT_OVERRIDE
fix(test): warp L1 forward when proposer scan hits EpochNotStable (#22967)
test(e2e): fail epochs tests on proposer-rollup-check-failed (#22965)
fix: grafana switch to aztec_status="proposed" (#22978)
chore: update benchmark scraper (#22984)
test(e2e): migrate simple epoch tests to pipelining (#22973)
chore: remove top-level yarn.lock (#22987)
refactor(archiver)!: unify L2BlockSource checkpoint lookups via query objects (#22933)
fix(sequencer): bounded sweep instead of event scan for governance proposal check (#22989)
fix(docs): allow webapp-tutorial yarn install to populate empty lockfile in CI (#23000)
test(e2e): enable pipelining in l1-reorgs and mbps redistribution tests (#23009)
fix(archiver): restore pending block height metric under pipelining (#22994)
chore(p2p): remove skipped validation result option (#23034)
refactor(p2p)!: remove slow tx collection flow (#22878)
chore(spartan): add next-net-clone environment config (#22995)
END_COMMIT_OVERRIDE

spalladino added 2 commits May 6, 2026 08:54
fix(test): warp L1 forward when proposer scan hits EpochNotStable (#22967)

## Motivation

The `e2e_epochs/epochs_missed_l1_publish` test fails intermittently when
its proposer-discovery scan looks too far into the future. The L1 rollup
contract reverts with `ValidatorSelection__EpochNotStable` for any epoch
whose randao sample timestamp is still ahead of `block.timestamp`, and
the test was scanning up to 60 slots (~15 epochs at the test's epoch
duration) ahead, well past the queryable horizon.

## Approach

Wrap the proposer scan in a retry loop that catches `EpochNotStable`,
warps L1 forward by one epoch, and re-queries the same candidate. After
each warp the scan also re-anchors the candidate to keep the +4 slot
margin from the new "now", so subsequent steps (the warp to `slotZero`
and sequencer start-up) still have headroom.
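
A minimal sketch of that retry shape (helper names such as `pickCandidate`, `warpL1`, `isEpochNotStable`, and `EPOCH_SECONDS` are placeholders, not the test's actual API):

```ts
// Sketch of the retry loop described above; all helpers are illustrative.
let candidate = pickCandidate(); // anchored at now + 4 slots of margin
for (;;) {
  try {
    await rollup.getProposerAt(candidate); // reverts while the epoch is not stable
    break; // stable: continue with the warp to slotZero and sequencer start-up
  } catch (err) {
    if (!isEpochNotStable(err)) throw err;
    await cheatCodes.warpL1(EPOCH_SECONDS); // move past the randao sample timestamp
    candidate = pickCandidate(); // re-anchor to keep the +4 slot margin from the new "now"
  }
}
```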

## Changes

- **end-to-end (tests)**: Replace the bounded `for` loop in
`epochs_missed_l1_publish.test.ts` with a try/catch retry that warps L1
on `EpochNotStable`.

These sequencer errors were previously ignored in some tests. That allowance is removed, since the error should not happen; when it does, it is cause for analysis.

socket-security Bot commented May 6, 2026

Review the following changes in direct dependencies. Learn more about Socket for GitHub.

| Diff | Package | Supply Chain Security | Vulnerability | Quality | Maintenance | License |
| --- | --- | --- | --- | --- | --- | --- |
| Added | npm/@types/node@20.19.39 | 100 | 100 | 81 | 94 | 100 |

View full report

spalladino and others added 7 commits May 6, 2026 16:12
refactor(archiver)!: unify L2BlockSource checkpoint lookups via query objects (#22933)

## Motivation

Clean up the checkpoint side of `L2BlockSource`. PR #22809 already
collapsed the block-side API into 4 query-shaped methods over 2 return
types; the checkpoint surface was left with the pre-refactor sprawl (9
narrow methods over 4 return shapes, parallel by-number / by-range /
by-epoch entrypoints, and a wire-level alias that conflated proposed and
confirmed checkpoints). This change applies the same simplification.

Fixes A-979

## Approach

`L2BlockSource` checkpoint methods reduce to 4 query-shaped readers
(`getCheckpoint`, `getCheckpoints`, `getCheckpointData`,
`getCheckpointsData`) over 2 return shapes (`PublishedCheckpoint`,
`CheckpointData`), plus a polymorphic
`getProposedCheckpointData(query?)` for the proposed-only path. Three
new query types live next to `BlockQuery`/`BlocksQuery`. On-disk format
and `BlockStore` primitives are unchanged — the simplification is at the
API boundary. The public RPC's `getCheckpoint` keeps the same wire
signature but gains a confirmed→proposed fallback (for
`{number}`/`{slot}`/`'proposed'` lookups) and `BadRequestError` guards
for incompatible `include*` flags.

## API surface change

### Methods removed from `L2BlockSource`

`getCheckpoints(from, limit)`, `getCheckpointData(n)`,
`getCheckpointDataRange(from, limit)`, `getCheckpointsForEpoch(epoch)`,
`getCheckpointsDataForEpoch(epoch)`, `getCheckpointNumberBySlot(slot)`,
`getLastCheckpoint()`, `getLastProposedCheckpoint()`. Dead methods on
`data_source_base` also removed: `getCheckpointHeader`,
`getLastBlockNumberInCheckpoint`, `getSynchedCheckpointNumber`.

### Methods added to `L2BlockSource`

```ts
getCheckpoint(query: CheckpointQuery): Promise<PublishedCheckpoint | undefined>
getCheckpoints(query: CheckpointsQuery): Promise<PublishedCheckpoint[]>
getCheckpointData(query: CheckpointQuery): Promise<CheckpointData | undefined>
getCheckpointsData(query: CheckpointsQuery): Promise<CheckpointData[]>
getProposedCheckpointData(query?: ProposedCheckpointQuery): Promise<ProposedCheckpointData | undefined>

type CheckpointQuery         = { number } | { slot } | { tag: 'checkpointed' | 'proven' | 'finalized' }
type CheckpointsQuery        = { from, limit } | { epoch }
type ProposedCheckpointQuery = { number } | { slot } | { tag: 'proposed' }
```
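
For illustration, call sites under the new surface look roughly like this (argument types, e.g. number vs bigint for slots and epochs, are assumptions rather than taken from the diff):

```ts
// Illustrative call sites for the new query-shaped readers.
const confirmed = await source.getCheckpoint({ number: 42 });
const atSlot = await source.getCheckpointData({ slot: 1337 });
const finalized = await source.getCheckpoint({ tag: 'finalized' });

// Ranges and epochs share one reader, discriminated by the query shape.
const range = await source.getCheckpoints({ from: 10, limit: 5 });
const epochData = await source.getCheckpointsData({ epoch: 3 });

// Proposed-only path: polymorphic over number, slot, or the proposed tip.
const tip = await source.getProposedCheckpointData({ tag: 'proposed' });
const proposedAt = await source.getProposedCheckpointData({ number: 43 });
```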

### Public RPC (`AztecNode`) wire-level changes

- `getCheckpointsDataForEpoch(epoch)` removed;
`getCheckpointsData(query: CheckpointsQuery)` added (range or epoch).
- `'latest'` removed from `CheckpointParameter`.
- `'proposed'` semantics changed: previously aliased to "latest
L1-confirmed checkpoint" (a documented foot-gun); now
`getCheckpoint('proposed')` strictly targets the proposed-checkpoint
store, and `getCheckpointNumber('proposed')` returns the proposed-tip
number with confirmed fallback.
- `getCheckpoint({ number }) / ({ slot })` now check confirmed first
then fall back to proposed; tag-based lookups (`'checkpointed'` /
`'proven'` / `'finalized'`) do not fall back.
- `getCheckpoint('proposed', ...)` with `includeL1PublishInfo: true` or
`includeAttestations: true`, and the same flags on a by-number/by-slot
lookup that resolves to a proposed entry, now throw `BadRequestError`
(proposed checkpoints have no L1 publish info or attestations); the
sketch after this list illustrates the flow.
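
Putting the fallback and guard rules together, the node-side flow is roughly the following (a paraphrase, not the exact source; helper names other than `projectProposedToCheckpointResponse`, which appears in the Changes below, are illustrative):

```ts
// Paraphrased control flow for the public RPC's getCheckpoint.
async function getCheckpoint(param: CheckpointParameter, opts?: CheckpointOptions) {
  if (param === 'checkpointed' || param === 'proven' || param === 'finalized') {
    return source.getCheckpoint({ tag: param }); // tag lookups never fall back
  }
  if (param === 'proposed') {
    assertNoConfirmedOnlyFlags(opts); // BadRequestError on include* flags
    const proposed = await source.getProposedCheckpointData();
    return proposed ? projectProposedToCheckpointResponse(proposed) : undefined;
  }
  // { number } or { slot }: confirmed first, then fall back to proposed.
  const confirmed = await source.getCheckpoint(param);
  if (confirmed) return confirmed;
  const proposed = await source.getProposedCheckpointData(param);
  if (!proposed) return undefined;
  assertNoConfirmedOnlyFlags(opts); // proposed entries have no publish info or attestations
  return projectProposedToCheckpointResponse(proposed);
}
```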

### Types kept

`CheckpointData`, `CommonCheckpointData` (structural base of
`CheckpointData` / `ProposedCheckpointInput`), `ProposedCheckpointData`,
`ProposedCheckpointInput`, `PublishedCheckpoint`, `Checkpoint`. No
structural-type deletions.

Migration guidance for wallet/SDK consumers is in
`docs/docs-developers/docs/resources/migration_notes.md`.

## Changes

- **stdlib**: New query types (`CheckpointQuery`, `CheckpointsQuery`,
`ProposedCheckpointQuery`) + Zod schemas in `block/l2_block_source.ts`.
`'latest'` literal removed from `interfaces/checkpoint_parameter.ts`.
`NormalizedCheckpointDispatch` type for the server's parameter
normalizer. `ArchiverApiSchema` and `AztecNode` schema updated.
`computeL2ToL1MembershipWitness` switched to the new query shape.
- **archiver**: `data_source_base` adds `resolveCheckpointQuery` /
`resolveCheckpointsQuery` mirroring the block-side helpers, implements
the 4 confirmed methods plus the polymorphic proposed lookup.
`BlockStore` adds `getProposedCheckpointBySlot(slot)`. `MockArchiver`
and `mock_l2_block_source` updated to match the new interface.
- **aztec-node**: `server.ts` adds the confirmed→proposed fallback flow
with the two `BadRequestError` guards in `getCheckpoint`, sources all
tips from a single `getL2Tips()` call in `getCheckpointNumber`, and
routes the public RPC through the new internal methods. New
pure-projection helper `projectProposedToCheckpointResponse` in
`block_response_helpers.ts`.
- **consumer migrations**: prover-node (collapses two checkpoint fetches
into one `getCheckpoints({ epoch })`), world-state, slasher, sequencer
(`checkpoint_proposal_job`, `sequencer`), validator
(`proposal_handler`), `L2BlockStream`, pxe `block_stream_source`,
telemetry wrapper, and 10 e2e files updated to the new query shapes.
- **tests**: 48 new `it()` blocks covering each query discriminant, the
throw guards, the confirmed→proposed fallback, the polymorphic
`getProposedCheckpointData` dispatch, and
`BlockStore.getProposedCheckpointBySlot`.
- **docs**: `migration_notes.md` updated with the breaking changes for
downstream wallet/SDK consumers.
fix(sequencer): bounded sweep instead of event scan for governance proposal check (#22989)

## Motivation

`hasPayloadBeenProposed` (now `hasActiveProposalWithPayload`) used
`eth_getLogs` over the rollup's full L1 deployment range to find prior
`PayloadSubmitted` events. On long-lived rollups that range exceeds
typical RPC provider block-range caps and the call times out, silently
breaking the sequencer's "stop signaling for an already-proposed
payload" logic. The previous in-memory cache also permanently
blacklisted any payload it saw as proposed once, which is wrong: each
round on `EmpireBase` is independent and the same payload can
legitimately be re-signaled and re-submitted after a prior proposal
becomes Dropped/Rejected/Expired/Executed.

## Approach

Replace the log scan with a bounded view-call sweep over
`Governance.proposals`. The sweep walks newest -> oldest using
`proposalCount`, unwraps each proposal's `GSEPayload` via
`getOriginalPayload()`, and treats only
`Pending`/`Active`/`Queued`/`Executable` as "in an active proposal" --
terminal states allow re-signaling. The descent has a hard early-stop on
the protocol-wide proposal lifetime cap (`4 *
ConfigurationLib.TIME_UPPER = 360 days`), which is safe regardless of
per-proposal frozen configs because every config field is bounded by
`TIME_UPPER` on-chain. Two in-memory caches absorb the per-call cost
over time: terminal proposals (provably immutable on-chain) and wrapper
-> original payload unwraps (immutable bytecode).
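
In outline, the sweep looks like the following sketch (identifiers and the timestamp source are illustrative; the real logic lives on `ReadOnlyGovernanceContract` as described in the Changes below):

```ts
// Outline of the bounded newest -> oldest sweep; names are illustrative.
const ACTIVE_STATES = new Set(['Pending', 'Active', 'Queued', 'Executable']);
const MAX_PROPOSAL_LIFETIME_SECONDS = 4n * TIME_UPPER; // 360 days

async function hasActiveProposalWithPayload(payload: string): Promise<boolean> {
  const count = await governance.getProposalCount();
  const now = BigInt(Math.floor(Date.now() / 1000));
  for (let id = count - 1n; id >= 0n; id--) {
    const proposal = await getProposalCached(id); // terminal proposals cached forever
    // Hard early stop: older than the lifetime cap implies terminal,
    // regardless of the proposal's frozen per-round config.
    if (now - proposal.creationTime > MAX_PROPOSAL_LIFETIME_SECONDS) break;
    if (!ACTIVE_STATES.has(proposal.state)) continue;
    // Unwrap GSEPayload (cached: deployed bytecode is immutable). proposeWithLock
    // proposals have no wrapper; the unwrap reverts and they are skipped.
    const original = await getOriginalPayloadCached(proposal.payload).catch(() => undefined);
    if (original?.toLowerCase() === payload.toLowerCase()) return true;
  }
  return false;
}
```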

## Changes

- **ethereum/contracts/governance**: New
`hasActiveProposalWithPayload(payload)` and `getProposalCount()` on
`ReadOnlyGovernanceContract`. Inlines a minimal `IProposerPayload` ABI
(just `getOriginalPayload`) to avoid generating a full artifact. Handles
`proposeWithLock`-style proposals (no GSEPayload wrapper) by catching
the unwrap revert and skipping.
- **ethereum/contracts/governance (types)**: Adds explicit types
(`Proposal`, `ProposalConfiguration`, `GovernanceConfiguration`,
`ProposeWithLockConfiguration`, `Ballot`) and maps the viem return
shapes of `getProposal` / `getConfiguration` onto them. `Proposal` now
carries both `cachedState` (raw stored) and `state` (live, time-derived
from `getProposalState`); `getProposal` issues both reads in parallel so
callers don't need a separate state RPC.
- **ethereum/contracts/governance (caching)**: Adds two memoization
layers on `ReadOnlyGovernanceContract`. Proposals are cached when
`state` is in any of the four terminal phases
(Executed/Rejected/Dropped/Expired) -- once terminal the entire struct
is provably immutable on-chain. Wrapper unwraps are keyed by wrapper
address and cached forever (deployed bytecode is immutable).
`GovernanceProposerContract` already memoizes its `getGovernance()`, so
the same `ReadOnlyGovernanceContract` instance (and its caches) is
reused across slots in the sequencer publisher.
- **ethereum/contracts/governance_proposer**: Drops the event-based
`hasPayloadBeenProposed`. Adds a memoized `getGovernance()` accessor and
a thin `hasActiveProposalWithPayload` delegate that resolves the
Governance address via the on-chain registry lookup.
- **ethereum/contracts/empire_base**: Removes `hasPayloadBeenProposed`
from `IEmpireBase` -- it's a Governance concern, not a generic empire
concern (slasher doesn't need it).
- **sequencer-client/publisher**: Removes the permanent
`payloadProposedCache` so the publisher re-checks every slot, allowing
re-signaling once a prior proposal is terminal. Switches the failure
mode from fail-closed to fail-open (a flaky L1 endpoint should not
silence governance participation; a duplicate signal is harmless).
Narrows the helper's `base` param from `IEmpireBase` to
`GovernanceProposerContract` since this code path is governance-only.
- **ethereum/contracts (tests)**: New `hasActiveProposalWithPayload`
describe block hitting a real anvil-deployed Governance. Impersonates
the `governanceProposer`, calls `Governance.propose` directly, and
etches hand-rolled mock wrapper bytecode at chosen addresses to drive
(wrapper, original) pairs. Covers: empty governance, live match, no
match, terminal state via warp, reverting wrapper
(proposeWithLock-style), descent past unrelated proposals,
case-insensitive match, and the 360-day hard cutoff via warp. Also adds
a sync-guard describe block that probes `Governance.updateConfiguration`
via impersonated `eth_call` to assert each of
`votingDelay`/`votingDuration`/`executionDelay`/`gracePeriod` accepts
`TIME_UPPER` and rejects `TIME_UPPER + 1` -- if those caps change
on-chain, this trips and `MAX_PROPOSAL_LIFETIME_SECONDS` must be
revisited.
- **sequencer-client/publisher (tests)**: Replaces the cache test with a
"re-checks each call so re-signaling resumes after terminal" test.
Updates the RPC-failure semantics test from fail-closed to fail-open.
fix(docs): allow webapp-tutorial yarn install to populate empty lockfile in CI (#23000)

## Summary

Fixes the `docs` build failure on `merge-train/spartan` (CI run
[25449092262](https://github.com/AztecProtocol/aztec-packages/actions/runs/25449092262),
log [27a4351a1e5e3568](http://ci.aztec-labs.com/27a4351a1e5e3568)).

## Problem

`validate-webapp-tutorial` in `docs/examples/bootstrap.sh` intentionally
starts each run with an empty `yarn.lock`, then runs `yarn install` to
populate it from the `link:` paths it just wrote into `package.json`. In
CI, Yarn 4 auto-enables `--immutable` when it detects `CI=1`, so the
install fails with `YN0028 (frozen lockfile exception)` because
populating an empty lockfile counts as modifying it.

```
➤ YN0028: │ The lockfile would have been modified by this install, which is explicitly forbidden.
➤ YN0000: · Failed with errors in 6s 829ms
ERROR: Contract artifact not found at /home/aztec-dev/aztec-packages/docs/target/pod_racing_contract-PodRacing.json
```

(The "Contract artifact not found" line is a downstream symptom — the
script doesn't run with `set -e`, so after `yarn install` fails it
continues into the artifact check and reports a misleading error.)

## Fix

Set `YARN_ENABLE_IMMUTABLE_INSTALLS=false` for that one `yarn install`
call, since populating the lockfile is the intended behaviour.

## Verification

Reproduced locally: `CI=true yarn install` against the webapp-tutorial
fails with `YN0028`; with `YARN_ENABLE_IMMUTABLE_INSTALLS=false` it
succeeds.

ClaudeBox log: https://claudebox.work/s/a1863de35053b544?run=1
@spalladino spalladino requested a review from a team as a code owner May 6, 2026 18:24

@ludamad ludamad left a comment


🤖 Auto-approved

@AztecBot AztecBot added this pull request to the merge queue May 6, 2026

AztecBot commented May 6, 2026

🤖 Auto-merge enabled after 4 hours of inactivity. This PR will be merged automatically once all checks pass.

@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks May 6, 2026
AztecBot and others added 6 commits May 7, 2026 04:17
fix(archiver): restore pending block height metric under pipelining (#22994)

## Motivation

The `aztec.archiver.block_height` series with no status attribute
(rendered as the "Pending chain" line on the network, prover, and
fisherman Grafana dashboards) stopped being published a couple of weeks
ago. With pipelining enabled every checkpoint arriving from L1 already
has its blocks in the proposed store, so the L1 synchronizer always took
the new promotion fast path introduced in #22716, leaving
`checkpointsToAdd` empty and skipping the metric call.

## Approach

Record the checkpointed block-height metrics across all valid
checkpoints in the batch instead of only the ones routed through
`addCheckpoints`, so the promoted checkpoint contributes too. The
duration is averaged over the full batch since `addCheckpoints` performs
the work for both paths in a single transaction.

## Changes

- **archiver (`l1_synchronizer.ts`)**: Move the
`processNewCheckpointedBlocks` call to use `validCheckpoints` rather
than `checkpointsToAdd`, restoring the empty-status `block_height`,
`checkpoint_height`, `sync_block_count`, and `sync_per_checkpoint`
series under pipelining.

---------

Co-authored-by: Alex Gherghisan <alexghr@users.noreply.github.com>
It was only there for historical reasons, and no production validator could return "skipped".
See #22118.
@fcarreiro fcarreiro requested a review from IlyasRidhuan as a code owner May 7, 2026 12:51
fcarreiro and others added 5 commits May 7, 2026 10:17
Given the current block building, validation and proving architecture, it is expected that nodes will always have to request TXs via the fast flow. This PR removes the slow flow and makes handling of mined L2 blocks use the fast flow.

Closes https://linear.app/aztec-labs/issue/A-1012/tx-collection-remove-slow-flow .
## Summary

- Adds a new spartan environment file
(`environments/next-net-clone.env`) for cloning the next-net deployment
configuration.
- Adds the corresponding `!environments/next-net-clone.env` allow-line
to `spartan/.gitignore` so the file isn't ignored.

## Context

Split out from the broader optimistic-proving work (#22990) so it can
land independently. Pure config; no code changes.