eth/consensus: implement eccpow consensus engine #10
Open
mmingyeomm wants to merge 3262 commits into cryptoecc:worldland from
Conversation
…dRoot (#33486) Fix #33390 `setHeadBeyondRoot` was failing to invalidate finalized blocks because it compared against the original head instead of the rewound root. This fix updates the comparison to use the post-rewind block number, preventing the node from reporting a finalized block that no longer exists. Also added relevant test cases for it.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com> Co-authored-by: Csaba Kiraly <csaba.kiraly@gmail.com>
Adds missing trienode freezer case to InspectFreezerTable, making it consistent with InspectFreezer which already supports it. Co-authored-by: m6xwzzz <maskk.weller@gmail.com>
Add NewPayloadEvent to track engine API newPayload block processing times and report them to ethstats. This enables monitoring of block processing performance. https://notes.ethereum.org/@savid/block-observability related: #33231 --------- Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Check out https://hackmd.io/dg7rizTyTXuCf2LSa2LsyQ for more details
This logging was too intensive at debug level, it is better to have it at trace level only. Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
### Description

Add a new `OnStateUpdate` hook which gets invoked after state is committed.

### Rationale

For our particular use case, we need to obtain the state size metrics at every single block when fully syncing from genesis. With the current state sizer, whenever the node is stopped, the background process must be freshly initialized. During this re-initialization, it can skip some blocks while the node continues executing blocks, causing gaps in the recorded metrics. Using this state update hook allows us to customize our own data persistence logic, and we would never skip blocks upon node restart. --------- Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Adds BlobTxType and SetCodeTxType to GasPrice switch case, aligning with `MaxFeePerGas` and `MaxPriorityFeePerGas` handling. Co-authored-by: m6xwzzz <maskk.weller@gmail.com>
This PR is based on #33303 and introduces an approach for trienode history indexing.

---

In the current archive node design, resolving a historical trie node at a specific block involves the following steps:

- Look up the corresponding trie node index and locate the first entry whose state ID is greater than the target state ID.
- Resolve the trie node from the associated trienode history object.

A naive approach would be to store mutation records for every trie node, similar to how flat state mutations are recorded. However, the total number of trie nodes is extremely large (approximately 2.4 billion), and the vast majority of them are rarely modified. Creating an index entry for each individual trie node would be very wasteful in both storage and indexing overhead. To address this, we aggregate multiple trie nodes into chunks and index mutations at the chunk level instead.

---

For a storage trie, the trie is vertically partitioned into multiple sub-tries, each spanning three consecutive levels. The top three levels (1 + 16 + 256 nodes) form the first chunk, and every subsequent three-level segment forms another chunk.

```
Original trie structure

Level 0   [ ROOT ]                                        1 node
Level 1   [0] [1] [2] ... [f]                             16 nodes
Level 2   [00] [01] ... [0f] [10] ... [ff]                256 nodes
Level 3   [000] [001] ... [00f] [010] ... [fff]           4096 nodes
Level 4   [0000] ... [000f] [0010] ... [001f] ... [ffff]  65536 nodes

Vertical split into chunks (3 levels per chunk)

Level 0   [ ROOT ]                  1 chunk
Level 3   [000] ... [fff]           4096 chunks
Level 6   [000000] ... [ffffff]     16777216 chunks
```

Within each chunk, there are 273 nodes in total, regardless of the chunk's depth in the trie.

```
Level 0   [ 0 ]                 1 node
Level 1   [ 1 ] ... [ 16 ]      16 nodes
Level 2   [ 17 ] ... [ 272 ]    256 nodes
```

Each chunk is uniquely identified by the path prefix of the root node of its corresponding sub-trie. Within a chunk, nodes are identified by a numeric index ranging from 0 to 272.
For example, suppose that at block 100, the nodes with paths `[]`, `[0]`, `[f]`, `[00]`, and `[ff]` are modified. The mutation record for chunk 0 is then appended with the following entry: `[100 → [0, 1, 16, 17, 272]]`, where `272` is the numeric ID of path `[ff]`. Furthermore, due to the structural properties of the Merkle Patricia Trie, if a child node is modified, all of its ancestors along the same path must also be updated. As a result, in the above example, recording mutations for nodes `[00]` and `[ff]` alone is sufficient, as this implicitly indicates that their ancestor nodes `[]`, `[0]` and `[f]` were also modified at block 100.

---

Query processing is slightly more complicated. Since trie nodes are indexed at the chunk level, each individual trie node lookup requires an additional filtering step to ensure that a given mutation record actually corresponds to the target trie node. As mentioned earlier, mutation records store only the numeric identifiers of leaf nodes, while ancestor nodes are omitted for storage efficiency. Consequently, when querying an ancestor node, additional checks are required to determine whether the mutation record implicitly represents a modification to that ancestor.

Moreover, since trie nodes are indexed at the chunk level, some trie nodes may be updated frequently, causing their mutation records to dominate the index. Queries targeting rarely modified trie nodes would then scan a large amount of irrelevant index data, significantly degrading performance. To address this issue, a bitmap is introduced for each index block and stored in the chunk's metadata. Before loading a specific index block, the bitmap is checked to determine whether the block contains mutation records relevant to the target trie node. If the bitmap indicates that the block does not contain such records, the block is skipped entirely.
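As a rough sketch of the chunk-local numbering described above (the helper name and path representation are illustrative, not geth's actual code), the 0–272 ID can be derived directly from the node's nibble path relative to the chunk root:

```go
package main

import "fmt"

// chunkNodeIndex maps a trie node inside a 3-level chunk to its numeric ID
// in the range [0, 272]. rel is the node's nibble path relative to the chunk
// root:
//   level 0 (chunk root)  -> ID 0
//   level 1 (1 nibble)    -> IDs 1..16
//   level 2 (2 nibbles)   -> IDs 17..272
func chunkNodeIndex(rel []byte) (int, error) {
	switch len(rel) {
	case 0:
		return 0, nil
	case 1:
		return 1 + int(rel[0]), nil
	case 2:
		return 17 + int(rel[0])*16 + int(rel[1]), nil
	default:
		return 0, fmt.Errorf("path depth %d exceeds chunk height", len(rel))
	}
}

func main() {
	// The example from the text: within chunk 0, path [ff] maps to ID 272.
	id, _ := chunkNodeIndex([]byte{0xf, 0xf})
	fmt.Println(id) // 272
}
```

This matches the worked example: `[ff]` is nibbles (15, 15), giving 17 + 15·16 + 15 = 272.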
Fixes #33369 This omits "topics" and "addresses" from the filter when they are unspecified. It is required for interoperability with some server implementations that cannot handle `null` for these fields.
This pull request introduces a mechanism to compress trienode history by storing only the node diffs between consecutive versions.

- For full nodes, only the modified children are recorded in the history;
- For short nodes, only the modified value is stored.

If the node type has changed, or if the node is newly created or deleted, the entire node value is stored instead. To mitigate the overhead of reassembling nodes from diffs during history reads, checkpoints are introduced by periodically storing full node values. The current checkpoint interval is set to every 16 mutations, though this parameter may be made configurable in the future.
Allow the blobpool to accept blobs out of nonce order. Previously, we were dropping blobs that arrived out of order. However, since fetch decisions are made on the receiver side, out-of-order delivery can happen, leading to inefficiencies. This PR:

- adds an in-memory blob tx storage, similar to the queue in the legacypool;
- allows a limited number of received txs to be added to this per account;
- keeps txs waiting in the gapped queue from being processed or propagated further until they are unblocked by adding the previous nonce to the blobpool.

The size of the in-memory storage is currently limited per account, following a slow-start logic. An overall size limit and a TTL are also enforced for DoS protection. --------- Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com> Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
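The parking-and-promotion idea can be sketched as follows (a toy per-account model; the limit constant, names, and the omission of TTL/slow-start are simplifications, not the blobpool's actual logic): out-of-order txs are parked up to a cap, and admitting the missing nonce drains any now-contiguous parked txs.

```go
package main

import "fmt"

const maxGappedPerAccount = 16 // illustrative cap, not the real limit

type account struct {
	next   uint64            // next expected nonce in the pool
	gapped map[uint64]string // parked out-of-order txs, keyed by nonce
}

// add accepts a tx by nonce: in-order txs are admitted immediately, possibly
// unblocking parked successors; out-of-order txs are parked up to the cap.
func (a *account) add(nonce uint64, tx string) (admitted []string) {
	if nonce != a.next {
		if len(a.gapped) < maxGappedPerAccount {
			a.gapped[nonce] = tx
		}
		return nil
	}
	admitted = append(admitted, tx)
	a.next++
	// Drain parked txs that are now contiguous with the admitted one.
	for {
		parked, ok := a.gapped[a.next]
		if !ok {
			break
		}
		delete(a.gapped, a.next)
		admitted = append(admitted, parked)
		a.next++
	}
	return admitted
}

func main() {
	a := &account{gapped: map[uint64]string{}}
	fmt.Println(a.add(1, "tx1")) // parked: nonce 0 is still missing
	fmt.Println(a.add(0, "tx0")) // gap closed: both txs admitted
}
```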
…cts (#33242) - pass `rpc.BlockNumberOrHash` directly to `eth_getBlockReceipts` so `requireCanonical` and other fields survive - aligns `BlockReceipts` with other `ethclient` methods and re-enables canonical-only receipt queries
This PR fixes an issue where `evm statetest` would not verify the post-state root hash if the test case expected an exception (e.g. an invalid transaction). The fix involves:

1. Modifying `tests/state_test_util.go` in the `Run` method.
2. When an expected error occurs (`err != nil`), we now check if `post.Root` is defined.
3. If defined, we recalculate the intermediate root from the current state (which is reverted to the pre-transaction snapshot upon error).
4. We use `GetChainConfig` and `IsEIP158` to ensure the correct state clearing rules are applied when calculating the root, avoiding regressions on forks that require EIP-158 state clearing.
5. If the calculated root mismatches the expected root, the test now fails.

This ensures that state tests are strictly verified against their expected post-state, even for failure scenarios. Fixes issue #33527 --------- Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
The code was simply duplicate, so we can remove some code lines here. Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
This PR fixes an issue where the tx indexer would repeatedly try to “unindex” a block with a missing body, causing a spike in CPU usage. This change skips these blocks and advances the index tail. The fix was verified both manually on a local development chain and with a new test. resolves #33371
The coverage build path was generating go test commands with a bogus -tags flag that held the coverpkg value, so the run kept failing. I switched coverbuild to treat the optional argument as an override for -coverpkg and stopped passing coverpkg from the caller. Now the script emits a clean go test invocation that should actually succeed.
Updated the `avail` calculation to correctly compute remaining capacity: `buf.limit - len(buf.output)`, ensuring the buffer never exceeds its configured limit regardless of how many times `Write()` is called.
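A small sketch of that capacity check (the type and field names are illustrative): each `Write` may only append up to `limit - len(output)` bytes, so repeated calls can never push the buffer past its configured limit.

```go
package main

import "fmt"

// limitBuffer caps its total output at limit bytes across any number of
// Write calls.
type limitBuffer struct {
	limit  int
	output []byte
}

func (b *limitBuffer) Write(p []byte) (int, error) {
	avail := b.limit - len(b.output) // remaining capacity
	if avail <= 0 {
		return 0, fmt.Errorf("buffer full")
	}
	if len(p) > avail {
		p = p[:avail] // truncate to what still fits
	}
	b.output = append(b.output, p...)
	return len(p), nil
}

func main() {
	b := &limitBuffer{limit: 5}
	n, _ := b.Write([]byte("hello world"))
	fmt.Println(n, string(b.output)) // 5 hello
}
```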
Co-authored-by: Gary Rong <garyrong0905@gmail.com> Co-authored-by: MariusVanDerWijden <m.vanderwijden@live.de>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Validate ciphertext length in decryptPreSaleKey, preventing runtime panics on invalid input.
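The shape of that guard can be sketched as follows (the function name is illustrative, not the actual `decryptPreSaleKey` code): validate the length before slicing off the IV, so malformed input yields an error instead of an out-of-range panic.

```go
package main

import (
	"crypto/aes"
	"errors"
	"fmt"
)

// splitIV separates the IV prefix from an AES-CBC-style ciphertext, rejecting
// inputs shorter than one AES block instead of panicking on the slice.
func splitIV(cipherText []byte) (iv, payload []byte, err error) {
	if len(cipherText) < aes.BlockSize {
		return nil, nil, errors.New("ciphertext too short")
	}
	return cipherText[:aes.BlockSize], cipherText[aes.BlockSize:], nil
}

func main() {
	_, _, err := splitIV([]byte("short"))
	fmt.Println(err) // ciphertext too short
}
```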
…#33452) Add Open Telemetry tracing inside the RPC server to help attribute runtime costs within `handler.handleCall()`. In particular, it allows us to distinguish time spent decoding arguments, invoking methods via reflection, and actually executing the method and constructing/encoding JSON responses. --------- Co-authored-by: lightclient <lightclient@protonmail.com>
This PR adds support for the extraction of OpenTelemetry trace context from incoming JSON-RPC request headers, allowing geth spans to be linked to upstream traces when present. --------- Co-authored-by: lightclient <lightclient@protonmail.com>
The bitmap is used in compact-encoded trie nodes to indicate which elements have been modified. The bitmap format has been updated to use big-endian encoding. Bit positions are numbered from 0 to 15, where position 0 corresponds to the most significant bit of b[0], and position 15 corresponds to the least significant bit of b[1].
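The big-endian bit numbering can be sketched with two small helpers (names are illustrative): position 0 is the most significant bit of `b[0]`, position 15 the least significant bit of `b[1]`.

```go
package main

import "fmt"

// setBit marks position pos in a big-endian bitmap: bit 0 is the MSB of
// b[0], bit 15 is the LSB of b[1].
func setBit(b []byte, pos uint) {
	b[pos/8] |= 0x80 >> (pos % 8)
}

// hasBit reports whether position pos is set, using the same numbering.
func hasBit(b []byte, pos uint) bool {
	return b[pos/8]&(0x80>>(pos%8)) != 0
}

func main() {
	bitmap := make([]byte, 2)
	setBit(bitmap, 0)  // MSB of b[0]
	setBit(bitmap, 15) // LSB of b[1]
	fmt.Printf("%08b %08b\n", bitmap[0], bitmap[1]) // 10000000 00000001
}
```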
Remove a large amount of duplicate code from the tx_fetcher tests. --------- Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com> Co-authored-by: lightclient <lightclient@protonmail.com>
Rename the comment so it matches the helper name.
This PR relocates the witness statistics into the witness itself, making it more self-contained.
This PR enables the block validation of keeper in the womir/openvm zkvm. It also fixes some issues related to building the executables in CI: namely, it activates the build, which was actually disabled, and resolves some resulting build conflicts by fixing the tags. Co-authored-by: Leo <leo@powdrlabs.com>
…4059) `pool.signer.Sender(tx)` bypasses the sender cache used by types.Sender, which can force an extra signature recovery for every promotable tx (promotion runs frequently). Use `types.Sender(pool.signer, tx)` here to keep sender derivation cached and consistent.
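The caching idea behind `types.Sender` can be illustrated with toy stand-ins (these types are not geth's actual ones): the recovered sender is stored on the transaction itself, so repeated promotion checks skip the expensive signature recovery.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type address string

type tx struct {
	from       atomic.Value // caches the recovered sender
	recoveries int          // counts how often recovery actually ran
}

// recoverSender stands in for the costly ECDSA public-key recovery.
func recoverSender(t *tx) address {
	t.recoveries++
	return address("0xabc")
}

// cachedSender mirrors the idea behind types.Sender: consult the per-tx
// cache first, fall back to recovery, then store the result.
func cachedSender(t *tx) address {
	if v := t.from.Load(); v != nil {
		return v.(address)
	}
	addr := recoverSender(t)
	t.from.Store(addr)
	return addr
}

func main() {
	t := &tx{}
	cachedSender(t)
	cachedSender(t) // second call hits the cache
	fmt.Println(t.recoveries) // 1
}
```

Calling the signer directly, by contrast, would run the recovery on every call.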
Later on, we can consider making these limits configurable if the use case arises.
We can consider making this limit configurable if the need ever arises.
In this PR, we add support for protocol version eth/70, defined by EIP-7975. Overall changes:

- Each response is buffered in the peer's receipt buffer when the `lastBlockIncomplete` field is true.
- A continued request uses the same request ID as its original request (`RequestPartialReceipts`).
- Partial responses are verified in `validateLastBlockReceipt`.
- Even if all receipts for the partial blocks of a request are collected, those partial results are not delivered to the downloader, to avoid complexity. This assumes that partial responses and buffering occur only in exceptional cases.

--------- Co-authored-by: Gary Rong <garyrong0905@gmail.com> Co-authored-by: Felix Lange <fjl@twurst.com>
Add persistent storage for Block Access Lists (BALs) in `core/rawdb/`. This provides read/write/delete accessors for BALs in the active key-value store. --------- Co-authored-by: Jared Wasinger <j-wasinger@hotmail.com> Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This is a breaking change in the opcode (structLog) tracer. Several fields will have a slight formatting difference to conform to the newly established spec at: ethereum/execution-apis#762. The differences include: - `memory`: words will have the 0x prefix. Also last word of memory will be padded to 32-bytes. - `storage`: keys and values will have the 0x prefix. --------- Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
This PR changes the blsync checkpoint init logic so that even if initialization fails with a certain server and an error log message is printed, the server goes back to its initial state and is allowed to retry initialization after the failure delay period. The previous logic had an `ssDone` server state that put the server in a permanently unusable state once the checkpoint init failed for an apparently permanent reason. This was not the correct behavior, because different servers behave differently under overload, and the response to a permanently missing item is sometimes not clearly distinguishable from an overload response. A safer logic is to never assume anything to be permanent and always give a chance to retry. The failure delay formula is also fixed; it is now properly capped at `maxFailureDelay`. The previous formula allowed the delay to grow without bound if a retry was attempted immediately after each delay period.
Block overrides were largely ignored by the gasestimator. This PR fixes that.
This PR implements the missing functionality for archive nodes by pruning stale index data. The current mechanism is relatively simple but sufficient for now: it periodically iterates over index entries and deletes outdated data on a per-block basis. The pruning process is triggered every 90,000 new blocks (approximately every 12 days), and the iteration typically takes ~30 minutes on a mainnet node. This mechanism is only applied with `gcmode=archive` enabled and has no impact on normal full nodes.
In this PR, the Database interface in `core/state` has been extended with one more function:

```go
// Iteratee returns a state iteratee associated with the specified state root,
// through which the account iterator and storage iterator can be created.
Iteratee(root common.Hash) (Iteratee, error)
```

With this additional abstraction layer, the implementation details can be hidden behind the interface. For example, state traversal can now operate directly on the flat state for Verkle or binary trees, which do not natively support traversal. Moreover, state dumping will now prefer using the flat state iterator as the primary option, offering better efficiency. Edit: this PR also fixes a tiny issue in the state dump, marshalling the `next` field in the correct way.
Implement the snap/2 wire protocol with BAL serving --------- Co-authored-by: Gary Rong <garyrong0905@gmail.com>
…34649) This PR adds the `Bytes` field back to `GetAccessListsPacket`.
The trienode history indexing progress is also exposed via an RPC endpoint and contributes to the eth_syncing status.
This PR makes it possible to run "Amsterdam" in statetests. I'm aware
that they'll be failing and not in consensus with other clients yet,
but it's nice to be able to run the tests and see what works and what
doesn't.
Before the change:
```
$ go run ./cmd/evm statetest ./amsterdam.json
[
{
"name": "00000019-mixed-1",
"pass": false,
"fork": "Amsterdam",
"error": "unexpected error: unsupported fork \"Amsterdam\""
}
]
```
After the change:
```
$ go run ./cmd/evm statetest ./amsterdam.json
{"stateRoot": "0x25b78260b76493a783c77c513125c8b0c5d24e058b4e87130bbe06f1d8b9419e"}
[
{
"name": "00000019-mixed-1",
"pass": false,
"stateRoot": "0x25b78260b76493a783c77c513125c8b0c5d24e058b4e87130bbe06f1d8b9419e",
"fork": "Amsterdam",
"error": "post state root mismatch: got 25b78260b76493a783c77c513125c8b0c5d24e058b4e87130bbe06f1d8b9419e, want 0000000000000000000000000000000000000000000000000000000000000000"
}
]
```
…ared pipeline into triedb/internal (#34654) This PR adds `GenerateTrie(db, scheme, root)` to the `triedb` package, which rebuilds all tries from flat snapshot KV data. This is needed by snap/2 sync so it can rebuild the trie after downloading the flat state. The shared trie generation pipeline from `pathdb/verifier.go` was moved into `triedb/internal/conversion.go` so both `GenerateTrie` and `VerifyState` reuse the same code.
This PR refactors the encoding rules for `AccessListsPacket` in the wire protocol. Specifically: - The response is now encoded as a list of `rlp.RawValue` - `rlp.EmptyString` is used as a placeholder for unavailable BAL objects
PathDB keys diff layers by state root, not by block hash. That means a side-chain block can legitimately collide with an existing canonical diff layer when both blocks produce the same post-state (for example same parent, same coinbase, no txs). Today `layerTree.add` blindly inserts that second layer. If the root already exists, this overwrites `tree.layers[root]` and appends the same root to the mutation lookup again. Later account/storage lookups resolve that root to the wrong diff layer, which can corrupt reads for descendant canonical states.

At runtime, the corruption is silent: no error is logged and no invariant check fires. State reads against affected descendants simply return stale data from the wrong diff layer (for example, an account balance that reflects one fewer block reward), which can propagate into RPC responses and block validation.

This change makes duplicate-root inserts idempotent. A second layer with the same state root does not add any new retrievable state to a tree that is already keyed by root; keeping the original layer preserves the existing parent chain and avoids polluting the lookup history with duplicate roots.

The regression test imports a canonical chain of two layers followed by a fork layer at height 1 with the same state root but a different block hash. Before the fix, account and storage lookups at the head resolve the fork layer instead of the canonical one. After the fix, the duplicate insert is skipped and lookups remain correct.
Co-authored-by: Felix Lange <fjl@twurst.com>
…33884) Changes JSON serialization of FilterCriteria to exclude "address" when it is empty.
ProcessBeaconBlockRoot (EIP-4788) and processRequestsSystemCall (EIP-7002/7251) do not merge the EVM access events into the state after execution. ProcessParentBlockHash (EIP-2935) already does this correctly at line 290-291. Without this merge, the Verkle witness will be missing the storage accesses from the beacon root and request system calls, leading to incomplete witnesses and potential consensus issues when Verkle activates.
Implements the eccpow consensus engine for the Worldland Network.