This PR updates the BLS contracts from our internal implementation, an unmaintained fork of the kilic library, to the gnark-crypto library, which is actively maintained by Consensys.
It also updates the gas costs according to the EIP.
This pull request defines a genTrie for snap sync purposes.
The stackTrie is used to generate the Merkle tree nodes upon receiving a state batch. Several additional options have been added to stackTrie to handle incomplete states (states missing either before or after the covered range).
In this pull request, these options have been relocated from stackTrie to genTrie, which serves as a wrapper for stackTrie specifically for snap sync purposes.
Furthermore, the logic for managing incomplete state has been enhanced in this change. Originally, two cases were handled:
- boundary node filtering
- internal (covered by extension node) node clearing
This change adds one more:
- clearing leftover nodes on the boundaries
This feature is necessary if there are leftover trie nodes in the database; otherwise, node inconsistency may break the state healing.
time.After is equivalent to NewTimer(d).C, but it never calls Stop, even if the timer is no longer needed. This can cause memory leaks. This change converts many such occurrences to use NewTimer instead, calling Stop once the timer is no longer needed.
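A minimal sketch of the replacement pattern (the helper name here is illustrative, not part of the change):

```go
package example

import "time"

// waitWithTimeout shows the pattern: an explicit timer that is stopped once
// it is no longer needed, instead of the fire-and-forget channel returned
// by time.After.
func waitWithTimeout(done <-chan struct{}, d time.Duration) bool {
	timer := time.NewTimer(d)
	defer timer.Stop() // release the timer even if we return early

	select {
	case <-done:
		return true // finished before the deadline
	case <-timer.C:
		return false // timed out
	}
}
```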
* use generic atomic types in tx caches
* use generic atomic types in block caches
* eth/catalyst: avoid copying tx in test
---------
Co-authored-by: lmittmann <lmittmann@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This addresses an edge case (detailed in the code comment) where the computation of the intermediate trie root would force the unnecessary resolution of a hash node. With this change, when we process the changes from a block, we apply the trie updates first and the trie deletions afterwards.
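As a rough sketch of the new ordering (the interface and helper below are illustrative, not the actual geth internals):

```go
package example

// trieWriter stands in for whatever trie object receives the changes.
type trieWriter interface {
	Update(key, value []byte) error
	Delete(key []byte) error
}

// applyBlockChanges applies all updates before any deletions, so computing
// an intermediate root between the two phases cannot force resolving a hash
// node that only a later deletion would have touched.
func applyBlockChanges(tr trieWriter, updates map[string][]byte, deletions [][]byte) error {
	for key, val := range updates {
		if err := tr.Update([]byte(key), val); err != nil {
			return err
		}
	}
	for _, key := range deletions {
		if err := tr.Delete(key); err != nil {
			return err
		}
	}
	return nil
}
```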
Here we add a Go API for running tracing plugins within the main block import process.
As an advanced user of geth, you can now create a Go file in eth/tracers/live/, and within
that file register your custom tracer implementation. Then recompile geth and select your tracer
on the command line. Hooks defined in the tracer will run whenever a block is processed.
The hook system is defined in package core/tracing. It uses a struct with callbacks, instead of
requiring an interface, for several reasons:
- We plan to keep this API stable long-term. The core/tracing hook API does not depend on
deep geth internals.
- There are a lot of hooks, and tracers will only need some of them. Using a struct allows you
to implement only the hooks you want to actually use.
All existing tracers in eth/tracers/native have been rewritten to use the new hook system.
This change breaks compatibility with the vm.EVMLogger interface that we used to have.
If you are a user of vm.EVMLogger, please migrate to core/tracing, and sorry for breaking
your stuff. But we just couldn't have both the old and new tracing APIs coexist in the EVM.
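For illustration, a minimal live tracer could look roughly like the sketch below; the hook and registry names are the ones introduced around this change and may differ in detail:

```go
package live

import (
	"encoding/json"

	"github.com/ethereum/go-ethereum/core/tracing"
	"github.com/ethereum/go-ethereum/eth/tracers"
	"github.com/ethereum/go-ethereum/log"
)

func init() {
	// Register the tracer so it can be selected on the command line.
	tracers.LiveDirectory.Register("blockLogger", newBlockLogger)
}

func newBlockLogger(cfg json.RawMessage) (*tracing.Hooks, error) {
	// Only the hooks you care about are set; all other callbacks stay nil.
	return &tracing.Hooks{
		OnBlockStart: func(ev tracing.BlockEvent) {
			log.Info("block import started", "number", ev.Block.NumberU64())
		},
		OnBlockEnd: func(err error) {
			log.Info("block import finished", "err", err)
		},
	}, nil
}
```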
---------
Co-authored-by: Matthieu Vachon <matthieu.o.vachon@gmail.com>
Co-authored-by: Delweng <delweng@gmail.com>
Co-authored-by: Martin HS <martin@swende.se>
Package filepath implements utility routines for manipulating filename paths in a way compatible with the target operating system-defined file paths.
Package path implements utility routines for manipulating slash-separated paths.
The path package should only be used for paths separated by forward slashes, such as the paths in URLs.
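A quick illustration of the distinction:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path: always slash-separated, e.g. for URL paths.
	fmt.Println(path.Join("static", "css", "site.css")) // static/css/site.css

	// filepath: follows the rules of the target operating system.
	fmt.Println(filepath.Join("data", "chaindata")) // data/chaindata on Unix, data\chaindata on Windows
}
```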
* eth: drop support for forward sync triggers and head block packets
* consensus, eth: enforce always merged network
* eth: fix tx looper startup and shutdown
* cmd, core: fix some tests
* core: remove notion of future blocks
* core, eth: drop unused methods and types
The SELFDESTRUCT opcode is disabled in the Cancun fork (unless the
account is created within the same transaction, in which case there is
nothing pre-existing to delete). An account will only be deleted in the
following cases:
- The account is created within the same transaction. In this case
the original storage was empty.
- The account is empty (zero nonce, zero balance, zero code) and
is touched within the transaction. Fortunately, such accounts are
non-existent on the Ethereum mainnet.
All in all, after Cancun we are pretty sure there are no large contract
deletions, so we no longer need this mechanism for OOM protection.
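The deletion rule boils down to something like the following check (a hedged sketch; the helper is illustrative, not the actual statedb code):

```go
package example

// deletableAfterCancun mirrors the two cases above: either the account was
// created within the same transaction (EIP-6780), or it is empty and was
// touched within the transaction (EIP-161).
func deletableAfterCancun(createdInSameTx, touched, empty bool) bool {
	if createdInSameTx {
		return true // original storage was empty anyway
	}
	return touched && empty // zero nonce, zero balance, zero code
}
```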
* core/txpool: no need to run rotate if no local txs
Signed-off-by: jsvisa <delweng@gmail.com>
* Revert "core/txpool: no need to run rotate if no local txs"
This reverts commit 17fab17388.
Signed-off-by: jsvisa <delweng@gmail.com>
* use Debug if todo is empty
Signed-off-by: jsvisa <delweng@gmail.com>
---------
Signed-off-by: jsvisa <delweng@gmail.com>
* make blobpool reject blob transactions with fee below the minimum
* core/txpool: some minor nitpick polishes and unified error formats
* core/txpool: do less big.Int constructions with the min blob cap
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This PR fixes an overflow which could happen if inconsistent blockchain rules were configured. Additionally, it tries to prevent such inconsistencies from occurring by making sure that the merge cannot be enabled unless the previous fork(s) are also enabled.
* core/txpool, miner: speed up blob pool pending retrievals
* miner: fix test merge issue
* eth: same same
* core/txpool/blobpool: speed up blobtx creation in benchmark a bit
* core/txpool/blobpool: fix linter
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change makes the legacy transaction pool use `uint256.Int` instead of `big.Int`. The changes are primarily limited to the internal functions of legacypool.
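Roughly, the conversion happens once at the pool boundary (hypothetical helper, not the actual legacypool code):

```go
package legacypool

import (
	"math/big"

	"github.com/holiman/uint256"
)

// toUint256 converts an incoming big.Int (from a transaction or RPC) once,
// so all internal pool math can stay on uint256.Int.
func toUint256(v *big.Int) (*uint256.Int, bool) {
	u, overflow := uint256.FromBig(v)
	return u, !overflow // second result is false if the value needs >256 bits
}
```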
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change adds support for blob transactions in certain API endpoints, e.g. eth_fillTransaction. A follow-up PR will add support for signing such transactions.
* all: implement era format, add history importer/export
* internal/era/e2store: refactor e2store to provide ReadAt interface
* internal/era/e2store: export HeaderSize
* internal/era: refactor era to use ReadAt interface
* internal/era: elevate anonymous func to named
* cmd/utils: don't store entire era file in-memory during import / export
* internal/era: better abstraction between era and e2store
* cmd/era: properly close era files
* cmd/era: don't let defers stack
* cmd/geth: add description for import-history
* cmd/utils: better bytes buffer
* internal/era: error if accumulator has more records than max allowed
* internal/era: better doc comment
* internal/era/e2store: rm superfluous reader, rm superfluous testcases, add fuzzer
* internal/era: avoid some repetition
* internal/era: simplify clauses
* internal/era: unexport things
* internal/era,cmd/utils,cmd/era: change to iterator interface for reading era entries
* cmd/utils: better defer handling in history test
* internal/era,cmd: add number method to era iterator to get the current block number
* internal/era/e2store: avoid double allocation during write
* internal/era,cmd/utils: fix lint issues
* internal/era: add ReaderAt func so entry value can be read lazily
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* internal/era: improve iterator interface
* internal/era: fix rlp decode of header and correctly read total difficulty
* cmd/era: fix rebase errors
* cmd/era: clearer comments
* cmd,internal: fix comment typos
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/txpool/blobpool: clean up resurrected junk after a crash
* core/txpool/blobpool: track transaction insertions and rejections
* core/txpool/blobpool: linnnnnnnt
This PR fixes an issue in the new simulated backend. The root cause is the fact that the transaction pool has an internal reset operation that runs on a background thread.
When a new transaction is added to the pool via the RPC, the transaction is added to a non-executable queue and will be moved to its final location on a background thread. If the machine is overloaded (or simply due to timing issues), it can happen that the simulated backend will try to produce the next block, whilst the pool has not yet marked the newly added transaction executable. This will cause the block to not contain the transaction. This is an issue because we want determinism from the simulator: add a tx, mine a block. It should be in there.
The PR fixes it by adding a Sync function to the txpool, which waits for the current reset operation (if any) to finish, and then runs an entire round of reset on top. The new round is needed because resets are only triggered by new head events, so newly added transactions will not trigger the outer resets that we can wait on. The transaction pool would eventually do an internal reset even on transaction addition, but there's no easy way to wait on that and no meaningful reason to bubble it across everything. A clean outer reset will at worst be a small no-op goroutine.
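A hedged sketch of the Sync mechanism (channel layout and loop handling are illustrative, not the actual core/txpool code):

```go
package txpool

// TxPool is reduced here to the one channel relevant for Sync.
type TxPool struct {
	sync chan chan error // sync requests forwarded to the pool's event loop
}

// Sync blocks until any in-flight reset has finished and one extra full
// reset round has run, so every previously added transaction is observed in
// its final position (executable, queued or dropped).
func (p *TxPool) Sync() error {
	done := make(chan error)
	p.sync <- done
	return <-done
}

// Inside the pool's event loop (simplified):
//
//	case done := <-p.sync:
//		<-resetDone         // wait for the in-flight reset, if any
//		p.reset(head, head) // force one more full round
//		done <- nil
```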
This change switches from passing a separate `Hasher` to add to and query the bloom filter to implementing the hasher methods directly on the relevant types.
This significantly reduces the allocations for Search and Rebloom.
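The idea, sketched with an illustrative key type (not the actual geth types):

```go
package example

import (
	"encoding/binary"
	"hash"
)

// bloomKey is a precomputed 64-bit digest that satisfies hash.Hash64 itself,
// so it can be passed straight to the bloom filter's Add/Contains without
// allocating a separate Hasher per call.
type bloomKey uint64

func (k bloomKey) Write(p []byte) (int, error) { panic("not needed") }
func (k bloomKey) Sum(b []byte) []byte {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], uint64(k))
	return append(b, buf[:]...)
}
func (k bloomKey) Reset()         {}
func (k bloomKey) Size() int      { return 8 }
func (k bloomKey) BlockSize() int { return 8 }
func (k bloomKey) Sum64() uint64  { return uint64(k) }

var _ hash.Hash64 = bloomKey(0)
```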
This change makes use of uint256 to represent balances in state. It touches primarily statedb, stateobject and state processing, trying to avoid changes in transaction pools, core types, rpc and tracers.
This change simplifies the logic for indexing transactions and enhances the UX when a transaction is not found by returning more information to users.
Transaction indexing is now considered as a part of the initial sync, and `eth.syncing` will thus be `true` if transaction indexing is not yet finished. API consumers can use the syncing status to determine if the node is ready to serve users.
The code to compute a versioned hash was duplicated a couple of times, and also had a small
issue: if we ever change params.BlobTxHashVersion, it will most likely also cause changes
to the actual hash computation. So it's a bit useless to have this constant in params.
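For reference, the deduplicated computation is the EIP-4844 kzg_to_versioned_hash, roughly as follows (helper name illustrative):

```go
package example

import (
	"crypto/sha256"

	"github.com/ethereum/go-ethereum/common"
)

// kzgToVersionedHash hashes the KZG commitment with sha256 and overwrites
// the first byte with the version prefix 0x01.
func kzgToVersionedHash(commitment [48]byte) common.Hash {
	h := sha256.Sum256(commitment[:])
	h[0] = 0x01 // VERSIONED_HASH_VERSION_KZG
	return common.Hash(h)
}
```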
EIP-4844 adds a new transaction type for blobs. Users can submit such transactions via `eth_sendRawTransaction`. In this PR we refrain from adding support to `eth_sendTransaction` and in fact it will fail if the user passes in a blob hash.
However since the chain can handle such transactions it makes sense to allow simulating them. E.g. an L2 operator should be able to simulate submitting a rollup blob and updating the L2 state. Most methods that take in a transaction object should recognize blobs. The change boils down to adding `blobVersionedHashes` and `maxFeePerBlobGas` to `TransactionArgs`. In summary:
- `eth_sendTransaction`: will fail for blob txes
- `eth_signTransaction`: will fail for blob txes
The methods that sign txs do not, as of this PR, add support for the new EIP-4844 transaction type. Resuming the summary:
- `eth_sendRawTransaction`: can send blob txes
- `eth_fillTransaction`: will fill in a blob tx. Note: here we simply fill in normal transaction fields + possibly `maxFeePerBlobGas` when blobs are present. One can imagine a more elaborate set-up where users can submit blobs themselves and we fill in proofs and commitments and such. Left for future PRs if desired.
- `eth_call`: can simulate blob messages
- `eth_estimateGas`: blobs have no effect here. They have a separate unit of gas which is not tunable in the transaction.
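For reference, the two additions to `TransactionArgs` look roughly like this (field names from memory, other fields elided):

```go
package ethapi

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
)

// TransactionArgs (excerpt): only the blob-related fields added here are shown.
type TransactionArgs struct {
	// ... existing gas, fee, value and data fields elided ...

	BlobFeeCap *hexutil.Big  `json:"maxFeePerBlobGas"`
	BlobHashes []common.Hash `json:"blobVersionedHashes,omitempty"`
}
```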