This change allows the creation of a genesis block for verkle testnets. This makes for a chunk of code that is easier to review and still touches many discussion points.
* core/vm: set basefee to 0 internally on eth_call
* core: nicer 0-basefee, make it work for blob fees too
* internal/ethapi: make tests a bit more complex
* core: fix blob fee checker
* core: make code a bit more readable
* core: fix some test error strings
* core/vm: Get rid of weird comment
* core: dict wrong typo
This change improves GenerateChain to support internal chain history access (ChainReader)
for the consensus engine and EVM.
GenerateChain takes a `parent` block and the number of blocks to create. With my changes,
the consensus engine and EVM can now access blocks from `parent` up to the block currently
being generated. This is required to make the BLOCKHASH instruction work, and also needed
to create real clique chains. Clique uses chain history to figure out if the current signer is in-turn,
for example.
I've also added some more accessors to BlockGen. These are helpful when creating transactions:
- g.Signer returns a signer instance for the current block
- g.Difficulty returns the current block difficulty
- g.Gas returns the remaining gas amount
Another fix in this commit concerns the receipts returned by GenerateChain. The receipts now
have properly derived fields (BlockHash, etc.) and should generally match what would be
returned by the RPC API.
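A minimal sketch of how the new accessors might be used inside the generator callback; the key, addresses and fee choice here are illustrative assumptions, not part of the change:
```golang
blocks, receipts := core.GenerateChain(config, parent, engine, db, 10, func(i int, g *core.BlockGen) {
	// g.Signer() is valid for the block currently being generated, so the
	// callback does not need to track the fork schedule itself.
	tx := types.MustSignNewTx(key, g.Signer(), &types.LegacyTx{
		Nonce:    g.TxNonce(addr),
		To:       &recipient,
		Gas:      21000,
		GasPrice: g.BaseFee(), // assumed fee choice for the sketch
	})
	g.AddTx(tx)
})
// receipts now carry derived fields (BlockHash etc.), matching RPC output.
_, _ = blocks, receipts
```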
This adds warning logs when the read does not match the expected count.
We can also remove the size limit since the function documentation explicitly states
that callers should limit the count.
This PR removes panics from stacktrie (mostly), and makes the Update return errors instead. While adding tests for this, I also found that one case of possible corruption was not caught, which is now fixed.
This change enhances the stacktrie constructor by introducing an option struct. It also simplifies the `Hash` and `Commit` operations, getting rid of the special handling around the root node.
During snap-sync, we request ranges of values: either a range of accounts or a range of storage values. For any large trie, e.g. the main account trie or a large storage trie, we cannot fetch everything at once.
Short version; we split it up and request in multiple stages. To do so, we use an origin field, to say "Give me all storage key/values where key > 0x20000000000000000". When the server fulfils this, the server provides the first key after origin, let's say 0x2e030000000000000 -- never providing the exact origin. However, the client-side needs to be able to verify that the 0x2e03.. indeed is the first one after 0x2000.., and therefore the attached proof concerns the origin, not the first key.
So, short-short version: the left-hand side of the proof relates to the origin, and is free-standing from the first leaf.
On the other hand, (pun intended), the right-hand side, there's no such 'gap' between "along what path does the proof walk" and the last provided leaf. The proof must prove the last element (unless there are no elements).
Therefore, we can simplify the semantics for trie.VerifyRangeProof by removing an argument. This doesn't make much difference in practice, but makes it so that we can remove some tests. The reason I am raising this is that the upcoming stacktrie-based verifier does not support such fancy features as standalone right-hand borders.
This change addresses an issue in snap sync, specifically when the entire sync process can be halted due to an encountered empty storage range.
Currently, on the snap sync client side, the response to an empty (partial) storage range is discarded as a non-delivery. However, this response can be a valid response, when the particular range requested does not contain any slots.
For instance, consider a large contract where the entire key space is divided into 16 chunks, and there are no available slots in the last chunk [0xf] -> [end]. When the node receives a request for this particular range, the response includes:
- The proof with origin [0xf]
- A nil storage slot set
If we simply discard this response, the finalization of the last range will be skipped, halting the entire sync process indefinitely. The test case TestSyncWithUnevenStorage can reproduce the scenario described above.
In addition, this change also defines the common variables MaxAddress and MaxHash.
* cmd, core: resolve scheme from a read-write database
* cmd, core, eth: move the scheme check in the ethereum constructor
* cmd/geth: dump should use read-only mode
* cmd: reverts
This change
- Removes the owner-notion from a stacktrie; the owner is only ever needed for committing to the database, but the commit-function, the `writeFn`, is provided by the caller, so the caller can just set the owner into the `writeFn` instead of having it passed through the stacktrie.
- Removes the `encoding.BinaryMarshaler`/`encoding.BinaryUnmarshaler` interface from stacktrie. We're not using it, and it is doubtful whether anyone downstream is either.
Adding a space between function opOrigin() and opCaller() in instructions.go.
Adding a space between function opKeccak256() and opAddress() in instructions.go.
When MatcherSession encounters an error, it attempts to close the session.
Closing waits for all goroutines to finish, including the 'distributor'. However, the
distributor will not exit until all requests have returned.
This patch fixes the issue by delivering the (empty) result to the distributor
before calling Close().
* cmd/evm: improve flags handling
This fixes some issues with flags in cmd/evm. The supported flags did not
actually show up in help output because they weren't categorized. I'm also
adding the VM-related flags to the run command here so they can be given
after the subcommand name. So it can be run like this now:
./evm run --code 6001 --debug
* cmd/evm: enable all forks by default in run command
The default genesis was just empty with no forks at all, which is annoying because
contracts will be relying on opcodes introduced in a fork. So this changes the default to
have all forks enabled.
* core/asm: fix some issues in the assembler
This fixes minor bugs in the old assembler:
- It is now possible to have comments on the same line as an instruction.
- Errors for invalid numbers in the jump instruction are reported better
- Line numbers in errors were off by one
* rlp/rlpgen: remove build tag
This tag was supposed to prevent unstable output when types reference each other. Imagine
there are two struct types A and B, where a reference to type B is in A. If I run rlpgen
on type B first, and then on type A, the generator will see the B.EncodeRLP method and
call it. However, if I run rlpgen on type A first, it will inline the encoding of B.
The solution I chose for the initial release of rlpgen was to just ignore methods
generated by rlpgen using a build tag. But there is a problem with this: if any code in
the package calls EncodeRLP explicitly, the package can't be loaded without errors anymore
in rlpgen, because the loader ignores it. Would be nice if there was a way to just make it
ignore invalid functions during type checking (they're not necessary for rlpgen), but
golang.org/x/tools/go/packages does not provide a way of ignoring them.
Luckily, the types we use rlpgen with do not reference each other right now, so we can
just remove the build tags for now.
This change includes a lot of things, listed below.
### Split up interfaces, write vs read
The interfaces have been split up into one write-interface and one read-interface, with `Snapshot` being the gateway from write to read. This simplifies the semantics _a lot_.
Example of splitting up an interface into one readonly 'snapshot' part, and one updatable writeonly part:
```golang
type MeterSnapshot interface {
	Count() int64
	Rate1() float64
	Rate5() float64
	Rate15() float64
	RateMean() float64
}

// Meters count events to produce exponentially-weighted moving average rates
// at one-, five-, and fifteen-minutes and a mean rate.
type Meter interface {
	Mark(int64)
	Snapshot() MeterSnapshot
	Stop()
}
```
### A note about concurrency
This PR makes the concurrency model clearer. We have actual meters and snapshots of meters. The `meter` is the thing which can be accessed from the registry, and updates can be made to it.
- For all `meters`, (`Gauge`, `Timer` etc), it is assumed that they are accessed by different threads, making updates. Therefore, all `meters` update-methods (`Inc`, `Add`, `Update`, `Clear` etc) need to be concurrency-safe.
- All `meters` have a `Snapshot()` method. This method is _usually_ called from one thread, a backend-exporter. But it's fully possible to have several exporters simultaneously: therefore this method should also be concurrency-safe.
TLDR: `meter`s are accessible via registry, all their methods must be concurrency-safe.
For all `Snapshot`s, it is assumed that an individual exporter-thread has obtained a `meter` from the registry, and called the `Snapshot` method to obtain a readonly snapshot. This snapshot is _not_ guaranteed to be concurrency-safe. There's no need for a snapshot to be concurrency-safe, since exporters should not share snapshots.
Note, though, that by happenstance a lot of the snapshots _are_ concurrency-safe, being immutable minimal representations of a value. Only the more complex ones are _not_ threadsafe, those that lazily calculate things like `Variance()` and `Mean()`.
Example of how a background exporter typically works, obtaining the snapshot and sequentially accessing the non-threadsafe methods in it:
```golang
ms := metric.Snapshot()
...
fields := map[string]interface{}{
	"count":    ms.Count(),
	"max":      ms.Max(),
	"mean":     ms.Mean(),
	"min":      ms.Min(),
	"stddev":   ms.StdDev(),
	"variance": ms.Variance(),
}
```
TLDR: `snapshots` are not guaranteed to be concurrency-safe (but often are).
### Sample changes
I also changed the `Sample` type: previously, it iterated the samples fully every time `Mean()`,`Sum()`, `Min()` or `Max()` was invoked. Since we now have readonly base data, we can just iterate it once, in the constructor, and set all four values at once.
The same thing has been done for runtimehistogram.
### ResettingTimer API
Back when ResettingTimer was implemented, as part of https://github.com/ethereum/go-ethereum/pull/15910, Anton implemented a `Percentiles` on the new type. However, the method did not conform to the other existing types which also had a `Percentiles`.
1. The existing ones, on input, took `0.5` to mean `50%`. Anton used `50` to mean `50%`.
2. The existing ones returned `float64` outputs, thus interpolating between values. A value-set of `0, 10`, at `50%` would return `5`, whereas Anton's would return either `0` or `10`.
This PR removes the 'new' version, and uses only the 'legacy' percentiles, also for the ResettingTimer type.
The resetting timer snapshot was also defined so that it would expose the internal values. This has been removed, and getters for `Max, Min, Mean` have been added instead.
### Unexport types
A lot of types were exported, but do not need to be. This PR unexports quite a lot of them.
On ACD 163, it was agreed to bump the target and max blob values from `2/4` to `3/6` for future devnets until we can decide on the final mainnet number. This change contains said update, making master pass all the hive tests. The final decision for mainnet Cancun is still to be made.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
* core/forkid: skip genesis forks by time
* core/forkid: add comment about skipping non-zero fork times
* core/forkid: skip all time based forks in genesis using loop
* core/forkid: simplify logic for dropping time-based forks
This change creates a GaugeInfo metric type for registering informational (textual) metrics, e.g. the geth version number. It also improves the testing for backend-exporters, and uses a shared subpackage in 'internal' to provide sample datasets and an ordered registry.
Implements #21783
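A hedged usage sketch, assuming the constructor mirrors the existing NewRegisteredGauge naming and the value is a plain string-to-string map:
```golang
// Assumed API shape, not a verbatim excerpt from the change.
info := metrics.NewRegisteredGaugeInfo("geth/info", nil)
info.Update(metrics.GaugeInfoValue{
	"version": params.VersionWithMeta,
	"commit":  gitCommit, // hypothetical variable
})
```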
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change implements faster post-selfdestruct iteration of storage slots for deletion, by using snapshot-storage+stacktrie to recover the trienodes to be deleted. This mechanism is only implemented for the path-based scheme.
For the hash-based scheme, the entire post-selfdestruct storage iteration is skipped with this change, since hash-based does not actually perform deletion anyway.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR makes EIP-4788 work in the engine API and miner. It also fixes some bugs related to
EIP-4844 block processing and mining. Changes in detail:
- Header.BeaconRoot has been renamed to ParentBeaconRoot.
- The engine API now implements forkchoiceUpdatedV3
- newPayloadV3 method has been updated with the parentBeaconBlockRoot parameter
- beacon root is now applied to new blocks in miner
- For EIP-4844, block creation now updates the blobGasUsed field of the header
Just some minor optimizations I figured out a while ago. By using ReadBytes instead of
Bytes on the rlp stream, we can save the allocation of a temporary buffer for the typed tx
payload.
If kind == rlp.Byte, the size reported by Stream.Kind will be zero, but we need a buffer
of size 1 for ReadBytes. Since typed txs always have to be longer than 1 byte, we can just
return an error for kind == rlp.Byte.
There is also a small change for Log: since the first three fields of Log are the ones that
should appear in the canon encoding, we can simply ignore the remaining fields via
struct tag. Doing this removes an indirection through the rlpLog type.
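As an illustration of the struct-tag approach, a simplified stand-in for the Log type (not the real definition):
```golang
type logExample struct {
	// The first three fields form the canonical (consensus) encoding.
	Address common.Address
	Topics  []common.Hash
	Data    []byte

	// Derived fields are skipped by the encoder via the rlp:"-" tag.
	BlockNumber uint64      `rlp:"-"`
	TxHash      common.Hash `rlp:"-"`
}
```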
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change implements "EIP 4788 : Beacon block root in the EVM". It implements version-2 of EPI-4788, main difference being that the contract is an actual contract rather than a precompile, as in #27289.
Currently, we trigger the logic to (un)index transactions when the node receives a new
block. However, in some cases the node may not receive new blocks (eg, when the Geth node
is configured without peer discovery, or when it acts as an RPC node for historical-only
data).
In these situations, the Geth node user may not have previously configured txlookuplimit
(i.e. the default of around one year), but later realizes they need to index all
historical blocks. However, adding txlookuplimit=0 and restarting geth has no effect. This
change makes it check for required indexing work once, on startup, to fix the issue.
Co-authored-by: Martin Holst Swende <martin@swende.se>
This changes the forkID calculation to ignore time-based forks that occurred before the
genesis block. It's supposed to be done this way because the spec says:
> If a chain is configured to start with a non-Frontier ruleset already in its genesis, that is NOT considered a fork.
This PR removes the newly added txpool.Transaction wrapper type, and instead adds a way
of keeping the blob sidecar within types.Transaction. It's better this way because most
code in go-ethereum does not care about blob transactions, and probably never will. This
will start mattering especially on the client side of RPC, where all APIs are based on
types.Transaction. Users need to be able to use the same signing flows they already
have.
However, since blobs are only allowed in some places but not others, we will now need to
add checks to avoid creating invalid blocks. I'm still trying to figure out the best place
to do some of these. The way I have it currently is as follows:
- In block validation (import), txs are verified not to have a blob sidecar.
- In miner, we strip off the sidecar when committing the transaction into the block.
- In TxPool validation, txs must have a sidecar to be added into the blobpool.
- Note there is a special case here: when transactions are re-added because of a chain
reorg, we cannot use the transactions gathered from the old chain blocks as-is,
because they will be missing their blobs. This was previously handled by storing the
blobs into the 'blobpool limbo'. The code has now changed to store the full
transaction in the limbo instead, but it might be confusing for code readers why we're
not simply adding the types.Transaction we already have.
Code changes summary:
- txpool.Transaction removed and all uses replaced by types.Transaction again
- blobpool now stores types.Transaction instead of defining its own blobTx format for storage
- the blobpool limbo now stores types.Transaction instead of storing only the blobs
- checks to validate the presence/absence of the blob sidecar added in certain critical places
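A hedged sketch of the caller-side effect: the sidecar now travels with the transaction itself (field names as described in the change; the surrounding variables are assumptions):
```golang
sidecar := &types.BlobTxSidecar{
	Blobs:       blobs,       // []kzg4844.Blob
	Commitments: commitments, // []kzg4844.Commitment
	Proofs:      proofs,      // []kzg4844.Proof
}
tx := types.NewTx(&types.BlobTx{
	ChainID:    uint256.MustFromBig(chainID),
	Nonce:      nonce,
	Gas:        21000,
	BlobHashes: sidecar.BlobHashes(),
	Sidecar:    sidecar, // kept on the transaction, stripped when sealed into a block
})
```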
The Go authors updated golang.org/x/exp to change the function signature of the slices sort method.
It's an entire shitshow now because x/exp is not tagged, so everyone's codebase just
picked a new version that some other dep depends on, causing our code to fail building.
This PR updates the dep on our code too and does all the refactorings to follow upstream...
This change removes a chainconfig parameter passed into rawdb.ReadLogs, which is not used nor needed.
It also modifies the filter loop slightly, avoiding a labeled break and instead using a method.
This change does not modify any behaviour.
Context: The UpdateContractCode method was introduced for the state storage commitment
schemes that include the whole code for their commitment computation. It must therefore be called
before the root hash is computed at the end of IntermediateRoot.
This should have no impact on the MPT since, in this context, the method is a no-op.
This adds support for the "yParity" field in transaction objects returned by RPC
APIs. We somehow forgot to add this field even though it has been in the spec for
a long time.
This change rearranges the accessor methods in block.go and fixes some minor issues with
the copy-on-write logic of block data. Fixed issues:
- Block.WithWithdrawals did not create a shallow copy of the block.
- Block.WithBody copied the header unnecessarily, and did not preserve withdrawals.
However, the bugs did not affect any code in go-ethereum because blocks are *always*
created using NewBlockWithHeader().WithBody().WithWithdrawals()
* all: implement path-based state scheme
* all: edits from review
* core/rawdb, trie/triedb/pathdb: review changes
* core, light, trie, eth, tests: reimplement pbss history
* core, trie/triedb/pathdb: track block number in state history
* trie/triedb/pathdb: add history documentation
* core, trie/triedb/pathdb: address comments from Peter's review
Important changes to list:
- Cache trie nodes by path in clean cache
- Remove root->id mappings when history is truncated
* trie/triedb/pathdb: fallback to disk if unexpect node in clean cache
* core/rawdb: fix tests
* trie/triedb/pathdb: rename metrics, change clean cache key
* trie/triedb: manage the clean cache inside of disk layer
* trie/triedb/pathdb: move journal function
* trie/triedb/path: fix tests
* trie/triedb/pathdb: fix journal
* trie/triedb/pathdb: fix history
* trie/triedb/pathdb: try to fix tests on windows
* core, trie: address comments
* trie/triedb/pathdb: fix test issues
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core: check excessBlobGas in front
Signed-off-by: jsvisa <delweng@gmail.com>
* core: no need to manual panic
Signed-off-by: jsvisa <delweng@gmail.com>
* core: no comment
Signed-off-by: jsvisa <delweng@gmail.com>
---------
Signed-off-by: jsvisa <delweng@gmail.com>
* core/types: add data gas fields in Receipt
* core/types: use BlobGas method of tx
* core: fix test
* core/types: fix receipt tests, add data gas used field test
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* core/blobpool: implement txpool for blob txs
* core/txpool: track address reservations to notice any weird bugs
* core/txpool/blobpool: add support for in-memory operation for tests
* core/txpool/blobpool: fix heap updating after SetGasTip if account is evicted
* core/txpool/blobpool: fix eviction order if cheap leading txs are included
* core/txpool/blobpool: add note as to why the eviction fields are not inited in reinject
* go.mod: pull in inmem billy from upstream
* core/txpool/blobpool: fix review commens
* core/txpool/blobpool: make heap and heap test deterministic
* core/txpool/blobpool: luv u linter
* core/txpool: limit blob transactions to 16 per account
* core/txpool/blobpool: fix rebase errors
* core/txpool/blobpool: luv you linter
* go.mod: revert some strange crypto package dep updates
EIP-6780: SELFDESTRUCT only in same transaction
> SELFDESTRUCT will recover all funds to the caller but not delete the account, except when called in the same transaction as creation
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This updates the reference tests to the latest version and also adds logic
to process EIP-4844 blob transactions into the state transition. We are now
passing most Cancun fork tests.
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Felix Lange <fjl@twurst.com>
This change adds the ability to perform reads from freezer without size limitation. This can be useful in cases where callers are certain that out-of-memory will not happen (e.g. reading only a few elements).
The previous API was designed to behave both optimally and secure while servicing a request from a peer, whereas this change should _not_ be used when an untrusted peer can influence the query size.
This change makes the StateDB track the state key value diff of a block transition.
We already tracked current account and storage values for the purpose of updating
the state snapshot. With this PR, we now also track the original (pre-transition) values
of accounts and storage slots.
Implements [EIP 5656](https://eips.ethereum.org/EIPS/eip-5656), MCOPY instruction, and enables it for Cancun.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Back before #27178 , we spun up a number of ethash verifiers to verify headers. So we also had tests to ensure that we were indeed able to abort verification even if we had multiple workers running.
With PR #27178, we removed the parallelism in verification, and these tests are now failing, since we now just sequentially fire away the results as fast as possible on one routine.
This change removes the (sometimes failing) tests.
This change adds back the 'geth --dev' mode of operation, using a cl-mocker.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: lightclient <14004106+lightclient@users.noreply.github.com>
The clean trie cache is persisted periodically, so Geth can quickly warm up the
cache on the next restart.
However, this reduces the robustness of the system. Geth assumes that if a
parent trie node is present, then the entire sub-trie associated with that
parent is also present.
Imagine a scenario where Geth rewinds itself to a past block and restarts, but
then finds the root node of a "future state" in the clean cache and regards
that state as present on disk, while in fact it is not.
Another example is the offline pruning tool: whenever offline pruning is
performed, the clean cache file has to be removed to avoid hitting
the root node of "deleted states" in the clean cache.
All in all, compared with the minor performance gain, system robustness
is something we care about more.
* core/state, light, les: make signature of ContractCode hash-independent
* push current state for feedback
* les: fix unit test
* core, les, light: fix les unittests
* core/state, trie, les, light: fix state iterator
* core, les: address comments
* les: fix lint
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Verkle trees store the code inside the trie. This PR changes the interface to pass the code, as well as the dirty flag to tell the trie package if the code is dirty and needs to be updated. This is a no-op for the MPT and the odr trie.
The state availability is checked during the creation of a state reader.
- In a hash-based database, if the specified root node does not exist on disk, then
the state reader won't be created and an error will be returned.
- In path-based database, if the specified state layer is not available, then the
state reader won't be created and an error will be returned.
This change also introduces stricter semantics for the `Commit` operation: once it has been performed, the trie is no longer usable, and certain operations will return an error.
This removes the feature where top nodes of the proof can be elided.
It was intended to be used by the LES server, to save bandwidth
when the client had already fetched parts of the state and only needed
some extra nodes to complete the proof. Alas, it never got implemented
in the client.
* go.mod: update kzg libraries to use big-endian
* go.sum: ran go mod tidy
* core/testdata/precompiles: fix blob verification test
* core/testdata/precompiles: fix blob verification test
This change ensures Reheap will be called even before the London fork activates.
Since Reheap would otherwise only be called through `SetBaseFee` after London,
the list would just keep growing if the fork was not enabled or not reached yet.
* all: move main transaction pool into a subpool
* go.mod: remove superfluous updates
* core/txpool: review fixes, handle txs rejected by all subpools
* core/txpool: typos
The logs in this function are pulled straight from disk in rawdb.ReadRawReceipts and
also modified in receipts.DeriveFields, so removing the copy should be fine.
* core/txpool: abstraction prep work for secondary pools (blob pool)
* core/txpool: leave subpool concepts to a followup pr
* les: fix tests using hard coded errors
* core/txpool: use bitmaps instead of maps for tx type filtering
This changes the journal logic to mark the state object dirty immediately when it
is reset.
We're mostly adding this change to appease the fuzzer. Marking it dirty immediately
makes no difference in practice because accounts will always be modified by EVM
right after creation.
Continuing with a series of PRs to make the Trie interface more generic, this PR moves
the RLP encoding of storage slots inside the StateTrie and light.Trie implementations,
as other types of tries don't use RLP.
This PR adds a staleness-check to AccountRLP, before checking the bloom-filter and potentially going directly into the disklayer.
---------
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Regenerate receipt json code to remove omit empty. Previously, there was a discrepancy between the generated code and the source.
---------
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* cmd/utils, node: switch to Pebble as the default db if none exists
* node: fall back to LevelDB on platforms not supporting Pebble
* core/rawdb, node: default to Pebble at the node level
* cmd/geth: fix some tests explicitly using leveldb
* ethdb/pebble: allow double closes, makes tests simpler
In this PR, all TryXXX (e.g. TryGet) APIs of trie are renamed to XXX (e.g. Get), with an error returned.
The original XXX (e.g. Get) APIs are renamed to MustXXX (e.g. MustGet) and do not return any error -- they print a log output instead. A future PR will change the behaviour to panic on errors.
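A short sketch of the resulting call patterns (the trie instance is assumed to be opened elsewhere):
```golang
// New-style API: the database error is returned to the caller.
val, err := tr.Get(key)
if err != nil {
	return err // e.g. a missing trie node
}

// Old-style behaviour lives on under the Must prefix: errors are only logged.
val = tr.MustGet(key)
```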
The EIP150Hash was an idea where, after the fork, we hardcoded the forked hash as an extra defensive mechanism. It wasn't really used, since forks weren't contentious and for all the various testnets and private networks it's been a hassle to have around.
This change removes that config field.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
This PR unifies the error handling in the miner.
Whenever an error occurs while applying a transaction, the transaction should be regarded as invalid and all following transactions from the same sender as not executable because of the nonce restriction. The only exception is the `nonceTooLow` error, which is handled separately.
Prior to this change, it was possible that transactions were erroneously deemed 'future' although they were in fact 'pending', causing them to be dropped due to 'future' not being allowed to replace 'pending'.
This change fixes that, by doing a more in-depth inspection of the queue.
This PR removes the Debug field from vmconfig, making it so that if a tracer is set, debug=true is implied.
---------
Co-authored-by: 0xTylerHolmes <tyler@ethereum.org>
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
Currently, most transaction validation happens while holding the txpool mutex; one exception is an early-on signature check.
This PR changes that, so that all non-stateful checks are done before entering the mutex-protected area. This means they can be performed in parallel, and to enable that, certain fields have been made atomic bools and uint64s.
This change renames StateTrie methods to remove the Try* prefix.
We added the Trie methods with prefix 'Try' a long time ago, working
around the problem that most existing methods of Trie did not return the
database error. This weird naming convention has persisted until now.
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This changes the Trie interface to add the plain account address as a
parameter to all storage-related methods.
After the introduction of the TryAccount* functions, TryGet, TryUpdate and
TryDelete are now only meant to read an account's storage. In their current
form, they assume that an account storage is stored in a separate trie, and
that the hashing of the slot is independent of its account's address.
The proposed structure for a stateless storage breaks these two
assumptions: the hashing of a slot key requires the address and all slots
and accounts are stored in a single trie.
This PR therefore adds an address parameter to the interface. It is ignored
in the MPT version, so this change has no functional impact, however it
will reduce the diff size when merging verkle trees.
The meter for "for measuring the effective amount of data read" within the freezertable was never updated. This change remedies that.
---------
Signed-off-by: jsvisa <delweng@gmail.com>
When interacting with geth as a library, e.g. to produce state tests, it is desirable to obtain the consensus-correct jumptable definition for a given fork. This change adds accessors so the instruction set can be obtained and characteristics of opcodes can be inspected.
Makes clear the distinction between Finalize and FinalizeAndAssemble:
- In the Finalize function, a series of state operations are applied according to consensus rules. The statedb is mutated and the root hash can be checked and compared afterwards.
This function should be used in block processing (receiving a block from the network and applying it locally), but not in block generation.
- In the FinalizeAndAssemble function, after applying state mutations, the block is also assembled with the latest
state root computed, updating the header.
This function should be used in block generation only.
This adds two new rules to the transaction pool:
- A future transaction can not evict a pending transaction.
- A transaction can not overspend available funds of a sender.
---
Co-authored-by: dwn1998 <42262393+dwn1998@users.noreply.github.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Here, the core.Message interface turns into a plain struct and
types.Message gets removed.
This is a breaking change to packages core and core/types. While we do
not promise API stability for package core, we do for core/types. An
exception can be made for types.Message, since it doesn't have any
purpose apart from invoking the state transition in package core.
types.Message was also marked deprecated by the same commit it
got added in, 4dca5d4db7 (November 2016).
The core.Message interface was added in December 2014, in commit
db494170dc, for the purpose of 'testing' state transitions. It's the
same change that made transaction struct fields private. Before that,
the state transition used *types.Transaction directly.
Over time, multiple implementations of the interface accrued across
different packages, since constructing a Message is required whenever
one wants to invoke the state transition. These implementations all
looked very similar, a struct with private fields exposing the fields
as accessor methods.
By changing Message into a struct with public fields we can remove all
these useless interface implementations. It will also hopefully
simplify future changes to the type, with fewer updates to apply across
all of go-ethereum when a field is added to Message.
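A rough sketch of what the struct-based flow might look like after this change; the field subset and surrounding variables are illustrative assumptions:
```golang
msg := &core.Message{
	From:      sender,
	To:        &recipient,
	Nonce:     nonce,
	Value:     big.NewInt(0),
	GasLimit:  21000,
	GasPrice:  gasPrice,
	GasFeeCap: gasPrice,
	GasTipCap: gasPrice,
}
// No interface implementation needed: the struct is passed straight in.
result, err := core.ApplyMessage(evm, msg, new(core.GasPool).AddGas(msg.GasLimit))
```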
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This changes the test to match the comment description. Using timestampedConfig in this test case is incorrect: the comment says 'local is at Gray Glacier' and isn't aware of more forks.
This change prints out more information about the problem in the case where geth detects a gap between leveldb and ancients, so we can determine more exactly where the gap is (what the first missing element is). It also prints out more metadata.
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change fixes a flaw where, in certain scenarios, the block sealer did not accurately reset the remaining gas after failing to include an invalid transaction. Fixes #26791
Fixes a race in TestNewPayloadOnInvalidTerminalBlock where setting the TTD raced with
the miner. Solution: set the TTD on the blockchain config not the genesis config.
Also fixes a race in CopyHeader which resulted in race reports all over the place.
This change adds a struct field EffectiveGasPrice in types.Receipt. The field is present
in RPC responses, but not in the Go struct, and thus can't easily be accessed via ethclient.
Co-authored-by: PulsarAI <dev@pulsar-systems.fi>
This fixes an issue where the withdrawal index was not calculated correctly
for multiple withdrawals in a single block.
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
EmptyRootHash and EmptyCodeHash are defined all over the codebase; this PR replaces all of them with a unified definition in the core/types package, and also defines constants for TxRoot, WithdrawalsRoot and UncleRoot.
This PR contains a small portion of the full pbss PR, namely
Remove the tracer from trie (and committer), and instead use an accessList.
Related changes to the Nodeset.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This PR is a (superior) alternative to https://github.com/ethereum/go-ethereum/pull/26708, it handles deprecation, primarily two specific cases.
`rand.Seed` is typically used in two ways
- `rand.Seed(time.Now().UnixNano())` -- we seed it just to be sure to get some randomness, and not always get the same thing on every run. This is not needed with global seeding, so those are just removed.
- `rand.Seed(1)` this is typically done to ensure we have a stable test. If we rely on this, we need to fix up the tests to use a deterministic prng-source. A few occurrences like this have been replaced with a proper custom source (see the sketch below).
`rand.Read` has been replaced by `crypto/rand`.`Read` in this PR.
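For the second pattern, a minimal sketch of the replacement using only the standard library:
```golang
// Instead of seeding the global source with rand.Seed(1), tests build their
// own deterministic generator, keeping results reproducible across runs.
prng := rand.New(rand.NewSource(1))
buf := make([]byte, 32)
prng.Read(buf)
```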
This PR relaxes the block body ingress handling a bit: if block body withdrawals are missing (but expected to be empty), the body withdrawals are set to 'empty list' before being passed to upper layers.
This fixes an issue where a block passed from EthereumJS to geth was deemed invalid.
Logs stored on disk have minimal information. Contextual information such as block
number, index of log in block, index of transaction in block are filled in upon request.
We can fill in all these fields with only the block header and the list of receipts.
But determining the transaction hash of a log requires the block body.
The goal of this PR is to postpone this retrieval until we are sure we need the transaction hash.
It happens often that the header bloom filter signals there might be matches in a block,
but after actually checking them reveals the logs do not match. We want to avoid fetching
the body in this case.
Note that this changes the semantics of Backend.GetLogs. Downstream callers of
GetLogs now assume log context fields have not been derived, and need to call
DeriveFields on the logs if necessary.
This is a breaking change in the tracing hooks API as well as semantics of the callTracer:
- CaptureEnter hook provided a nil value argument in case of DELEGATECALL. However, to stay consistent with how delegate calls behave in the EVM, this hook is changed to pass in the value of the parent call.
- callTracer will return parent call's value for DELEGATECALL frames.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
* common, core, eth, les, trie: make prque generic
* les/vflux/server: fixed issues in priorityPool
* common, core, eth, les, trie: make priority also generic in prque
* les/flowcontrol: add test case for priority accumulator overflow
* les/flowcontrol: avoid priority value overflow
* common/prque: use int priority in some tests
No need to convert to int64 when we can just change the type used by the
queue.
* common/prque: remove comment about int64 range
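A hedged sketch of the generic prque usage, assuming the priority is the first type parameter as in the in-tree callers:
```golang
// Values and priorities are fully typed; no interface{} conversions needed.
q := prque.New[int64, common.Hash](nil)
q.Push(common.HexToHash("0x01"), 42)
q.Push(common.HexToHash("0x02"), 7)
value, priority := q.Pop() // highest priority first
_, _ = value, priority
```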
---------
Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This change ports some changes from the main PBSS PR:
- get rid of callback function in `trie.Database.Commit` which is not required anymore
- rework the `nodeResolver` in `trie.Iterator` to make it compatible with multiple state schemes
- some other shallow changes in tests and typo-fixes
This PR moves core/beacon to beacon/engine so that beacon-chain related code has its own top level package which can also house the beacon lightclient code.
This PR moves some trie-related db accessor methods to a different file, and also removes the schema type. Instead of the schema type, a string is used to distinguish between hashbased/pathbased db accessors.
This also moves some code from trie package to rawdb package.
This PR is intended to be a no-functionality-change prep PR for #25963 .
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This change improves reusability of the EVM struct. Two methods are added:
- SetBlockContext(...)
- SetTracer(...)
Other attributes like the TransactionContext and the StateDB can already be updated.
BlockContext and Tracer are partially not updateable right now. This change fixes it and
opens the potential to reuse an EVM struct in more ways.
Co-authored-by: Felix Lange <fjl@twurst.com>
This change implements withdrawals as specified in EIP-4895.
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: marioevz <marioevz@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR changes the API so that uint64 is used for fork timestamps.
It's a good choice because types.Header also uses uint64 for time.
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR does a few things.
It fixes a shutdown-order flaw in the chainfreezer. Previously, the chain-freezer would shut down the freezer backend first, and then signal for the loop to exit. This can lead to a scenario where the freezer tries to fsync closed files, which is an error condition that could lead to exit via log.Crit.
It also makes the printout more detailed when truncating 'dangling' items, by showing the exact number instead of approximate MB.
This PR also adds calls to fsync files before closing them, and also makes the `db inspect` command slightly more robust.
This PR fixes an issue which might result in data loss in the freezer.
Whenever a mutation happens in the freezer, all data is written into the head data file,
which is rotated with a new one in case the size of the file reaches the threshold.
Theoretically, the rotated old data file should be fsync'd to prevent data loss.
In the freezer.Sync function, we only fsync: (1) the index file, (2) the meta file and (3) the head
data file. So this PR forcibly fsyncs the head data file if a mutation happens at the
boundary of a data file.
Implementation of https://eips.ethereum.org/EIPS/eip-3860, limit and meter initcode. This PR enables EIP-3860 as part of the Shanghai fork.
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This makes non-JS tracers execute all block txs on a single goroutine.
In the previous implementation, we used to prepare every tx pre-state
on one goroutine, and then run the transactions again with tracing enabled.
Native tracers are usually faster, so it is faster overall to use their output as
the pre-state for tracing the next transaction.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR removes the notion of fakeStorage from the state objects, and instead, for any state modifications that are needed, it simply makes the changes.
This change moves the tracking of "deleted in this block" out from the snap-only domain, so that it happens regardless of whether the execution is snapshot-backed or trie-backed.
This changes the StorageTrie method to return an error when the trie
is not available. It used to return an 'empty trie' in this case, but that's
not possible anymore under PBSS.
This PR builds on #26299, but also updates the tests to the most recent version, which includes tests regarding TheMerge.
This change adds checks to the beacon consensus engine, making it more strict in validating the pre- and post-headers, and not relying on the caller to have already correctly sanitized the headers/blocks.
This PR implements resettable freezer by adding a ResettableFreezer wrapper.
The resettable freezer wraps the original freezer in a way that makes it possible to ensure atomic resets. Implementation wise, it relies on the os.Rename and os.RemoveAll to atomically delete the original freezer data and re-create a new one from scratch.
A comment suggests that contract creation happens if the recipient of a call is 0x00..00 ("zero address") but in fact the sender must be nil. The zero address is a regular valid address that is commonly used as a "burn" address.
While investigating another issue, I found that all callers of collectLogs have the
complete block available. rawdb.ReadReceipts loads the block from the database,
so it is better to use ReadRawReceipts here, and derive the receipt information using
the block which is already in memory.
This PR makes it possible to modify the flush interval time via RPC. On one extreme, `0s`, it would act as an archive node. If set to `1h`, it means that after one hour of effective block processing time, the trie would be flushed. If one block takes 200ms, this means that a flush would occur every `5*3600=18000` blocks -- however, if the memory size of the cached states grows too large, it will flush sooner.
Essentially, this makes it possible to configure the node to be more or less "archive:ish", and without restarting the node while reconfiguring it.
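Assuming the setting is exposed as a debug RPC method named setTrieFlushInterval (an assumption on my part), reconfiguring a running node could look like this:
```golang
client, err := rpc.Dial("http://localhost:8545")
if err != nil {
	log.Fatal(err)
}
defer client.Close()
// "0s" approximates archive behaviour; "1h" flushes after an hour of
// accumulated block-processing time.
if err := client.Call(nil, "debug_setTrieFlushInterval", "1h"); err != nil {
	log.Fatal(err)
}
```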
The gcproc field tracks the amount of time spent processing blocks,
and is used to trigger a state flush to disk when a certain threshold is
reached. After the merge, single block insertion by CL is the most
common source of block processing time, but this time was not added
into gcproc.
This removes the 'time' field from logs, as well as from the tracer interface. This change makes the trace output deterministic. If a tracer needs the time they can measure it themselves. No need for evm to do this.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This changes the Pop method to assign the zero value before
reducing slice size. Doing so ensures the backing array does not
reference removed item values.
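The pattern in isolation, as a generic sketch rather than the exact in-tree code:
```golang
// pop removes the last element and zeroes the vacated slot, so the backing
// array no longer references the removed value and the GC can reclaim it.
func pop[T any](s *[]T) T {
	old := *s
	n := len(old)
	item := old[n-1]
	var zero T
	old[n-1] = zero
	*s = old[:n-1]
	return item
}
```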
This PR drops the legacy receipt types, the freezer-migrate command and the startup check. The previous attempt #22852 at this failed because there were users who still had legacy receipts in their db, so it had to be reverted #23247. Since then we added a command to migrate legacy dbs #24028.
As of the last hardforks all users either must have done the migration, or used the --ignore-legacy-receipts flag which will stop working now.
This PR introduces a node scheme abstraction. The interface is only implemented by `hashScheme` at the moment, but will be extended by `pathScheme` very soon.
Apart from that, a few changes are also included which is worth mentioning:
- port the changes in the stacktrie, tracking the path prefix of nodes during commit
- use ethdb.Database for constructing trie.Database. This is not necessary right now, but it is required for the path-based scheme to open the reverse diff freezer
While investigating #22374, I noticed that the Sync operation of the
freezer does not take the table lock. It also doesn't call sync for all files
if there is an error with one of them. I doubt this will fix anything, but
didn't want to drop the fix on the floor either.
It seems there is no fully typed library implementation of an LRU cache.
So I wrote one. Method names are the same as github.com/hashicorp/golang-lru,
and the new type can be used as a drop-in replacement.
Two reasons to do this:
- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory
allocations in Add when the cache is at capacity. Overall, memory usage of the cache
is much reduced because keys and values are no longer wrapped in interfaces.
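A hedged usage sketch, assuming the constructor takes a capacity and the methods follow the hashicorp naming mentioned above:
```golang
// Fully typed: no interface{} boxing on Add/Get.
cache := lru.NewCache[common.Hash, *types.Header](512)
cache.Add(header.Hash(), header)
if cached, ok := cache.Get(header.Hash()); ok {
	fmt.Println("cached header number:", cached.Number)
}
```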
When the interpreter is configured to use extra-eips, this change makes it so that all the opcodes are deep-copied, to prevent accidental modification of the 'base' jumptable.
Closes: #26136
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR changes geth to read the eip1559 params from the chain config instead of the globals.
This way the parameters may be changed by forking the chain config code, without creating a large diff throughout the past and future usages of the parameters.
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR ports a few changes from PBSS:
- Fix the snapshot generator waiter in case the generation is not even initialized
- Refactor db inspector for ancient store
This PR fixes a regression causing snapshots not to be generated in "geth --import" mode. It also fixes the geth export command to be truly readonly, and adds a new test for geth export.
This adds a
* core/vm, tests: optimized modexp + fuzzer
* common/math: modexp optimizations
* core/vm: special case base 1 in big modexp
* core/vm: disable fastexp
This changes the error message for mismatching chain ID to show
the given and expected value. Callers expecting this error must be
changed to use errors.Is.
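A minimal caller-side sketch; the sentinel error name is my assumption of the one carrying the new message:
```golang
_, err := types.Sender(signer, tx)
if errors.Is(err, types.ErrInvalidChainId) {
	// The message now includes both the given and the expected chain ID,
	// but matching must be done with errors.Is, not string comparison.
}
```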
* ethclient/gethclient: improve time-sensitive flaky test
* eth/catalyst: fix (?) flaky test
* core: stop blockchains in tests after use
* core: fix dangling blockchain instances
* core: rm whitespace
* eth/gasprice, eth/tracers, consensus/clique: stop dangling blockchains in tests
* all: address review concerns
* core: goimports
* eth/catalyst: fix another time-sensitive test
* consensus/clique: add snapshot test run function
* core: rename stop() to stopWithoutSaving()
Co-authored-by: Felix Lange <fjl@twurst.com>