Fixes #30254
The removed CreateAccount call is very old and seems to be no longer needed.
After removing it, setting a sender that does not exist in the state does not
appear to cause any issue.
Currently, we have three flags to configure the blob pool. However, we don't
read these flags and set the blob pool configuration in the eth config
accordingly. This commit adds a function that checks whether these flags are
provided and sets the blob pool configuration based on them.
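A minimal sketch of the kind of check this adds, assuming urfave/cli flags named `--blobpool.datadir`, `--blobpool.datacap` and `--blobpool.pricebump` and matching fields on the blob pool config (the names here are illustrative, not the authoritative definitions):

```go
package utils

import (
	"github.com/ethereum/go-ethereum/eth/ethconfig"
	"github.com/urfave/cli/v2"
)

// setBlobPool copies the blob pool flags into the eth config, but only when
// the user actually provided them, so the built-in defaults stay untouched.
// Flag and field names are illustrative; see cmd/utils/flags.go for the real ones.
func setBlobPool(ctx *cli.Context, cfg *ethconfig.Config) {
	if ctx.IsSet("blobpool.datadir") {
		cfg.BlobPool.Datadir = ctx.String("blobpool.datadir")
	}
	if ctx.IsSet("blobpool.datacap") {
		cfg.BlobPool.Datacap = ctx.Uint64("blobpool.datacap")
	}
	if ctx.IsSet("blobpool.pricebump") {
		cfg.BlobPool.PriceBump = ctx.Uint64("blobpool.pricebump")
	}
}
```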
* all: add stateless verifications
* all: simplify witness and integrate it into live geth
---------
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* cmd/geth, ethdb/pebble: polish method naming and code comment
* implement db stat for pebble
* cmd, core, ethdb, internal, trie: remove db property selector
* cmd, core, ethdb: fix function description
---------
Co-authored-by: prpeh <prpeh@proton.me>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
enode.Node was recently changed to store a cache of endpoint information. The IP address in the cache is a netip.Addr. I chose that type over net.IP because it is simply a better fit: netip.Addr is meant to be used as a value type. Copying it does not allocate, it can be compared with ==, and it can be used as a map key.
This PR changes most uses of Node.IP() into Node.IPAddr(), which returns the cached value directly without allocating.
While there are still some public APIs left where net.IP is used, I have converted all code used internally by p2p/discover to the new types. So this does change some public Go API, but hopefully not APIs any external code actually uses.
There weren't supposed to be any semantic differences resulting from this refactoring; however, it does introduce one: in package p2p/netutil we treated the 0.0.0.0/8 network (addresses 0.x.y.z) as LAN, but netip.Addr.IsPrivate() doesn't. The treatment of this particular IP address range is controversial, with some software supporting it and others not. IANA has listed it as special-purpose and invalid as a destination for a long time, so I don't know why I put it into the LAN list. It has now been marked as special in p2p/netutil as well.
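For illustration, a small standalone snippet (standard library only) showing the value-type behaviour mentioned above and the IsPrivate() treatment of a 0.x.y.z address:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	a := netip.MustParseAddr("10.0.0.1")
	b := netip.MustParseAddr("10.0.0.1")

	// netip.Addr is a comparable value type: copying does not allocate and == works.
	fmt.Println(a == b) // true

	// It can also be used directly as a map key.
	seen := map[netip.Addr]bool{a: true}
	fmt.Println(seen[b]) // true

	// 0.x.y.z is not treated as private by the standard library, which is the
	// semantic difference described above.
	fmt.Println(netip.MustParseAddr("0.1.2.3").IsPrivate())  // false
	fmt.Println(netip.MustParseAddr("10.0.0.1").IsPrivate()) // true
}
```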
Here we clean up internal uses of type discover.node, converting most code to use
enode.Node instead. The discover.node type used to be the canonical representation of
network hosts before ENR was introduced. Most code worked with *node to avoid conversions
when interacting with Table methods. Since *node also contains internal state of Table and
is a mutable type, using *node outside of Table code is prone to data races. It's also
cleaner not having to wrap/unwrap *enode.Node all the time.
discover.node has been renamed to tableNode to clarify its purpose.
While here, we also change most uses of net.UDPAddr into netip.AddrPort. While this is
technically a separate refactoring from the *node -> *enode.Node change, it is more
convenient because *enode.Node handles IP addresses as netip.Addr. The switch to package
netip in discovery would've happened very soon anyway.
The change to netip.AddrPort stops at certain interface points. For example, since package
p2p/netutil has not been converted to use netip.Addr yet, we still have to convert to
net.IP/net.UDPAddr in a few places.
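Where such a boundary is crossed, the conversion is mechanical. Roughly, using only the standard library:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	// netip values -> legacy net types, e.g. when calling into p2p/netutil.
	ap := netip.MustParseAddrPort("192.0.2.1:30303")
	udpAddr := net.UDPAddrFromAddrPort(ap) // *net.UDPAddr
	ip := net.IP(ap.Addr().AsSlice())      // net.IP

	// And back again when results cross the boundary in the other direction.
	addr, ok := netip.AddrFromSlice(ip)
	fmt.Println(udpAddr, ip, addr, ok)
}
```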
Node discovery periodically revalidates the nodes in its table by sending PING, checking
if they are still alive. I recently noticed some issues with the implementation of this
process, which can cause strange results such as nodes dropping unexpectedly, certain
nodes not getting revalidated often enough, and bad results being returned to incoming
FINDNODE queries.
In this change, the revalidation process is improved with the following logic:
- We maintain two 'revalidation lists' containing the table nodes, named 'fast' and 'slow'.
- The process chooses a random node from each list on a randomized interval, the interval being
  shorter for the 'fast' list, and performs revalidation for the chosen node.
- Whenever a node is newly inserted into the table, it goes into the 'fast' list.
Once validation passes, it transfers to the 'slow' list. If a request fails, or the
node changes endpoint, it transfers back into 'fast'.
- livenessChecks is incremented by one for each successful check. Unlike the old implementation,
  we will not drop the node on the first failing check; instead, we quickly decay its
  livenessChecks to give it another chance.
- The order of nodes in a bucket doesn't matter anymore.
I am also adding a debug API endpoint to dump the node table content.
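A compressed, purely illustrative sketch of the two-list policy described above (this is not the actual p2p/discover code; names, intervals and the decay factor are made up):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// entry stands in for a table node; only the liveness counter matters here.
type entry struct{ livenessChecks int }

// list is one of the two revalidation lists. The 'fast' list (new or recently
// failed nodes) uses a much shorter base interval than the 'slow' one.
type list struct {
	nodes    []*entry
	interval time.Duration
}

// nextCheck returns a randomized delay around the list's base interval.
func (l *list) nextCheck() time.Duration {
	return l.interval/2 + time.Duration(rand.Int63n(int64(l.interval)))
}

// revalidate picks a random node from the list and pings it. A success
// increments livenessChecks; a failure decays the counter quickly rather than
// dropping the node on its first failed check. In the real table, a failing or
// endpoint-changing node also moves back into the 'fast' list.
func (l *list) revalidate(ping func(*entry) bool) {
	if len(l.nodes) == 0 {
		return
	}
	n := l.nodes[rand.Intn(len(l.nodes))]
	if ping(n) {
		n.livenessChecks++
	} else {
		n.livenessChecks /= 3
	}
}

func main() {
	fast := &list{nodes: []*entry{{}}, interval: 10 * time.Second}
	fast.revalidate(func(*entry) bool { return true })
	fmt.Println(fast.nodes[0].livenessChecks, fast.nextCheck())
}
```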
Co-authored-by: Martin HS <martin@swende.se>
* all: refactor so NewBlock(..) and WithBody(..) take a types.Body
* core: fixup comments, remove txs != receipts panic
* core/types: add empty withdrawals to body if len == 0
This change removes support for subscribing to pending logs.
"Pending logs" were always an odd feature, because it can never be fully reliable. When support for it was added many years ago, the intention was for this to be used by wallet apps to show the 'potential future token balance' of accounts, i.e. as a way of notifying the user of incoming transfers before they were mined. In order to generate the pending logs, the node must pick a subset of all public mempool transactions, execute them in the EVM, and then dispatch the resulting logs to API consumers.
Adds a flag `--trace.callframes` to t8n which will log info when entering or exiting a call frame in addition to the execution steps.
---------
Co-authored-by: Mario Vega <marioevz@gmail.com>
Here we add a Go API for running tracing plugins within the main block import process.
As an advanced user of geth, you can now create a Go file in eth/tracers/live/, and within
that file register your custom tracer implementation. Then recompile geth and select your tracer
on the command line. Hooks defined in the tracer will run whenever a block is processed.
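As a rough sketch of what such a file can look like (hook names and exact signatures should be checked against core/tracing and the built-in tracers in eth/tracers/live; the tracer below is hypothetical):

```go
package live

import (
	"encoding/json"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/tracing"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/eth/tracers"
	"github.com/ethereum/go-ethereum/log"
)

func init() {
	// Register under a name that can then be selected on the geth command line.
	tracers.LiveDirectory.Register("txcounter", newTxCounter)
}

type txCounter struct{ n uint64 }

// newTxCounter builds the hook struct; only the hooks we care about are set,
// all others stay nil and simply won't be called.
func newTxCounter(_ json.RawMessage) (*tracing.Hooks, error) {
	t := new(txCounter)
	return &tracing.Hooks{
		OnTxStart: t.onTxStart,
	}, nil
}

func (t *txCounter) onTxStart(vm *tracing.VMContext, tx *types.Transaction, from common.Address) {
	t.n++
	log.Info("Live tracer saw transaction", "hash", tx.Hash(), "from", from, "count", t.n)
}
```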
The hook system is defined in package core/tracing. It uses a struct with callbacks, instead of
requiring an interface, for several reasons:
- We plan to keep this API stable long-term. The core/tracing hook API does not depend
on deep geth internals.
- There are a lot of hooks, and tracers will only need some of them. Using a struct allows you
to implement only the hooks you want to actually use.
All existing tracers in eth/tracers/native have been rewritten to use the new hook system.
This change breaks compatibility with the vm.EVMLogger interface that we used to have.
If you are a user of vm.EVMLogger, please migrate to core/tracing, and sorry for breaking
your stuff. But we just couldn't have both the old and new tracing APIs coexist in the EVM.
---------
Co-authored-by: Matthieu Vachon <matthieu.o.vachon@gmail.com>
Co-authored-by: Delweng <delweng@gmail.com>
Co-authored-by: Martin HS <martin@swende.se>
This pull request introduces a database tool for inspecting the state history.
It can be used for either account history or storage slot history, within a
specific block range.
The state output format can be either
- the "rlp-encoded" values (those inserted into the merkle trie), or
- the "rlp-decoded" values (the raw state values).
The latter requires the --raw flag.
This adds support for the Deneb beacon chain fork, and fork handling
in general, to the beacon chain light client implementation.
Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
This change makes it so that when `evm statetest` executes, regardless of whether `--json` is specified or not, the state root is printed to `stderr` as a `jsonl` line. This enables speedier execution of test cases in goevmlab, in cases where full op-by-op execution is not required.
Package filepath implements utility routines for manipulating filename paths in a way compatible with the target operating system-defined file paths.
Package path implements utility routines for manipulating slash-separated paths.
The path package should only be used for paths separated by forward slashes, such as the paths in URLs.
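The practical difference, using only the standard library:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path always uses forward slashes, regardless of the operating system.
	fmt.Println(path.Join("a", "b", "c")) // a/b/c

	// filepath uses the target OS separator: a/b/c on Linux, a\b\c on Windows.
	fmt.Println(filepath.Join("a", "b", "c"))

	// So URL paths go through path, on-disk locations through filepath.
	fmt.Println(path.Join("/eth/v1/beacon", "light_client", "updates"))
}
```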
Here we add a beacon chain light client for use by geth.
Geth can now be configured to run against a beacon chain API endpoint,
without pointing a CL to it. To set this up, use the `--beacon.api` flag. Information
provided by the beacon chain is verified, i.e. geth does not blindly trust the beacon
API endpoint in this mode. The root of trust is the beacon chain's 'sync committees'.
The configured beacon API endpoint must provide light client data. At this time, only
Lodestar and Nimbus provide the necessary APIs.
There is also a standalone tool, cmd/blsync, which uses the beacon chain light client
to drive any EL implementation via its engine API.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
* miner: untangle miner
* miner: use common.Hash instead of *types.Header
* cmd/geth: deprecate --mine
* eth: get rid of most miner api
* console: get rid of coinbase in welcome message
* miner/stress: get rid of the miner stress test
* eth: get rid of miner.setEtherbase
* ethstats: remove miner and hashrate flags
* cmd: rename pendingBlockProducer to miner.pending.feeRecipient flag
* miner: use pendingFeeRecipient instead of etherbase
* miner: add mutex to protect the pending block
* eth: get rid of etherbase mentions
* miner: no need to lock the coinbase
* eth, miner: fix linter
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* eth: drop support for forward sync triggers and head block packets
* consensus, eth: enforce always merged network
* eth: fix tx looper startup and shutdown
* cmd, core: fix some tests
* core: remove notion of future blocks
* core, eth: drop unused methods and types
Improving two things here:
On hive, where we look at these tests, the Go code comment above the test
is not visible. When there is a failure, it's not obvious what the test is actually
expecting. I have converted the comments into printed log messages to better
explain each test.
Second, I noticed that Besu is failing some tests because it happens to request
a header when we want it to send transactions. The minimal fix applied here is to
serve those headers.
Co-authored-by: lightclient <14004106+lightclient@users.noreply.github.com>
As mentioned in #26621, the block index format for era1 is not in line with the regular era block index. This change modifies the index so that all relative offsets are computed against the beginning of the block index record.
* all: implement era format, add history importer/exporter
* internal/era/e2store: refactor e2store to provide ReadAt interface
* internal/era/e2store: export HeaderSize
* internal/era: refactor era to use ReadAt interface
* internal/era: elevate anonymous func to named
* cmd/utils: don't store entire era file in-memory during import / export
* internal/era: better abstraction between era and e2store
* cmd/era: properly close era files
* cmd/era: don't let defers stack
* cmd/geth: add description for import-history
* cmd/utils: better bytes buffer
* internal/era: error if accumulator has more records than max allowed
* internal/era: better doc comment
* internal/era/e2store: rm superfluous reader, rm superfluous testcases, add fuzzer
* internal/era: avoid some repetition
* internal/era: simplify clauses
* internal/era: unexport things
* internal/era,cmd/utils,cmd/era: change to iterator interface for reading era entries
* cmd/utils: better defer handling in history test
* internal/era,cmd: add number method to era iterator to get the current block number
* internal/era/e2store: avoid double allocation during write
* internal/era,cmd/utils: fix lint issues
* internal/era: add ReaderAt func so entry value can be read lazily
Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* internal/era: improve iterator interface
* internal/era: fix rlp decode of header and correctly read total difficulty
* cmd/era: fix rebase errors
* cmd/era: clearer comments
* cmd,internal: fix comment typos
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
This change makes use of uint256 to represent balance in state. It touches primarily upon statedb, stateobject and state processing, trying to avoid changes in transaction pools, core types, rpc and tracers.
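For context, a minimal example of the value type this switches to, assuming the github.com/holiman/uint256 package is the one in question:

```go
package main

import (
	"fmt"

	"github.com/holiman/uint256"
)

func main() {
	// A fixed-size 256-bit unsigned integer: no separately allocated limbs like
	// big.Int, and arithmetic writes into the receiver.
	balance := uint256.NewInt(1_000_000)
	balance.Add(balance, uint256.NewInt(42))
	fmt.Println(balance.Uint64()) // 1000042

	// Conversion at the boundaries (RPC, tracers, pools) that keep big.Int.
	asBig := balance.ToBig()
	back, overflow := uint256.FromBig(asBig)
	fmt.Println(asBig, back, overflow)
}
```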
When managing geth, it is sometimes desirable to do a partial wipe: deleting state but retaining freezer data. A partial wipe can be somewhat tricky to accomplish.
This change makes that possible by allowing `geth removedb` to run non-interactively, using command line options instead.
The original problem was caused by #28595, where we made it so that as soon as we start to sync, the root of the disk layer is deleted. That is not wrong per se, but another part of the code uses the "presence of the root" as an init check for the pathdb. And since that init check now failed, the code tried to re-initialize the pathdb, which failed because a sync was already ongoing.
The total impact being: after a state-sync has begun, if the node for some reason is shut down, it will refuse to start up again, with the error message: `Fatal: Failed to register the Ethereum service: waiting for sync.`.
This change also modifies how `geth removedb` works, so that the user is prompted for two things: `state data` and `ancient chain`. The former includes both the chaindb as well as any state history stored in ancients.
---------
Co-authored-by: Martin HS <martin@swende.se>
Here we update the eth and snap protocol test suites with a new test chain,
created by the hivechain tool. The new test chain uses proof-of-stake. As such,
tests using PoW block propagation in the eth protocol are removed. The test suite
now connects to the node under test using the engine API in order to make it
accept transactions.
The snap protocol test suite has been rewritten to output test descriptions and
log requests more verbosely.
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This change fixes a problem with our non-core binaries: evm, clef, bootnode.
First of all, they failed to convert the legacy log levels (1 to 5) to the new slog log levels (-4 to 4).
Secondly, the logging was actually set up in the init phase and then overridden in main. This is not needed for evm, since it uses the same flag name as the main geth verbosity. Better to let the flags/internal package handle the logging init.