This change makes use of the new code generator rlp/rlpgen to improve the
performance of RLP encoding for Header and StateAccount. It also speeds up
encoding of ReceiptForStorage using the new rlp.EncoderBuffer API.
The change is much less transparent than I wanted it to be, because Header and
StateAccount now have an EncodeRLP method defined with pointer receiver. It
used to be possible to encode non-pointer values of these types, but the new
method prevents that, and attempting to encode unaddressable values (even if
part of another value) will return an error. The error can be surprising and may
pop up in places that previously didn't expect any errors.
To make things work, I also needed to update all code paths (mostly in unit tests)
that lead to encoding of non-pointer values, and pass a pointer instead.
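To illustrate the new requirement, here is a minimal, self-contained sketch (the `Point` type is purely illustrative, not a geth type): once `EncodeRLP` is defined with a pointer receiver, the custom encoder is only used for addressable values, so callers must pass a pointer.

```go
package main

import (
	"fmt"
	"io"

	"github.com/ethereum/go-ethereum/rlp"
)

// Point stands in for Header/StateAccount: its EncodeRLP has a pointer
// receiver, mirroring the generated encoders.
type Point struct{ X, Y uint64 }

func (p *Point) EncodeRLP(w io.Writer) error {
	return rlp.Encode(w, []uint64{p.X, p.Y})
}

func main() {
	p := Point{1, 2}

	// Encoding the pointer works and uses the custom encoder.
	enc, err := rlp.EncodeToBytes(&p)
	fmt.Printf("ptr:   %x err=%v\n", enc, err)

	// Encoding the unaddressable value returns an error instead,
	// which is the surprising behaviour described above.
	_, err = rlp.EncodeToBytes(p)
	fmt.Printf("value: err=%v\n", err)
}
```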
Benchmark results:
name old time/op new time/op delta
EncodeRLP/legacy-header-8 328ns ± 0% 237ns ± 1% -27.63% (p=0.000 n=8+8)
EncodeRLP/london-header-8 353ns ± 0% 247ns ± 1% -30.06% (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8 237ns ± 0% 123ns ± 0% -47.86% (p=0.000 n=8+7)
EncodeRLP/receipt-full-8 297ns ± 0% 301ns ± 1% +1.39% (p=0.000 n=8+8)
name old speed new speed delta
EncodeRLP/legacy-header-8 1.66GB/s ± 0% 2.29GB/s ± 1% +38.19% (p=0.000 n=8+8)
EncodeRLP/london-header-8 1.55GB/s ± 0% 2.22GB/s ± 1% +42.99% (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8 38.0MB/s ± 0% 64.8MB/s ± 0% +70.48% (p=0.000 n=8+7)
EncodeRLP/receipt-full-8 910MB/s ± 0% 897MB/s ± 1% -1.37% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
EncodeRLP/legacy-header-8 0.00B 0.00B ~ (all equal)
EncodeRLP/london-header-8 0.00B 0.00B ~ (all equal)
EncodeRLP/receipt-for-storage-8 64.0B ± 0% 0.0B -100.00% (p=0.000 n=8+8)
EncodeRLP/receipt-full-8 320B ± 0% 320B ± 0% ~ (all equal)
* eth/tracers: add initial native prestate tracer
* fix balance hex
* handle prestate for tx from and to
* drop created contract from prestate
* fix sender balance
* use switch instead
Co-authored-by: Martin Holst Swende <martin@swende.se>
* minor fix
* lookup create2 account
* mv code around a bit
* check stackLen for create2
* fix transfer tx for js prestate tracer
* fix create2 addr
* track extcodehash in js prestate tracer
Co-authored-by: Martin Holst Swende <martin@swende.se>
* eth, miner: remove duplicated code
* eth/catalyst: remove unneeded code
* miner: keep updating the pending state even after the Merge has happened
* eth, miner: rebase
* miner: fix tests
* eth, miner: address comments from marius
* miner: use empty zero randomness for pending blocks after the merge
* eth/catalyst: gofmt
* miner: add warning log for state recovery
* miner: ignore uncles for post-merge blocks
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* eth/catalyst: evict old payloads, type PayloadID
* eth/catalyst: added tracing info to engine api
* eth/catalyst: add test for create payload timestamps
* catalyst: better logs
* eth/catalyst: computePayloadId return style
* catalyst: add queue for payloads
* eth/catalyst: nitpicks
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* core: implement eip-4399 random opcode
* core: make vmconfig threadsafe
* core: miner: pass vmConfig by value not reference
* all: enable 4399 by Rules
* core: remove diff (f)
* tests: set proper difficulty (f)
* smaller diff (f)
* eth/catalyst: nit
* core: make RANDOM a pointer which is only set post-merge
* cmd/evm/internal/t8ntool: fix t8n tracing of 4399
* tests: set difficulty
* cmd/evm/internal/t8ntool: check that baserules are london before applying the merge chainrules
This PR reduces the amount of work we do when answering header queries, e.g. when a peer
is syncing from us.
For some items, e.g block bodies, when we read the rlp-data from database, we plug it
directly into the response package. We didn't do that for headers, but instead read
headers-rlp, decode to types.Header, and re-encode to rlp. This PR changes that to keep it
in RLP-form as much as possible. When a node is syncing from us, it typically requests 192
contiguous headers. On master it has the following effect:
- For headers not in ancient: 2 db lookups. One for translating hash->number (even though
the request is by number), and another for reading by hash (this latter one is sometimes
cached).
- For headers in ancient: 1 file lookup/syscall for translating hash->number (even though
the request is by number), and another for reading the header itself. After this, it
also performs a hashing of the header, to ensure that the hash is what was expected. In
this PR, I instead move the logic for "give me a sequence of blocks" into the lower
layers, where the database can determine how and what to read from leveldb and/or
ancients.
There are basically four types of requests; three of them are improved this way. The
fourth, by hash going backwards, is more tricky to optimize. However, since we know that
the gap is 0, we can look up by the parentHash, and still shave off all the number->hash
lookups.
The gapped collection can be optimized similarly, as a follow-up, at least in three out of
four cases.
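A rough sketch of the contiguous-headers path described above, assuming a hypothetical `readHeaderRLPByNumber` accessor that stands in for the new lower-level database logic; the point is that the raw header RLP is plugged straight into the response without a decode/re-encode round trip.

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/rlp"
)

// readHeaderRLPByNumber is a hypothetical stand-in for the lower-level
// accessor: it returns the raw header RLP for a block number (from leveldb
// or ancients), or nil when the header is missing.
func readHeaderRLPByNumber(db ethdb.Reader, number uint64) rlp.RawValue {
	// ... canonical-hash lookup and leveldb/ancient read elided ...
	return nil
}

// serveContiguousHeaders answers a "give me N contiguous headers" request by
// appending raw RLP directly, instead of decoding to types.Header first.
func serveContiguousHeaders(db ethdb.Reader, origin, count uint64) []rlp.RawValue {
	headers := make([]rlp.RawValue, 0, count)
	for n := origin; n < origin+count; n++ {
		data := readHeaderRLPByNumber(db, n)
		if len(data) == 0 {
			break // stop at the first missing header
		}
		headers = append(headers, data)
	}
	return headers
}
```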
Co-authored-by: Felix Lange <fjl@twurst.com>
* eth/tracers: Add support for REVERT in evmdis_tracer
* evm/tracers: Fix evmdis_tracer to use SELFDESTRUCT instead of SUICIDE
* eth/tracers: Regenerate tracer library
* core/vm: Move interpreter.ReadOnly check into the opcode implementations
Also remove the same check from the interpreter inner loop.
* core/vm: Remove obsolete operation.writes flag
* core/vm: Capture fault states in logger
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/vm: Remove panic added for testing
Co-authored-by: Martin Holst Swende <martin@swende.se>
* all: work for eth1/2 transition
* consensus/beacon, eth: change beacon difficulty to 0
* eth: updates
* all: add terminalBlockDifficulty config, fix rebasing issues
* eth: implemented merge interop spec
* internal/ethapi: update to v1.0.0.alpha.2
This commit updates the code to the new spec, moving payloadId into
its own object. It also fixes an issue with finalizing an empty blockhash
and properly sets the basefee.
* all: sync polishes, other fixes + refactors
* core, eth: correct semantics for LeavePoW, EnterPoS
* core: fixed rebasing artifacts
* core: light: performance improvements
* core: use keyed field (f)
* core: eth: fix compilation issues + tests
* eth/catalyst: better error codes
* all: move Merger to consensus/, remove reliance on it in bc
* all: renamed EnterPoS and LeavePoW to ReachTTD and FinalizePoS
* core: make mergelogs a function
* core: use InsertChain instead of InsertBlock
* les: drop merger from lightchain object
* consensus: add merger
* core: recoverAncestors in catalyst mode
* core: fix nitpick
* all: removed merger from beacon, use TTD, nitpicks
* consensus: eth: add docstring, removed unnecessary code duplication
* consensus/beacon: better comment
* all: easy to fix nitpicks by karalabe
* consensus/beacon: verify known headers to be sure
* core: comments
* core: eth: don't drop peers who advertise blocks, nitpicks
* core: never add beacon blocks to the future queue
* core: fixed nitpicks
* consensus/beacon: simplify IsTTDReached check
* consensus/beacon: correct IsTTDReached check
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* all: mv loggers to eth/tracers
* core/vm: minor
* eth/tracers: tmp comment out testStoreCapture
* eth/tracers: uncomment and fix logger test
* eth/tracers: simplify test
* core/vm: re-add license
* core/vm: minor
* rename LogConfig to Config
Some benchmarks in eth/filters were not good: they weren't reproducible, relying on geth chaindata to be present.
Another one was rejected because the receipt was lacking a backing transaction.
The p2p simulation benchmark had a lot of the warnings below, due to the framework calling both
Stop() and Close(). Apparently, the simulated adapter is the only implementation which has a Close(),
and there is no need to call both Stop and Close on it.
This doesn't fix all go-critic warnings, just the most serious ones.
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* eth,rpc: allow for flag configured timeouts for eth_call
* lint: account for package-local import order
* cr: rename `rpc.calltimeout` to `rpc.evmtimeout`
This removes some code:
- The clique engine calculated the snapshot twice when verifying headers/blocks.
- The method GetBlockHashesFromHash in Header/Block/Lightchain was only used by tests. It
is now removed from the API.
- The method GetTdByHash internally looked up the number before calling GetTd(hash, num).
In many cases, callers already had the number, and used this method just because it has a
shorter name. I have removed the method to make the API surface smaller.
This fixes a data race on worker.current by moving the call to StopPrefetcher
into the main loop.
The commit also contains fixes for two other races in unit tests of unrelated packages.
Fixes #23681
After the fix I get the address 0x6d6d02e83c4ced98204e20126acf27e9d87b8af2 for the
tx mentioned in the ticket, which agrees with etherscan.
The test did not synchronize with per-case goroutines, and thus didn't notice
that some tests were just hanging. This change adds missing synchronization
and fixes the broken tests.
This PR fixes an issue in traceChain, where the statedb Commit operation was performed asynchronously with dereference operations against the underlying trie.Database instance. Due to how reference counting works within the trie database (where the parent count is recursively updated when new parents are added), dereferencing in the middle of Commit can cause the refcount to become wrong, leading to an inconsistent state.
This was fixed by doing Commit/Deref from the same routine.
* core/types: rm extraneous check in test
* core/rawdb: add lightweight types for block logs
* core/rawdb,eth: use lightweight accessor for log filtering
* core/rawdb: add bench for decoding into rlpLogs
This PR implements a new debug method, which I've talked briefly about to some other client developers. It allows the caller to obtain the intermediate state roots for a block (which might be either a canon block or a 'bad' block).
This change introduces 2 new optional methods, `enter()` and `exit()`, for js tracers, and makes `step()` optional. The two new methods are invoked when entering and exiting a call frame (but not for the outermost scope, which has its own methods). Currently these are the data fields passed to each of them:
enter: type (opcode), from, to, input, gas, value
exit: output, gasUsed, error
The PR also comes with a rewrite of the callTracer. As a backup we keep the previous tracing script under the name `callTracerLegacy`. The behaviour of both tracers is equivalent for the most part, although there are some small differences (improvements) where the new tracer is more correct / has more information.
This adds a check to verify that a sender account does not have code, which means that the codehash is either `emptyCodeHash` _OR_ not present. The latter occurs IFF the sender did not previously exist, a situation which can only occur with zero-cost gas prices.
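A minimal sketch of that check, assuming an illustrative `codeReader` interface and helper names (not the exact state-transition code):

```go
package sketch

import (
	"errors"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

var errSenderNotEOA = errors.New("sender not an EOA: sender has deployed code")

// emptyCodeHash is the hash of empty code, i.e. an existing account with no code.
var emptyCodeHash = crypto.Keccak256Hash(nil)

type codeReader interface {
	GetCodeHash(common.Address) common.Hash
}

func checkSenderHasNoCode(state codeReader, sender common.Address) error {
	ch := state.GetCodeHash(sender)
	// A zero hash means the account does not exist at all, which can only
	// happen for zero-cost gas prices; both cases are acceptable senders.
	if ch != (common.Hash{}) && ch != emptyCodeHash {
		return errSenderNotEOA
	}
	return nil
}
```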
* internal/ethapi/api: use hexutil.uint for blockCount parameter instead of int for feeHistory
* return hex value for oldestBlock instead of number
* return uint64 from oracle.resolveBlockRange
* eth/gasprice: fixed test
Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
When processing a transaction with London fork rules, EIP-1559 mandates
checking that the sender must have sufficient balance to cover gas * gasFeeCap.
In the EIP's pseudocode, this check happens after the value transferred by the
transaction has already been deducted. However, in go-ethereum, the balance
has not yet been updated when the check happens, and therefore needs to be
added explicitly.
Co-authored-by: Martin Holst Swende <martin@swende.se>
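A hedged sketch of the adjusted balance check (illustrative names, plain math/big; not the verbatim state-transition code): because the transferred value has not been deducted yet, it is added to the gas * gasFeeCap requirement explicitly.

```go
package sketch

import (
	"errors"
	"math/big"
)

var errInsufficientFunds = errors.New("insufficient funds for gas * gasFeeCap + value")

func checkEIP1559Balance(balance *big.Int, gasLimit uint64, gasFeeCap, value *big.Int) error {
	required := new(big.Int).SetUint64(gasLimit)
	required.Mul(required, gasFeeCap)
	// The value is added explicitly because the balance has not been
	// reduced by the transfer yet at this point.
	required.Add(required, value)
	if balance.Cmp(required) < 0 {
		return errInsufficientFunds
	}
	return nil
}
```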
Some tests take quite some time during exit, which I think causes
some appveyor fails like this:
https://ci.appveyor.com/project/ethereum/go-ethereum/builds/39511210/job/xhom84eg2e4uulq3
One of the things that seem to take time during exit is waiting
(up to 100ms) for the syncbloom to close. This PR changes it to use
a channel, instead of looping with a 100ms wait.
This also includes some unrelated changes improving the reliability of
eth/fetcher tests, which fail a lot because they are time-dependent.
* core,eth/tracers: make isPrecompiled dependent on HF
* eth/tracers: use keys when constructing chain config struct
* eth/tracers: dont initialize activePrecompiles with random value
This change increases the cache size from 64 to 256 Mb for block bodies.
Benchmarks have shown this to be one bottleneck when trying to achieve
higher download speeds.
The commit also includes a minor optimization for header inserts in package
core: previously, the presence of headers in the database was checked for
every header before writing it. With the change, if one header fails the
presence check, all subsequent headers are also assumed to be missing.
This is an improvement because in practice, the headers are almost always
missing during sync.
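An illustrative sketch of the header-insert short-circuit, with made-up names: once one header fails the presence check, the remainder of the batch is assumed to be missing as well.

```go
package sketch

import "github.com/ethereum/go-ethereum/core/types"

type presenceChecker func(*types.Header) bool

// filterKnownHeaders skips the already-known prefix of a header batch and
// returns the suffix that still needs to be written. The presence check is
// abandoned at the first miss, since during sync the rest is almost always
// missing too.
func filterKnownHeaders(headers []*types.Header, hasHeader presenceChecker) []*types.Header {
	var firstMissing int
	for ; firstMissing < len(headers); firstMissing++ {
		if !hasHeader(headers[firstMissing]) {
			break // assume every later header is missing as well
		}
	}
	return headers[firstMissing:]
}
```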
* accounts/abi/bind: fix bounded contracts and sim backend for 1559
* accounts/abi/bind, ethclient: don't rely on chain config for gas prices
* all: enable London for all internal tests
* les: get receipt type info in les tests
* les: fix weird test
Co-authored-by: Martin Holst Swende <martin@swende.se>
* internal/ethapi: add baseFee to RPCMarshalHeader
* internal/ethapi: add FeeCap, Tip and correct GasPrice to EIP-1559 RPCTransaction results
* core,eth,les,internal: add support for tip estimation in gas price oracle
* internal/ethapi,eth/gasprice: don't suggest tip larger than fee cap
* core/types,internal: use correct eip1559 terminology for json marshalling
* eth, internal/ethapi: fix rebase problems
* internal/ethapi: fix rpc name of basefee
* internal/ethapi: address review concerns
* core, eth, internal, les: simplify gasprice oracle (#25)
* core, eth, internal, les: simplify gasprice oracle
* eth/gasprice: fix typo
* internal/ethapi: minor tweak in tx args
* internal/ethapi: calculate basefee for pending block
* internal/ethapi: fix panic
* internal/ethapi, eth/tracers: simplify txargs ToMessage
* internal/ethapi: remove unused param
* core, eth, internal: fix regressions wrt effective gas price in the evm
* eth/gasprice: drop weird debug println
* internal/jsre/deps: hack in 1559 gas conversions into embedded web3
* internal/jsre/deps: hack baseFee to decimal conversion
* internal/ethapi: init feecap and tipcap for legacy txs too
* eth, graphql, internal, les: fix gas price suggestion on all combos
* internal/jsre/deps: handle decimal tipcap and feecap
* eth, internal: minor review fixes
* graphql, internal: export max fee cap RPC endpoint
* internal/ethapi: fix crash in transaction_args
* internal/ethapi: minor refactor to make the code safer
Co-authored-by: Ryan Schneider <ryanleeschneider@gmail.com>
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: gary rong <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
There are two transaction parameter structures defined in
the codebase, although for different purposes. But most of
the parameters are shared. So it's nice to reduce the code
duplication by merging them together.
Co-authored-by: Martin Holst Swende <martin@swende.se>
This removes the error log message that says
Ethereum peer removal failed ... err="peer not registered"
The error happened because removePeer was called multiple
times: once to disconnect the peer, and another time when the
handler exited. With this change, removePeer now has the sole
purpose of disconnecting the peer. Unregistering happens exactly
once, when the handler exits.
* core/types, miner: create TxWithMinerFee wrapper, add EIP-1559 support to TransactionsByMinerFeeAndNonce
miner: set base fee when creating a new header, handle gas limit, log miner fees
* all: rename to NewTransactionsByPriceAndNonce
* core/types, miner: rename to NewTransactionsByPriceAndNonce + EffectiveTip
miner: activate 1559 for testGenerateBlockAndImport tests
* core,miner: revert naming to TransactionsByPriceAndTime
* core/types/transaction: update effective tip calculation logic
* miner: update aleut to london
* core/types/transaction_test: use correct signer for 1559 txs + add back sender check
* miner/worker: calculate gas target from gas limit
* core, miner: fix block gas limits for 1559
Co-authored-by: Ansgar Dietrichs <adietrichs@gmail.com>
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
This change extracts the peer QoS tracking logic from eth/downloader, moving
it into the new package p2p/msgrate. The job of msgrate.Tracker is determining
suitable timeout values and request sizes per peer.
The snap sync scheduler now uses msgrate.Tracker instead of the hard-coded 15s
timeout. This should make the sync work better on network links with high latency.
This is the initial implementation of EIP-1559 in packages core/types and core.
Mining, RPC, etc. will be added in subsequent commits.
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This removes auto-configuration of the snap.*.ethdisco.net DNS discovery tree.
Since measurements have shown that > 75% of nodes in all.*.ethdisco.net support
snap, we have decided to retire the dedicated index for snap and just use the eth
tree instead.
The dial iterators of eth and snap now use the same DNS tree in the default configuration,
so both iterators should use the same DNS discovery client instance. This ensures that
the record cache and rate limit are shared. Records will not be requested multiple times.
While testing the change, I noticed that duplicate DNS requests do happen even
when the client instance is shared. This is because the two iterators request the tree
root, link tree root, and first levels of the tree in lockstep. To avoid this problem, the
change also adds a singleflight.Group instance in the client. When one iterator
attempts to resolve an entry which is already being resolved, the singleflight object
waits for the existing resolve call to finish and returns the entry to both places.
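A minimal sketch of the de-duplication idea using golang.org/x/sync/singleflight, as described above; `resolveEntry` and `fetchEntryFromDNS` are hypothetical stand-ins for the real dnsdisc resolver methods.

```go
package sketch

import (
	"context"

	"golang.org/x/sync/singleflight"
)

type cachedResolver struct {
	group singleflight.Group
}

func (r *cachedResolver) resolveEntry(ctx context.Context, domain, hash string) (interface{}, error) {
	// Both the eth and snap dial iterators may ask for the same entry at the
	// same time; singleflight collapses those into a single network request
	// and hands the result to every waiter.
	entry, err, _ := r.group.Do(domain+":"+hash, func() (interface{}, error) {
		return fetchEntryFromDNS(ctx, domain, hash)
	})
	return entry, err
}

// fetchEntryFromDNS stands in for the actual TXT record lookup and entry parsing.
func fetchEntryFromDNS(ctx context.Context, domain, hash string) (interface{}, error) {
	return nil, nil
}
```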
* eth/protocols/snap: generate storage trie from full dirty snap data
* eth/protocols/snap: get rid of some more dead code
* eth/protocols/snap: less frequent logs, also log during trie generation
* eth/protocols/snap: implement dirty account range stack-hashing
* eth/protocols/snap: don't loop on account trie generation
* eth/protocols/snap: fix account format in trie
* core, eth, ethdb: glue snap packets together, but not chunks
* eth/protocols/snap: print completion log for snap phase
* eth/protocols/snap: extended tests
* eth/protocols/snap: make testcase pass
* eth/protocols/snap: fix account stacktrie commit without defer
* ethdb: fix key counts on reset
* eth/protocols: fix typos
* eth/protocols/snap: make better use of delivered data (#44)
* eth/protocols/snap: make better use of delivered data
* squashme
* eth/protocols/snap: reduce chunking
* squashme
* eth/protocols/snap: reduce chunking further
* eth/protocols/snap: break out hash range calculations
* eth/protocols/snap: use sort.Search instead of looping
* eth/protocols/snap: prevent crash on storage response with no keys
* eth/protocols/snap: nitpicks all around
* eth/protocols/snap: clear heal need on 1-chunk storage completion
* eth/protocols/snap: fix range chunker, add tests
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* trie: fix test API error
* eth/protocols/snap: fix some further linter issues
* eth/protocols/snap: fix accidental batch reuse
Co-authored-by: Martin Holst Swende <martin@swende.se>
* eth/protocols, p2p/tracker: add support for req/rep rtt tracking
* p2p/tracker: sanity cap the number of pending requests
* p2p/tracker: linter <3
* p2p/tracker: disable entire tracker if no metrics are enabled
This change adds the --catalyst flag, enabling an RPC API for eth2 integration.
In this initial version, catalyst mode also disables all peer-to-peer networking.
Co-authored-by: Mikhail Kalinin <noblesse.knight@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
* all: add thousands separators for big numbers on log messages
* p2p/sentry: drop accidental file
* common, log: add fast number formatter
* common, eth/protocols/snap: simplify fancy num types
* log: handle nil big ints
* core/vm: implement AccessListTracer
* eth: implement debug.createAccessList
* core/vm: fixed nil panics in accessListTracer
* eth: better error messages for createAccessList
* eth: some fixes on CreateAccessList
* eth: allow for provided accesslists
* eth: pass accesslist by value
* eth: remove created account from accesslist
* core/vm: simplify access list tracer
* core/vm: unexport accessListTracer
* eth: return best guess if al iteration times out
* eth: return best guess if al iteration times out
* core: docstring, unexport methods
* eth: typo
* internal/ethapi: move createAccessList to eth package
* internal/ethapi: remove reexec from createAccessList
* internal/ethapi: break if al is equal to last run, not if gas is equal
* internal/web3ext: fixed arguments
* core/types: fixed equality check for accesslist
* core/types: no hardcoded vals
* core, internal: simplify access list generation, make it precise
* core/vm: fix typo
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This removes the duplicated definition of eth_chainID
in package eth and updates the definition in internal/ethapi
to treat chain ID as a bigint.
Co-authored-by: Felix Lange <fjl@twurst.com>
The PR implements the --miner.notify.full flag that enables full pending block
notifications. When this flag is used, the block notifications sent to mining
endpoints contain the complete block header JSON instead of a work package
array.
Co-authored-by: AlexSSD7 <alexandersadovskyi7@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Fixes the CaptureStart API to include the EVM, making it possible to set the statedb early on. This PR also exposes the struct we used internally in the interpreter to encapsulate the contract, memory, stack and return stack, so it is passed as a single struct to the tracer, and removes the error returns from the capture methods.
This adds support for EIP-2718 typed transactions as well as EIP-2930
access list transactions (tx type 1). These EIPs are scheduled for the
Berlin fork.
There are very few changes to existing APIs in core/types, but several new APIs
to deal with access list transactions. In particular, there are two new
constructor functions for transactions: types.NewTx and types.SignNewTx.
Since the canonical encoding of typed transactions is not RLP-compatible,
Transaction now has new methods for encoding and decoding: MarshalBinary
and UnmarshalBinary.
The existing EIP-155 signer does not support the new transaction types.
All code dealing with transaction signatures should be updated to use the
newer EIP-2930 signer. To make this easier for future updates, we have
added new constructor functions for types.Signer: types.LatestSigner and
types.LatestSignerForChainID.
This change also adds support for the YoloV3 testnet.
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Ryan Schneider <ryanleeschneider@gmail.com>
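A short, hedged example of the new constructor and signing APIs (all values are placeholders): build an EIP-2930 access-list transaction, sign it with the latest signer, and use the new canonical encoding.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	chainID := big.NewInt(1)
	signer := types.LatestSignerForChainID(chainID)

	to := common.HexToAddress("0x0000000000000000000000000000000000000001")
	txdata := &types.AccessListTx{
		ChainID:  chainID,
		Nonce:    0,
		To:       &to,
		Value:    big.NewInt(1),
		Gas:      30000,
		GasPrice: big.NewInt(1),
		AccessList: types.AccessList{
			{Address: to, StorageKeys: []common.Hash{{}}},
		},
	}
	// types.NewTx(txdata) builds an unsigned transaction; SignNewTx builds
	// and signs it in one step.
	tx, err := types.SignNewTx(key, signer, txdata)
	if err != nil {
		panic(err)
	}
	enc, _ := tx.MarshalBinary() // canonical (typed) encoding, not plain RLP
	fmt.Printf("tx type %d, %d bytes\n", tx.Type(), len(enc))
}
```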
This PR adds a new CLI flag so that the les server can serve light clients even if the local node is not yet synced.
This functionality is needed in some testing environments (e.g. hive). After launching the les server, no more blocks will be imported, so the node is always marked as "non-synced".
This PR prevents users from submitting transactions without EIP-155 enabled. This behaviour can be overridden by specifying the flag --rpc.allow-unprotected-txs=true.
This PR optimizes the broadcast loop. Instead of iterating twice through a given set of transactions to weed out which peers have and which do not have a tx before sending/announcing transactions, we now do it only once.
Prevents a situation where we (not running snap) connect to a peer running snap and get stalled waiting for the snap registration to succeed (which will never happen), causing a waitgroup wait to halt shutdown.
This moves the eth config definition into a separate package, eth/ethconfig.
Packages eth and les can now import this common package instead of
importing eth from les, reducing dependencies.
Co-authored-by: Felix Lange <fjl@twurst.com>
The PR makes use of the stacktrie, which is more lenient on resource consumption than the regular trie, in cases where we only need it for DeriveSha.
* remove unneeded conversion type
* remove redundant type in composite literal
* omit explicit type where implicit
* remove unused redundant parenthesis
* remove redundant import alias duktape
Removes the yolov2 definition, adds yolov3, including EIP-2565. This PR also disables some of the erroneously generated blockchain and statetests, and adds the new genesis hash + alloc for yolov3.
This PR disables the CLI switches for yolo, since it's not complete until we merge support for 2930.
This moves the tracing RPC API implementation to package eth/tracers.
By doing so, package eth no longer depends on tracing and the duktape JS engine.
The change also enables tracing using the light client. All tracing methods work with the
light client, but it's a lot slower compared to using a full node.
This PR introduces a new config field SyncFromCheckpoint for light client.
In some special scenarios, it's required to start synchronization from some
arbitrary checkpoint or even from scratch. So this PR offers this
flexibility to users so that the synchronization start point can be configured.
There are two relevant configs: SyncFromCheckpoint and Checkpoint.
- If the SyncFromCheckpoint is true, the light client will try to sync from the
specified checkpoint.
- If the Checkpoint is not configured, then the light client will sync from the
scratch (from the latest header if the database is not empty).
Additional notes: these two configs are not visible in the CLI flags but only
accessible in the config file.
Example Usage:
[Eth]
SyncFromCheckpoint = true
[Eth.Checkpoint]
SectionIndex = 100
SectionHead = "0xabc"
CHTRoot = "0xabc"
BloomRoot = "0xabc"
PS. Historical checkpoints can be retrieved from a synced full node or light
client via the les_getCheckpoint API.
This changes the chainID RPC method to return an error when EIP-155 is not yet
active at the current block height. It used to simply return zero in this case, but
that's confusing.
During the snap and eth refactor, the net_version RPC call was mistakenly deprecated.
This restores the net_version RPC handler as most eth2 nodes and other software
depend on it.
This commit splits the eth package, separating the handling of eth and snap protocols. It also includes the capability to run snap sync (https://github.com/ethereum/devp2p/blob/master/caps/snap.md) , but does not enable it by default.
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR implements an unclean shutdown marker. Every time geth boots, it adds a timestamp to a list of timestamps in the database. This list is capped at 10. At a clean shutdown, the timestamp is removed again.
Thus, when geth exits uncleanly, the marker remains, and at boot up we show the most recent unclean shutdowns to the user, which makes it easier to diagnose the root causes of certain problems.
Co-authored-by: Nagy Salem <me@muhnagy.com>
In miner/worker.go, there are two goroutines using the channel w.newWorkCh: newWorkerLoop() sends to this channel, and mainLoop() receives from it. Only the receive operation is in a select.
However, w.exitCh may be closed by another goroutine. This is fine for the receive, since the receive is in a select, but if the send operation blocks, it will block forever. This commit puts the send in a select, so it won't block even if w.exitCh is closed.
Similarly, there are two goroutines using the channel errc: the parent that runs the test receives from it, and the child created at line 573 sends to it. If the parent goroutine exits too early by calling t.Fatalf() at line 614, then the child goroutine will be blocked at line 574 forever. This commit gives errc a buffer of 1. Now the send will not block, and the receive is not influenced because it still needs to wait for the send.
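A tiny sketch of the first fix (the surrounding worker type is elided; channel names mirror the description): wrapping the send in a select on the exit channel lets a closed `w.exitCh` unblock it. The second fix is simply `errc := make(chan error, 1)` so the child's send can never block.

```go
package sketch

// submitWork sends a work request to mainLoop without risking a permanent
// block: if exitCh is closed during shutdown, the select falls through.
func submitWork(newWorkCh chan<- struct{}, exitCh <-chan struct{}) {
	select {
	case newWorkCh <- struct{}{}:
		// mainLoop picked up the request
	case <-exitCh:
		// the worker is shutting down; drop the request instead of blocking forever
	}
}
```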
* all: core: split vm.Config into BlockConfig and TxConfig
* core: core/vm: reset EVM between tx in block instead of creating new
* core/vm: added docs
core/types: use stacktrie for derivesha
trie: add stacktrie file
trie: fix linter
core/types: use stacktrie for derivesha
rebased: adapt stacktrie to the newer version of DeriveSha
Co-authored-by: Martin Holst Swende <martin@swende.se>
More linter fixes
review feedback: no key offset for nodes converted to hashes
trie: use EncodeRLP for full nodes
core/types: insert txs in order in derivesha
trie: tests for derivesha with stacktrie
trie: make stacktrie use pooled hashers
trie: make stacktrie reuse tmp slice space
trie: minor polishes on stacktrie
trie/stacktrie: less rlp dancing
core/types: explain the contortions in DeriveSha
ci: fix goimport errors
trie: clear mem on subtrie hashing
squashme: linter fix
stacktrie: use pooling, less allocs (#3)
trie: in-place hex prefix, reduce allocs and add rawNode.EncodeRLP
Reintroduce the `[]node` method, add the missing `EncodeRLP` implementation for `rawNode` and calculate the hex prefix in place.
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Martin Holst Swende <martin@swende.se>
This changes how the downloader works, a little bit. Previously, when block sync started,
we immediately started filling up to 8192 blocks. Usually this is fine, since blocks are small
at the early block numbers. The threshold is then lowered as we measure the size of the blocks
being filled.
However, if the node is shut down and restarts syncing while we're in a heavy segment,
that might be bad. This PR introduces a more conservative initial threshold of 2K blocks
instead.
* "Downloader queue stats" is now a DEBUG information
I think this info is more a DEBUG related information then an INFO. If it must remains an INFO, maybe it can be slow down to one time every 5 minutes or so.
* Update queue.go
"Downloader queue stats" information is now provided once every minute instead of once every 10 seconds.
This PR significantly changes the APIs for instantiating Ethereum nodes in
a Go program. The new APIs are not backwards-compatible, but we feel that
this is made up for by the much simpler way of registering services on
node.Node. You can find more information and rationale in the design
document: https://gist.github.com/renaynay/5bec2de19fde66f4d04c535fd24f0775.
There is also a new feature in Node's Go API: it is now possible to
register arbitrary handlers on the user-facing HTTP server. In geth, this
facility is used to enable GraphQL.
There is a single minor change relevant for geth users in this PR: The
GraphQL API is no longer available separately from the JSON-RPC HTTP
server. If you want GraphQL, you need to enable it using the
./geth --http --graphql flag combination.
The --graphql.port and --graphql.addr flags are no longer available.
* init
notes
removed some mentions of eth62, bumped protocol err too old to >=63
* remove sanity checks and bump supported protocol version up to 63
* remove 62 tests, still need to add 65
* remove 65 tests
* eth/downloader: refactor downloader + queue
downloader, fetcher: throttle-metrics, fetcher filter improvements, standalone resultcache
downloader: more accurate deliverytime calculation, less mem overhead in state requests
downloader/queue: increase underlying buffer of results, new throttle mechanism
eth/downloader: updates to tests
eth/downloader: fix up some review concerns
eth/downloader/queue: minor fixes
eth/downloader: minor fixes after review call
eth/downloader: testcases for queue.go
eth/downloader: minor change, don't set progress unless progress...
eth/downloader: fix flaw which prevented useless peers from being dropped
eth/downloader: try to fix tests
eth/downloader: verify non-deliveries against advertised remote head
eth/downloader: fix flaw with checking closed-status causing hang
eth/downloader: hashing avoidance
eth/downloader: review concerns + simplify resultcache and queue
eth/downloader: add back some locks, address review concerns
downloader/queue: fix remaining lock flaw
* eth/downloader: nitpick fixes
* eth/downloader: remove the *2*3/4 throttling threshold dance
* eth/downloader: print correct throttle threshold in stats
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This change introduces garbage collection for the light client. Historical
chain data is deleted periodically. If you want to disable the GC, use
the --light.nopruning flag.
This fixes two issues with state sync restarts:
When sync restarts with a new root, some peers can have in-flight requests.
Since all peers with active requests were marked idle when exiting sync,
the new sync would schedule more requests for those peers. When the
response for the earlier request arrived, the new sync would reject it and
mark the peer idle again, rendering the peer useless until it disconnected.
The other issue was that peers would not be marked idle when they had
delivered a response, but the response hadn't been processed before
restarting the state sync. This also made the peer useless because it
would be permanently marked busy.
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR reduces the bandwidth used by the light client to compute the
recommended gas price. The current mechanism for suggesting the price is:
- retrieve recent 20 blocks
- get the lowest gas price of these blocks
- sort the price array and return the middle (60%) one
This works for full nodes, which have all blocks available locally.
However, this is very expensive for the light client because the light
client needs to retrieve block bodies from the network.
The PR changes the default options for light client. With the new config,
the light client only retrieves the two latest blocks, but in order to
collect more sample transactions, the 3 lowest prices are collected from
each block.
This PR also changes the behavior for empty blocks. If the block is empty,
the latest price is reused for sampling.
* eth/downloader: fixed data race between synchronize and Progress
There was a race condition between `downloader.synchronize()` and `Progress`, `syncWithPeer`, `fetchHeight`, `findAncestors` and `processHeaders`.
This PR changes the behavior of the downloader a bit.
Previously the functions `Progress`, `syncWithPeer`, `fetchHeight`, `findAncestors` and `processHeaders` read the syncMode anew within their loops. Now they read the syncMode at the start of their function and don't change it during their runtime.
* eth/downloader: comment
* eth/downloader: added comment
* core, crypto: various allocation savings regarding tx handling
* core: reduce allocs for gas price comparison
This change reduces the allocations needed for comparing different transactions to each other.
A call to `tx.GasPrice()` copies the gas price as it has to be safe against modifications and
also needs to be threadsafe. For comparing and ordering different transactions we don't need
these guarantees.
* core: added tx.GasPriceIntCmp for comparison without allocation
adds a method to avoid the unneeded allocation when comparing against tx.gasPrice
* core/types: pool legacykeccak256 objects in rlpHash
rlpHash is by far the most used function in core that allocates a legacyKeccak256 object on each call.
Since it is so widely used, it makes sense to add pooling here so we relieve the GC (a sketch of the pooling pattern follows below).
On my machine these changes result in > 100 million fewer allocations and > 30 GB less allocated memory.
* reverted some changes
* reverted some changes
* trie: use crypto.KeccakState instead of replicating code
Co-authored-by: Martin Holst Swende <martin@swende.se>
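A hedged sketch of the rlpHash pooling pattern referenced above (close to, but not necessarily identical to, the final code): reuse Keccak256 hasher state via a sync.Pool instead of allocating a fresh hasher on each call.

```go
package sketch

import (
	"sync"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/rlp"
)

// hasherPool holds reusable Keccak256 hasher state between rlpHash calls.
var hasherPool = sync.Pool{
	New: func() interface{} { return crypto.NewKeccakState() },
}

func rlpHash(x interface{}) (h common.Hash) {
	sha := hasherPool.Get().(crypto.KeccakState)
	defer hasherPool.Put(sha)
	sha.Reset()
	rlp.Encode(sha, x)
	sha.Read(h[:]) // KeccakState supports Read, avoiding the Sum allocation
	return h
}
```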
* eth/downloader tests: fix spurious failing test due to race between receipts/headers
* miner tests: fix travis failure on arm64
* eth/downloader: tests - store td in ancients too
* core/vm: use fixed uint256 library instead of big
* core/vm: remove intpools
* core/vm: upgrade uint256, fixes uint256.NewFromBig
* core/vm: use uint256.Int by value in Stack
* core/vm: upgrade uint256 to v1.0.0
* core/vm: don't preallocate space for 1024 stack items (only 16)
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR makes use of go 1.13 error handling, wrapping errors and using
errors.Is to check a wrapped root-cause. It also removes the travis
builders for go 1.11 and go 1.12.
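A generic illustration of the Go 1.13 pattern the PR adopts (the error names are examples, not specific geth errors): wrap with %w and test the root cause with errors.Is.

```go
package sketch

import (
	"errors"
	"fmt"
)

var errBlockNotFound = errors.New("block not found")

func lookupBlock(number uint64) error {
	// Wrapping keeps the root cause attached to the contextual message.
	return fmt.Errorf("header query for block %d failed: %w", number, errBlockNotFound)
}

func isNotFound(err error) bool {
	// errors.Is unwraps the chain and matches the wrapped root cause.
	return errors.Is(err, errBlockNotFound)
}
```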
This adds a new API method on core.BlockChain to allow interrupting
running data inserts, and calls the method before shutting down the
downloader.
The BlockChain interrupt checks are now done through a method instead
of inlining the atomic load everywhere. There is no loss of efficiency from
this and it makes the interrupt protocol a lot clearer because the check is
defined next to the method that sets the flag.
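A condensed sketch of the interrupt pattern described above (field and method names are illustrative): the atomic flag is set by the new stop API and checked through a method instead of inlining the atomic load at every call site.

```go
package sketch

import "sync/atomic"

type blockChain struct {
	procInterrupt int32 // interrupt signaller for block processing
}

// StopInsert signals active insert calls to abort as soon as possible.
func (bc *blockChain) StopInsert() {
	atomic.StoreInt32(&bc.procInterrupt, 1)
}

// insertStopped reports whether an insert interrupt was requested; insert
// loops call this between units of work.
func (bc *blockChain) insertStopped() bool {
	return atomic.LoadInt32(&bc.procInterrupt) == 1
}
```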
This PR reimplements the light client server pool. It is also a first step
to move certain logic into a new lespay package. This package will contain
the implementation of the lespay token sale functions, the token buying and
selling logic and other components related to peer selection/prioritization
and service quality evaluation. Over the long term this package will be
reusable for incentivizing future protocols.
Since the LES peer logic is now based on enode.Iterator, it can now use
DNS-based fallback discovery to find servers.
This document describes the function of the new components:
https://gist.github.com/zsfelfoldi/3c7ace895234b7b345ab4f71dab102d4
* cmd, core, eth: init tx lookup in background
* core/rawdb: tiny log fixes to make it clearer what's happening
* core, eth: fix rebase errors
* core/rawdb: make reindexing less generic, but more optimal
* rlp: implement rlp list iterator
* core/rawdb: new implementation of tx indexing/unindex using generic tx iterator and hashing rlp-data
* core/rawdb, cmd/utils: fix review concerns
* cmd/utils: fix merge issue
* core/rawdb: add some log formatting polishes
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* all: separate consensus error and evm internal error
There are actually two types of error that can be returned when
a transaction/message call is executed: (a) a consensus error and
(b) an EVM internal error. The former should be converted to
a consensus issue, e.g. the sender doesn't have enough funds to
purchase the gas it specifies. The latter is allowed, since the
EVM itself is a black box and internal errors are allowed to happen.
This PR emphasizes the difference by introducing an executionResult
structure. The EVM error is embedded inside, so if any error is
returned directly, it indicates that a consensus issue happened.
This PR also improves the `EstimateGas` API to return the concrete
revert reason if the transaction always fails.
* all: polish
* accounts/abi/bind/backends: add tests
* accounts/abi/bind/backends, internal: cleanup error message
* all: address comments
* core: fix lint
* accounts, core, eth, internal: address comments
* accounts, internal: resolve revert reason if possible
* accounts, internal: address comments
This new API allows reading accounts and their content by address range.
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
* eth: improve shutdown synchronization
Most goroutines started by eth.Ethereum didn't have any shutdown sync at
all, which led to weird error messages when quitting the client.
This change improves the clean shutdown path by stopping all internal
components in dependency order and waiting for them to actually be
stopped before shutdown is considered done. In particular, we now stop
everything related to peers before stopping 'resident' parts such as
core.BlockChain.
* eth: rewrite sync controller
* eth: remove sync start debug message
* eth: notify chainSyncer about new peers after handshake
* eth: move downloader.Cancel call into chainSyncer
* eth: make post-sync block broadcast synchronous
* eth: add comments
* core: change blockchain stop message
* eth: change closeBloomHandler channel type
Prior to this change, eth_call changed the balance of the sender account in the
EVM environment to 2^256 wei to cover the gas cost of the call execution.
We've had this behavior for a long time even though it's super confusing.
This commit sets the default call gasprice to zero instead of updating the balance,
which is better because it makes eth_call semantics less surprising. Removing
the built-in balance assignment also makes balance overrides work as expected.
* node: expose config in service context
* eth: integrate p2p/dnsdisc
* cmd/geth: add some DNS flags
* eth: remove DNS URLs
* cmd/utils: configure DNS names for testnets
* params: update DNS URLs
* cmd/geth: configure mainnet DNS
* cmd/utils: rename DNS flag and fix flag processing
* cmd/utils: remove debug print
* node: fix test
* travis: Enable ARM support
* Include fixes from 20039
* Add a trace to debug the invalid lookup issue
* Try increasing the timeout to see if the arm test passes
* Investigate the resolver issue
* Increase arm64 timeout for clique test
* increase timeout in tests for arm64
* Only test the failing tests
* Review feedback: don't export epsilon
* Remove investigation tricks+include fjl's feeback
* Revert the retry ahead of using the mock resolver
* Fix rebase errors
* core/evm, contracts: avoid copying memory for input in calls + make ecrecover not modify input buffer
* core/vm: optimize mstore a bit
* core/vm: change Get -> GetCopy in vm memory access
When we flush a batch of trie nodes into the database during state
sync, we should guarantee that all children are flushed before their
parent.
The commit order of trie nodes is strictly children -> parent.
But when we flush all ready nodes into the db, we don't need the order
anymore, since
(1) they are all ready nodes (no more dependencies)
(2) the underlying database provides write atomicity
The precompile at 0x09 wraps the BLAKE2b F compression function:
https://tools.ietf.org/html/rfc7693#section-3.2
The precompile requires 6 inputs tightly encoded, taking exactly 213
bytes, as explained below.
- `rounds` - the number of rounds - 32-bit unsigned big-endian word
- `h` - the state vector - 8 unsigned 64-bit little-endian words
- `m` - the message block vector - 16 unsigned 64-bit little-endian words
- `t_0, t_1` - offset counters - 2 unsigned 64-bit little-endian words
- `f` - the final block indicator flag - 8-bit word
[4 bytes for rounds][64 bytes for h][128 bytes for m][8 bytes for t_0]
[8 bytes for t_1][1 byte for f]
The boolean `f` parameter is considered as `true` if set to `1`.
The boolean `f` parameter is considered as `false` if set to `0`.
All other values yield an invalid encoding of `f` error.
The precompile should compute the F function as specified in the RFC
(https://tools.ietf.org/html/rfc7693#section-3.2) and return the updated
state vector `h` with unchanged encoding (little-endian).
See EIP-152 for details.
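A hedged sketch of decoding that 213-byte layout (input validation and parsing only; the F compression itself is elided):

```go
package sketch

import (
	"encoding/binary"
	"errors"
)

const blake2FInputLength = 213

var errBlake2FInvalidInput = errors.New("invalid input length or final block indicator")

// parseBlake2FInput splits the tightly packed precompile input into its
// components following the layout described above.
func parseBlake2FInput(input []byte) (rounds uint32, h [8]uint64, m [16]uint64, t [2]uint64, final bool, err error) {
	if len(input) != blake2FInputLength || (input[212] != 0 && input[212] != 1) {
		err = errBlake2FInvalidInput
		return
	}
	rounds = binary.BigEndian.Uint32(input[0:4]) // rounds: 32-bit big-endian
	for i := 0; i < 8; i++ {                     // h: 8 little-endian 64-bit words
		h[i] = binary.LittleEndian.Uint64(input[4+i*8:])
	}
	for i := 0; i < 16; i++ { // m: 16 little-endian 64-bit words
		m[i] = binary.LittleEndian.Uint64(input[68+i*8:])
	}
	t[0] = binary.LittleEndian.Uint64(input[196:204]) // t_0
	t[1] = binary.LittleEndian.Uint64(input[204:212]) // t_1
	final = input[212] == 1                           // f: final block indicator
	return
}
```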
* eth: chain config (genesis + fork) ENR entry
* core/forkid, eth: protocol independent fork ID, update to CRC32 spec
* core/forkid, eth: make forkid a struct, next uint64, enr struct, RLP
* core/forkid: change forkhash rlp encoding from int to [4]byte
* eth: fixup eth entry a bit and update it every block
* eth: fix lint
* eth: fix crash in ethclient tests
This PR adds some hardening in the lower levels of the protocol stack, to bail early on invalid data. The attacks this PR primarily protects against are at the "annoyance" level: they would otherwise write a couple of megabytes of data into the log output, which is a bit resource-intensive.
* core/state, cmd/geth: streaming json output dump cmd + optional code+storage
* dump: add option to continue even if preimages are missing
* core, evm: lint nits
* cmd: use local flags for dump, omit empty code/storage
* core/state: fix state dump test
* core, eth: some fixes for freezer
* vendor, core/rawdb, cmd/geth: add db inspector
* core, cmd/utils: check ancient store path forcibly
* cmd/geth, common, core/rawdb: a few fixes
* cmd/geth: support windows file rename and fix rename error
* core: support ancient plugin
* core, cmd: streaming file copy
* cmd, consensus, core, tests: keep genesis in leveldb
* core: write txlookup during ancient init
* core: bump database version
* all: freezer style syncing
core, eth, les, light: clean up freezer relative APIs
core, eth, les, trie, ethdb, light: clean a bit
core, eth, les, light: add unit tests
core, light: rewrite setHead function
core, eth: fix downloader unit tests
core: add receipt chain insertion test
core: use constant instead of hardcoding table name
core: fix rollback
core: fix setHead
core/rawdb: remove canonical block first and then iterate side chain
core/rawdb, ethdb: add hasAncient interface
eth/downloader: calculate ancient limit via cht first
core, eth, ethdb: lots of fixes
* eth/downloader: print ancient disable log only for fast sync
* core, eth, trie: bloom filter for trie node dedup during fast sync
* eth/downloader, trie: address review comments
* core, ethdb, trie: restart fast-sync bloom construction now and again
* eth/downloader: initialize fast sync bloom on startup
* eth: reenable eth/62 until we properly remove it
This change makes getBalance, getCode, getStorageAt, getProof,
call, getTransactionCount return an error if the block number in
the request doesn't exist. getHeaderByNumber still returns null
for missing headers.
* cmd, eth, miner: disable advance sealing if user require
* cmd, console, miner, les, eth: wrap the miner config
* eth: remove todo
* cmd, miner: revert noadvance flag
The reason for this is that if a transaction's execution takes longer
than the block time, this kind of transaction amounts to a DoS attack.
* core/vm: remove function call for stack validation from evm runloop
* core/vm: separate gas calc into static + dynamic
* core/vm: optimize push1
* core/vm: reuse pooled bigints for ADDRESS, ORIGIN and CALLER
* core/vm: use generic error message for jump/jumpi, to avoid string interpolation
* testdata: fix tests for new error message
* core/vm: use 64-bit memory calculations (see the sketch after this list)
* core/vm: fix error in memory calculation
* core/vm: address review concerns
* core/vm: avoid unnecessary use of big.Int:BitLen()
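As referenced in the list above, a rough sketch of the 64-bit memory sizing idea (illustrative, not the exact gas-table code): the required memory size is computed with plain uint64 arithmetic plus an explicit overflow check, instead of big.Int math.

```go
package sketch

import "errors"

var errGasUintOverflow = errors.New("gas uint64 overflow")

// memorySize returns the memory size (in bytes) needed for an access of
// `length` bytes starting at `offset`, or an overflow error.
func memorySize(offset, length uint64) (uint64, error) {
	if length == 0 {
		return 0, nil // zero-length accesses never grow memory
	}
	size := offset + length
	if size < offset { // uint64 wrap-around
		return 0, errGasUintOverflow
	}
	return size, nil
}
```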
This change
- implements concurrent LES request serving even for a single peer.
- replaces the request cost estimation method with a cost table based on
benchmarks which gives much more consistent results. Until now the
allowed number of light peers was just a guess which probably contributed
a lot to the fluctuating quality of available service. Everything related
to request cost is implemented in a single object, the 'cost tracker'. It
uses a fixed cost table with a global 'correction factor'. Benchmark code
is included and can be run at any time to adapt costs to low-level
implementation changes.
- reimplements flowcontrol.ClientManager in a cleaner and more efficient
way, with added capabilities: There is now control over bandwidth, which
allows using the flow control parameters for client prioritization.
Target utilization over 100 percent is now supported to model concurrent
request processing. Total serving bandwidth is reduced during block
processing to prevent database contention.
- implements an RPC API for the LES servers allowing server operators to
assign priority bandwidth to certain clients and change prioritized
status even while the client is connected. The new API is meant for
cases where server operators charge for LES using an off-protocol mechanism.
- adds a unit test for the new client manager.
- adds an end-to-end test using the network simulator that tests bandwidth
control functions through the new API.
Simplifies the transaction presence check to use a function to
determine whether the transaction is present in the block provided
to trace; the original code had a redundant parenthesis and used
an `exist` flag to dictate control flow.
This changes the default location of the data directory to use the LOCALAPPDATA
environment variable, resolving issues with remote home directories and improving
compatibility with Cygwin.
Fixes #2239, fixes #2237, fixes #16437
* cmd, eth: Added in the flag to step geth once sync based on input
* cmd, eth: 16400 Add an option to stop geth once in sync.
* cmd: 16400 Add an option to stop geth once in sync. WIP
* cmd/geth/main, les/fetcher: added in light mode support
* cmd/geth/main, les/fetcher: cleaned comments and code for light mode
* cmd: 16400 Fixed formatting issue and cleaned code
* cmd, eth, les: 16400 Fixed formatting issues
* cmd, eth, les: Performed gofmt to update formatting
* cmd, eth, les: Fixed bugs resulting formatting
* cmd/geth, eth/, les: switched to downloader event
* eth: Fixed styling and gen_config
* eth/: Fix nil error in config file
* cmd/geth: Updated countdown log
* les/fetcher.go: Removed deprecated channel
* eth/downloader.go: Removed deprecated select
* cmd/geth, cmd/utils: Fixed minor issues
* eth: Reverted config files to proper format
* eth: Fixed typo in config file
* cmd/geth, eth/down: Updated code to use header time stamp
* eth/downloader: Changed the time threshold to 10 minutes
* cmd/geth, eth/downloader: Updated downloading event to pass latest header
* cmd/geth: Updated main to use right timer object
* cmd/geth: Removed unused failed event
* cmd/geth: added in correct time field with type assertion
* cmd/geth, cmd/utils: Updated flag to use boolean
* cmd/geth, cmd/utils, eth/downloader: Cleaned up code based on recommendations
* cmd/geth: Removed unneeded import
* cmd/geth, eth/downloader: fixed event field and suggested changes
* cmd/geth, cmd/utils: Updated flag and linting issue
* geth/core/eth: implement constantinople override flag
* les: implement constantinople override flag for les clients
* cmd/geth, eth, les: fix typo, move flag to experimentals
* Rejects peers that respond with a different hash for any of the passed in block numbers.
* Meant for emergency situations when the network forks unexpectedly.
Until this commit, when sending an RPC request that called `NewEVM`, a blank `vm.Config`
would be used to set some options, based on the default configuration. If some extra
configuration switches were passed to the blockchain, those would be ignored.
This PR adds a function to get the config from the blockchain, and this is what is now used
for RPC calls.
Some subsequent changes need to be made, see https://github.com/ethereum/go-ethereum/pull/17955#pullrequestreview-182237244
for the details of the discussion.
* core: speed up GenerateChain
Use a mock implementation of ChainReader instead of creating
and destroying a BlockChain object for each generated block.
* eth/downloader: speed up tests by generating chain only once
This change reworks the downloader tests so they share a common test
blockchain instead of generating a chain in every test. The tests are
roughly twice as fast now.
This adds the global accumulated refund counter to the standard
json output as a numeric json value. Previously this was not very
interesting since it was not used much, but with the new sstore
gas changes the value is a lot more interesting from a consensus
investigation perspective.
Package p2p/enode provides a generalized representation of p2p nodes
which can contain arbitrary information in key/value pairs. It is also
the new home for the node database. The "v4" identity scheme is also
moved here from p2p/enr to remove the dependency on Ethereum crypto from
that package.
Record signature handling is changed significantly. The identity scheme
registry is removed and acceptable schemes must be passed to any method
that needs identity. This means records must now be validated explicitly
after decoding.
The enode API is designed to make signature handling easy and safe: most
APIs around the codebase work with enode.Node, which is a wrapper around
a valid record. Going from enr.Record to enode.Node requires a valid
signature.
* p2p/discover: port to p2p/enode
This ports the discovery code to the new node representation in
p2p/enode. The wire protocol is unchanged, this can be considered a
refactoring change. The Kademlia table can now deal with nodes using an
arbitrary identity scheme. This requires a few incompatible API changes:
- Table.Lookup is not available anymore. It used to take a public key
as argument because v4 protocol requires one. Its replacement is
LookupRandom.
- Table.Resolve takes *enode.Node instead of NodeID. This is also for
v4 protocol compatibility because nodes cannot be looked up by ID
alone.
- Types Node and NodeID are gone. Further commits in the series will be
fixes all over the codebase to deal with those removals.
* p2p: port to p2p/enode and discovery changes
This adapts package p2p to the changes in p2p/discover. All uses of
discover.Node and discover.NodeID are replaced by their equivalents from
p2p/enode.
New API is added to retrieve the enode.Node instance of a peer. The
behavior of Server.Self with discovery disabled is improved. It now
tries much harder to report a working IP address, falling back to
127.0.0.1 if no suitable address can be determined through other means.
These changes were needed for tests of other packages later in the
series.
* p2p/simulations, p2p/testing: port to p2p/enode
No surprises here, mostly replacements of discover.Node, discover.NodeID
with their new equivalents. The 'interesting' API changes are:
- testing.ProtocolSession tracks complete nodes, not just their IDs.
- adapters.NodeConfig has a new method to create a complete node.
These changes were needed to make swarm tests work.
Note that the NodeID change makes the code incompatible with old
simulation snapshots.
* whisper/whisperv5, whisper/whisperv6: port to p2p/enode
This port was easy because whisper uses []byte for node IDs and
URL strings in the API.
* eth: port to p2p/enode
Again, easy to port because eth uses strings for node IDs and doesn't
care about node information in any way.
* les: port to p2p/enode
Apart from replacing discover.NodeID with enode.ID, most changes are in
the server pool code. It now deals with complete nodes instead
of (Pubkey, IP, Port) triples. The database format is unchanged for now,
but we should probably change it to use the node database later.
* node: port to p2p/enode
This change simply replaces discover.Node and discover.NodeID with their
new equivalents.
* swarm/network: port to p2p/enode
Swarm has its own node address representation, BzzAddr, containing both
an overlay address (the hash of a secp256k1 public key) and an underlay
address (enode:// URL).
There are no changes to the BzzAddr format in this commit, but certain
operations such as creating a BzzAddr from a node ID are now impossible
because node IDs aren't public keys anymore.
Most swarm-related changes in the series remove uses of
NewAddrFromNodeID, replacing it with NewAddr which takes a complete node
as argument. ToOverlayAddr is removed because we can just use the node
ID directly.
Interpreter initialization is left to the PRs implementing them.
Options for external interpreters are passed after a colon in the
`--vm.ewasm` and `--vm.evm` switches.
Makes Interface interface a bit more stateless and abstract.
Obviously this change is dictated by the EVMC design. EVMC tries to keep the responsibility for EVM features entirely inside the VMs, where feasible. This makes the VM "stateless", because the VM does not need to pass any information between executions; all information is included in the parameters of the execute function.
This PR enables the indexers to work in light client mode by
downloading a part of these tries (the Merkle proofs of the last
values of the last known section) in order to be able to add new
values and recalculate subsequent hashes. It also adds CHT data to
NodeInfo.
* consensus/ethash: start remote goroutine to handle remote mining
* consensus/ethash: expose remote miner api
* consensus/ethash: expose submitHashrate api
* miner, ethash: push empty block to sealer without waiting execution
* consensus, internal: add getHashrate API for ethash
* consensus: add three method for consensus interface
* miner: expose consensus engine running status to miner
* eth, miner: specify etherbase when miner created
* miner: commit new work when consensus engine is started
* consensus, miner: fix some logics
* all: delete useless interfaces
* consensus: polish a bit
The current trie memory database/cache that we do pruning on stores
trie nodes as binary rlp encoded blobs, and also stores the node
relationships/references for GC purposes. However, most of the trie
nodes (everything apart from a value node) are in essence just a
collection of references.
This PR switches out the RLP encoded trie blobs with the
collapsed-but-not-serialized trie nodes. This permits most of the
references to be recovered from within the node data structure,
avoiding the need to track them a second time (expensive memory wise).
This removes a golint warning: type name will be used as trie.TrieSync by
other packages, and that stutters; consider calling this Sync.
In hexToKeybytes, len(hex) is even, and for even values integer division gives (even+1)/2 == even/2 (e.g. (4+1)/2 == 4/2 == 2), so the +1 is removed.
* common: delete StringToAddress, StringToHash
These functions are confusing because they don't parse hex, but use the
bytes of the string. This change removes them, replacing all uses of
StringToAddress(s) by BytesToAddress([]byte(s)).
* eth/filters: remove incorrect use of common.BytesToAddress
The first should address a long term issue where we recommend a gas
price that is greater than that required for 50% of transactions in
recent blocks, which can lead to gas price inflation as people take
this figure and add a margin to it, resulting in a positive feedback
loop.
* core/types, core/vm, eth, tests: regenerate gencodec files
* Makefile: update devtools target
Install protoc-gen-go and print reminders about npm, solc and protoc.
Also switch to github.com/kevinburke/go-bindata because it's more
maintained.
* contracts/ens: update contracts and regenerate with solidity v0.4.19
The newer upstream version of the FIFSRegistrar contract doesn't set the
resolver anymore. The resolver is now deployed separately.
* contracts/release: regenerate with solidity v0.4.19
* contracts/chequebook: fix fallback and regenerate with solidity v0.4.19
The contract didn't have a fallback function, payments would be rejected
when compiled with newer solidity. References to 'mortal' and 'owned'
use the local file system so we can compile without network access.
* p2p/discv5: regenerate with recent stringer
* cmd/faucet: regenerate
* dashboard: regenerate
* eth/tracers: regenerate
* internal/jsre/deps: regenerate
* dashboard: avoid sed -i because it's not portable
* accounts/usbwallet/internal/trezor: fix go generate warnings
Updated use of Parallel and added some subtests to help isolate
them. Increased the timeout in RequestHeadersByNumber so it
doesn't time out and cause other tests to break.
* cmd, consensus, eth: split ethash related config to its own
* eth, consensus: minor polish
* eth, consenus, console: compress pow testing config field to single one
* consensus, eth: document pow mode
* eth, internal: Implement using trie diffs
* eth, internal: Changes in response to review
* eth: More fixes to getModifiedAccountsBy*
* eth: minor polishes on error capitalization
This PR implements the new LES protocol version extensions:
* new and more efficient Merkle proofs reply format (when replying to
a multiple Merkle proofs request, we just send a single set of trie
nodes containing all necessary nodes)
* BBT (BloomBitsTrie) works similarly to the existing CHT and contains
the bloombits search data to speed up log searches
* GetTxStatusMsg returns the inclusion position or the
pending/queued/unknown state of a transaction referenced by hash
* an optional signature of new block data (number/hash/td) can be
included in AnnounceMsg to provide an option for "very light
clients" (mobile/embedded devices) to skip expensive Ethash check
and accept multiple signatures of somewhat trusted servers (still a
lot better than trusting a single server completely and retrieving
everything through RPC). The new client mode is not implemented in
this PR, just the protocol extension.
When implementing the new bloombits-based filter, I accidentally broke null
topics by removing the special casing of common.Hash{} filter rules, which
acted as the wildcard topic until now.
This PR fixes the regression, but instead of using the magic hash
common.Hash{} as the null wildcard, the PR reworks the code to handle nil
topics during parsing, converting a JSON null into nil []common.Hash topic.
* ethdb: add Putter interface and Has method
* ethdb: improve docs and add IdealBatchSize
* ethdb: remove memory batch lock
Batches are not safe for concurrent use.
* core: use ethdb.Putter for Write* functions
This covers the easy cases.
* core/state: simplify StateSync
* trie: optimize local node check
* ethdb: add ValueSize to Batch
* core: optimize HasHeader check
This avoids one random database read to get the block number. For many uses
of HasHeader, the expectation is that it's actually there. Using Has
avoids a load + decode of the value.
* core: write fast sync block data in batches
Collect writes into batches up to the ideal size instead of issuing many
small, concurrent writes (a sketch of this pattern follows at the end of this list).
* eth/downloader: commit larger state batches
Collect nodes into a batch up to the ideal size instead of committing
whenever a node is received.
* core: optimize HasBlock check
This avoids a random database read to get the number.
* core: use numberCache in HasHeader
numberCache has higher capacity, increasing the odds of finding the
header without a database lookup.
* core: write imported block data using a batch
Restore batch writes of state and add blocks, tx entries, receipts to
the same batch. The change also simplifies the miner.
This commit also removes posting of logs when a forked block is imported.
* core: fix DB write error handling
* ethdb: use RLock for Has
* core: fix HasBlock comment
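A minimal sketch of the batched-write pattern referenced above (keys and values are placeholders): accumulate writes and flush whenever the batch reaches the ideal size.

```go
package sketch

import "github.com/ethereum/go-ethereum/ethdb"

func writeEntries(db ethdb.Database, keys, values [][]byte) error {
	batch := db.NewBatch()
	for i := range keys {
		if err := batch.Put(keys[i], values[i]); err != nil {
			return err
		}
		// Flush once the accumulated value data reaches the ideal batch size.
		if batch.ValueSize() >= ethdb.IdealBatchSize {
			if err := batch.Write(); err != nil {
				return err
			}
			batch.Reset()
		}
	}
	return batch.Write() // flush the remainder
}
```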
* core: reduce txpool event loop goroutines and sync structs
* cmd, core, eth: journal local transactions to disk
* core: journal replacement pending transactions too
* core: separate transaction journal from pool
* core: remove redundant storage of transactions and receipts
* core, eth, internal: new transaction schema usage polishes
* eth: implement upgrade mechanism for db deduplication
* core, eth: drop old sequential key db upgrader
* eth: close last iterator on successful db upgrade
* core: prefix the lookup entries to make their purpose clearer