* core/beacon, eth/catalyst: update engine API to new version
* core: implement exchangeTransitionConfig
* core/beacon: prevRandao instead of Random
* eth/catalyst: Fix ExchangeTransitionConfig, add test
* eth/catalyst: stop external miners on TTD reached
* node: implement --authrpc.vhosts flag
* core: allow for config override on non-mainnet networks
* eth/catalyst: address Peter's comments
* eth/catalyst: make stop remote sealer more explicit
* eth/catalyst: add log output
* cmd/utils: rename authrpc.host to authrpc.addr
* eth/catalyst: disable the disabling of the miner
* eth, core: remove notion of terminal PoW block
* eth, les: address more of Peter's nitpicks
* eth/downloader: implement beacon sync
* eth/downloader: fix a crash if the beacon chain is reduced in length
* eth/downloader: fix beacon sync start/stop thrashing data race
* eth/downloader: use a non-nil pivot even in degenerate sync requests
* eth/downloader: don't touch internal state on beacon Head retrieval
* eth/downloader: fix spelling mistakes
* eth/downloader: fix some typos
* eth: integrate legacy/beacon sync switchover and UX
* eth: handle being stuck on post-merge TTD (UX-wise)
* core, eth: integrate the beacon client with the beacon sync
* eth/catalyst: make some warning messages nicer
* eth/downloader: remove Ethereum 1&2 notions in favor of merge
* core/beacon, eth: clean up engine API returns a bit
* eth/downloader: add skeleton extension tests
* eth/catalyst: keep non-kiln spec, handle mining on ttd
* eth/downloader: add beacon header retrieval tests
* eth: fix spelling, comment out failing tests
* eth/downloader: review fixes
* eth/downloader: drop peers failing to deliver beacon headers
* core/rawdb: track beacon sync data in db inspect
* eth: fix review concerns
* internal/web3ext: nit
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
* core/rawdb, cmd, ethdb, eth: implement freezer tail deletion
* core/rawdb: address comments from martin and sina
* core/rawdb: fixes cornercase in tail deletion
* core/rawdb: separate metadata into a standalone file
* core/rawdb: remove unused code
* core/rawdb: add random test
* core/rawdb: polish code
* core/rawdb: fsync meta file before manipulating the index
* core/rawdb: fix typo
* core/rawdb: address comments
This change makes use of the new code generator rlp/rlpgen to improve the
performance of RLP encoding for Header and StateAccount. It also speeds up
encoding of ReceiptForStorage using the new rlp.EncoderBuffer API.
The change is much less transparent than I wanted it to be, because Header and
StateAccount now have an EncodeRLP method defined with pointer receiver. It
used to be possible to encode non-pointer values of these types, but the new
method prevents that: attempting to encode unaddressable values (even when they are
part of another value) now returns an error. The error can be surprising and may
pop up in places that previously didn't expect any errors.
To make things work, I also needed to update all code paths (mostly in unit tests)
that lead to encoding of non-pointer values, and pass a pointer instead.
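For illustration, a minimal sketch of the pattern that had to change (the field values below are made up; the point is that the pointer goes through the generated EncodeRLP method on *types.Header, while the plain value may now be rejected as unaddressable):

```go
package main

import (
	"bytes"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	h := types.Header{Number: big.NewInt(1), Difficulty: big.NewInt(2)} // demo values only

	// Encoding a pointer uses the generated EncodeRLP method on *types.Header.
	enc, err := rlp.EncodeToBytes(&h)
	fmt.Println(len(enc), err)

	// Encoding the non-pointer value is the pattern the unit tests had to drop:
	// the value is unaddressable inside the encoder, so this may now error out.
	var buf bytes.Buffer
	fmt.Println(rlp.Encode(&buf, h))
}
```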
Benchmark results:
name old time/op new time/op delta
EncodeRLP/legacy-header-8 328ns ± 0% 237ns ± 1% -27.63% (p=0.000 n=8+8)
EncodeRLP/london-header-8 353ns ± 0% 247ns ± 1% -30.06% (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8 237ns ± 0% 123ns ± 0% -47.86% (p=0.000 n=8+7)
EncodeRLP/receipt-full-8 297ns ± 0% 301ns ± 1% +1.39% (p=0.000 n=8+8)
name old speed new speed delta
EncodeRLP/legacy-header-8 1.66GB/s ± 0% 2.29GB/s ± 1% +38.19% (p=0.000 n=8+8)
EncodeRLP/london-header-8 1.55GB/s ± 0% 2.22GB/s ± 1% +42.99% (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8 38.0MB/s ± 0% 64.8MB/s ± 0% +70.48% (p=0.000 n=8+7)
EncodeRLP/receipt-full-8 910MB/s ± 0% 897MB/s ± 1% -1.37% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
EncodeRLP/legacy-header-8 0.00B 0.00B ~ (all equal)
EncodeRLP/london-header-8 0.00B 0.00B ~ (all equal)
EncodeRLP/receipt-for-storage-8 64.0B ± 0% 0.0B -100.00% (p=0.000 n=8+8)
EncodeRLP/receipt-full-8 320B ± 0% 320B ± 0% ~ (all equal)
This PR adds an additional API called `NewBatchWithSize` to the db
batcher. It turns out that leveldb batch memory allocation is
super inefficient: the allocation step of a leveldb Batch is too
small when the batch is large, so building a leveldb batch of
100MB can take a few seconds.
Luckily, leveldb also offers another API called MakeBatch which can
pre-allocate the memory area, so if the approximate size of the batch
is known in advance, this API can be used instead.
This is needed by the new state scheme PR, which commits a batch of
trie nodes in a single write; the feature is implemented here in a separate PR.
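A minimal usage sketch, assuming the new method is available on the database handle; the in-memory database stands in for brevity, while the actual win is in the leveldb-backed batch:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/rawdb"
)

func main() {
	db := rawdb.NewMemoryDatabase() // stand-in; the pre-sizing matters for leveldb
	defer db.Close()

	// Pre-size the batch when the approximate payload size is known up front,
	// instead of letting the underlying batch grow in many small steps.
	batch := db.NewBatchWithSize(4 * 1024 * 1024) // 4MB, illustrative
	for i := byte(0); i < 16; i++ {
		if err := batch.Put([]byte{'k', i}, []byte{'v', i}); err != nil {
			panic(err)
		}
	}
	if err := batch.Write(); err != nil {
		panic(err)
	}
	fmt.Println("accumulated value size:", batch.ValueSize())
}
```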
* cmd/geth: add db cmd to show metadata
* cmd/geth: improve generator status output
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
* cmd: minor
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
* freezer: add readonly flag to table
* freezer: enforce readonly in table repair
* freezer: enforce readonly in newFreezer
* minor fix
* minor
* core/rawdb: test that writing during readonly fails
* rm unused log
* check readonly on batch append
* minor
* Revert "check readonly on batch append"
This reverts commit 2ddb5ec4ba.
* review fixes
* minor test refactor
* attempt at fixing windows issue
* add comment re windows sync issue
* k->kind
* open readonly db for genesis check
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core: implement eip-4399 random opcode
* core: make vmconfig threadsafe
* core, miner: pass vmConfig by value, not by reference
* all: enable 4399 by Rules
* core: remove diff (f)
* tests: set proper difficulty (f)
* smaller diff (f)
* eth/catalyst: nit
* core: make RANDOM a pointer which is only set post-merge
* cmd/evm/internal/t8ntool: fix t8n tracing of 4399
* tests: set difficulty
* cmd/evm/internal/t8ntool: check that baserules are london before applying the merge chainrules
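For context, a simplified sketch of the EIP-4399 behaviour the commits above refer to, with the post-merge prevRandao value modelled as a nullable pointer (an illustration, not the geth interpreter code):

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/holiman/uint256"
)

// blockValueFor0x44 sketches the EIP-4399 rule for opcode 0x44: pre-merge it
// yields the block difficulty, post-merge (when the consensus layer supplies
// prevRandao, modelled here as a non-nil pointer) it yields that value instead.
// Illustration only; not the geth opcode implementation.
func blockValueFor0x44(difficulty *big.Int, random *common.Hash) *uint256.Int {
	if random != nil {
		return new(uint256.Int).SetBytes(random.Bytes())
	}
	v, _ := uint256.FromBig(difficulty)
	return v
}

func main() {
	randao := common.HexToHash("0xabc0")
	fmt.Println(blockValueFor0x44(big.NewInt(1234), nil).Uint64())  // 1234 (pre-merge)
	fmt.Println(blockValueFor0x44(big.NewInt(0), &randao).Uint64()) // 43968 (post-merge)
}
```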
* core/vm: reverse bit order in bytes of code bitmap
This bit order is more natural for bit-manipulation operations and lets us
eliminate a small number of CPU instructions; a sketch of the two orderings follows below.
* core/vm: drop lookup table
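A hedged illustration of the two bit orders (not the actual geth bitvec code): mapping a code position to the most-significant bit of its byte versus the least-significant bit, where the latter needs only a plain shift by `pos % 8`:

```go
package main

import "fmt"

// bitvec marks positions in contract code, e.g. the data bytes of PUSH ops.
type bitvec []byte

// setMSBFirst is the old ordering: position 0 maps to the highest bit of a byte.
func (b bitvec) setMSBFirst(pos uint64) { b[pos/8] |= 0x80 >> (pos % 8) }

// setLSBFirst is the reversed ordering: position 0 maps to the lowest bit, the
// natural order for shift/mask arithmetic and for setting several bits at once.
func (b bitvec) setLSBFirst(pos uint64) { b[pos/8] |= 1 << (pos % 8) }

func (b bitvec) isSetLSBFirst(pos uint64) bool { return b[pos/8]&(1<<(pos%8)) != 0 }

func main() {
	v := make(bitvec, 4)
	v.setLSBFirst(10)
	fmt.Println(v.isSetLSBFirst(10), v.isSetLSBFirst(11)) // true false
}
```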
This PR reduces the amount of work we do when answering header queries, e.g. when a peer
is syncing from us.
For some items, e.g. block bodies, when we read the RLP data from the database, we plug it
directly into the response package. We didn't do that for headers; instead we read the
header RLP, decoded it into types.Header, and re-encoded it to RLP. This PR changes that to keep
the data in RLP form as much as possible. When a node is syncing from us, it typically requests 192
contiguous headers. On master, serving such a request has the following effect:
- For headers not in ancients: 2 db lookups, one for translating hash->number (even though
the request is by number), and another for reading by hash (the latter is sometimes
cached).
- For headers in ancients: 1 file lookup/syscall for translating hash->number (even though
the request is by number), and another for reading the header itself. After this, it
also performs a hashing of the header, to ensure that the hash is what was expected.
In this PR, I instead move the logic for "give me a sequence of blocks" into the lower
layers, where the database can determine how and what to read from leveldb and/or
ancients.
There are basically four types of requests; three of them are improved this way. The
fourth, by hash going backwards, is trickier to optimize. However, since we know that
the gap is 0, we can look up by the parentHash and still shave off all the number->hash
lookups.
The gapped collection can be optimized similarly, as a follow-up, at least in three out of
four cases.
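A rough sketch of the raw-RLP pass-through for the simple case (contiguous, ascending, by number). It keeps a per-number canonical-hash lookup for brevity, whereas the actual change pushes the whole range read down into the database layer:

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/rlp"
)

// headerRangeRLP serves a contiguous header request as stored RLP, without
// decoding into types.Header and re-encoding. Sketch only, not the geth code.
func headerRangeRLP(db ethdb.Reader, from, count uint64) []rlp.RawValue {
	var headers []rlp.RawValue
	for n := from; n < from+count; n++ {
		hash := rawdb.ReadCanonicalHash(db, n)
		if hash == (common.Hash{}) {
			break // past the head of the chain
		}
		// The stored bytes are plugged straight into the response package.
		headers = append(headers, rawdb.ReadHeaderRLP(db, hash, n))
	}
	return headers
}
```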
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR fixes a special corner case in transaction indexing.
When the chain is rewound by SetHead to a historical point that is even lower than the transaction index tail, the system will keep reporting a "Failed to decode block body" error, because the relevant blocks have already been deleted.
To avoid this "non-critical-but-annoying" issue, we can cap the indexing target at head+1 (`to` is exclusive, so this means indexing transactions from 0 to head).
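A minimal sketch of that cap, assuming an indexer whose upper bound `to` is exclusive (the helper is hypothetical, not the actual txindexer code):

```go
// capIndexTarget clamps the exclusive indexing bound so that a SetHead rewind
// below the old tail can never make the indexer touch already-deleted bodies:
// with to = head+1, transactions from block 0 through head get (re)indexed.
func capIndexTarget(head, requested uint64) uint64 {
	if requested > head+1 {
		return head + 1
	}
	return requested
}
```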
* core/vm: Remove interpreter loop interruption check
* core/vm: Unit test for interpreter loop interruption
* core/vm: Check for interpreter loop abort on every jump
* core/vm: Move interpreter.ReadOnly check into the opcode implementations
Also remove the same check from the interpreter inner loop.
* core/vm: Remove obsolete operation.writes flag
* core/vm: Capture fault states in logger
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/vm: Remove panic added for testing
Co-authored-by: Martin Holst Swende <martin@swende.se>
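A hedged sketch of the approach (simplified, not the geth implementation): instead of polling an atomic abort flag on every loop iteration, the interpreter only checks it in the jump opcodes, which is sufficient because any long-running loop in EVM code must pass through a JUMP or JUMPI:

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

var errAborted = errors.New("execution aborted")

type interpreter struct {
	abort atomic.Bool // set asynchronously, e.g. by a timeout or Cancel call
}

// opJump sketches the idea: the abort flag is polled only on jumps, which is
// enough because any unbounded loop in EVM code has to execute a jump.
func (in *interpreter) opJump(pc *uint64, dest uint64) error {
	if in.abort.Load() {
		return errAborted
	}
	*pc = dest
	return nil
}

func main() {
	in := new(interpreter)
	in.abort.Store(true)
	var pc uint64
	fmt.Println(in.opJump(&pc, 42)) // execution aborted
}
```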