Commit Graph

10 Commits

Author SHA1 Message Date
Guillaume Ballet 7f6c045e0d
core: remove unused ContractCode method from BlockChain (#27186) 2023-05-02 04:56:08 -04:00
Péter Szilágyi bbc565ab05
core/types, params: add blob transaction type, RLP encoded for now (#27049)
* core/types, params: add blob transaction type, RLP encoded for now

* all: integrate Cancun (and timestamp based forks) into MakeSigner

* core/types: fix 2 back-and-forth type refactors

* core: fix review comment

* core/types: swap blob tx type id to 0x03
2023-04-21 12:52:02 +03:00
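A minimal sketch of the envelope framing implied by the last bullet, assuming only the EIP-2718 rule that a typed transaction is a single type byte followed by an opaque payload; `BlobTxType` and the payload handling here are illustrative stand-ins, not the actual `core/types` definitions:

```go
package main

import (
	"errors"
	"fmt"
)

// BlobTxType mirrors the type id chosen in this commit (0x03); illustrative constant.
const BlobTxType = 0x03

// detectTxType inspects an EIP-2718 envelope: typed transactions start with a
// type byte in [0x00, 0x7f] followed by an opaque payload (RLP-encoded for the
// blob type, per this commit), while legacy transactions are plain RLP lists
// whose first byte is >= 0xc0.
func detectTxType(envelope []byte) (byte, []byte, error) {
	if len(envelope) == 0 {
		return 0, nil, errors.New("empty transaction")
	}
	if envelope[0] >= 0xc0 {
		return 0, envelope, nil // legacy RLP-list transaction, no type byte
	}
	return envelope[0], envelope[1:], nil // typed envelope: first byte is the type id
}

func main() {
	// 0xc0 stands in for an (empty) RLP list payload.
	typ, payload, _ := detectTxType([]byte{BlobTxType, 0xc0})
	fmt.Printf("type=%#x payloadLen=%d blob=%v\n", typ, len(payload), typ == BlobTxType)
}
```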
Péter Szilágyi cd31f2dee2
all: change chain head markers from block to header (#26777) 2023-03-02 08:29:15 +02:00
rjl493456442 743e404906
core, eth, les, tests, trie: abstract node scheme (#25532)
This PR introduces a node scheme abstraction. The interface is only implemented by `hashScheme` at the moment, but will be extended by `pathScheme` very soon.

Apart from that, a few changes are also included which are worth mentioning:

-  port the changes in the stacktrie, tracking the path prefix of nodes during commit
-  use ethdb.Database for constructing trie.Database. This is not necessary right now, but it is required for the path-based scheme to open the reverse diff freezer
2022-11-28 14:31:28 +01:00
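A rough sketch of the abstraction described above; the interface name, method set, and the two scheme shapes are assumptions for illustration, not the actual `trie` package API:

```go
package schemesketch

import "github.com/ethereum/go-ethereum/common"

// NodeScheme abstracts how trie nodes are keyed in the backing database:
// hashScheme keys nodes by their hash, while the upcoming pathScheme keys
// them by their path from the root. (Illustrative only.)
type NodeScheme interface {
	// Name reports the scheme identifier, e.g. "hash" or "path".
	Name() string
	// Key derives the database key for a node from the owning trie,
	// the node's path from the root, and the node's hash.
	Key(owner common.Hash, path []byte, hash common.Hash) []byte
}

// hashScheme ignores the path entirely: nodes are stored under their hash.
type hashScheme struct{}

func (hashScheme) Name() string { return "hash" }
func (hashScheme) Key(_ common.Hash, _ []byte, hash common.Hash) []byte {
	return hash.Bytes()
}

// pathScheme would build the key from owner and path instead, so a node can
// be looked up without knowing its hash up front.
type pathScheme struct{}

func (pathScheme) Name() string { return "path" }
func (pathScheme) Key(owner common.Hash, path []byte, _ common.Hash) []byte {
	return append(owner.Bytes(), path...)
}
```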
Felix Lange 9afc6816d2
common/lru: add generic LRU implementation (#26162)
It seems there is no fully typed library implementation of an LRU cache.
So I wrote one. Method names are the same as github.com/hashicorp/golang-lru,
and the new type can be used as a drop-in replacement.

Two reasons to do this:

- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory
   allocations in Add when the cache is at capacity. Overall, memory usage of the cache
   is much reduced because keys and values are no longer wrapped in interfaces.
2022-11-14 15:41:56 +01:00
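A small usage sketch of the new cache, assuming it keeps the hashicorp-style `NewCache`/`Add`/`Get` names the message describes:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/lru"
)

func main() {
	// Fully typed cache: keys and values are concrete types, no interface{} boxing.
	cache := lru.NewCache[string, []byte](2)

	cache.Add("a", []byte{1})
	cache.Add("b", []byte{2})
	cache.Add("c", []byte{3}) // at capacity 2, this evicts the least recently used entry ("a")

	if _, ok := cache.Get("a"); !ok {
		fmt.Println("a was evicted")
	}
	if v, ok := cache.Get("c"); ok {
		fmt.Println("c =", v)
	}
}
```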
Péter Szilágyi 81bd998353
core, eth/downloader: handle spurious junk bodies from racey rollbacks (#25578)
* eth/downloader: handle junkbodies/receipts in the beacon sync

* core: check for header presence when checking for blocks
2022-08-23 14:02:51 +03:00
Marius van der Wijden c6dcd018d2
core: eth: rpc: implement safe rpc block (#25165)
* core: eth: rpc: implement safe rpc block

* core: fix setHead, panics
2022-07-25 18:42:05 +03:00
Marius van der Wijden e6fa102eb0
core, eth, internal, rpc: implement final block (#24282)
* eth: core: implement finalized block

* eth/catalyst: fix final block

* eth/catalyst: update finalized head gauge

* internal/jsre/deps: updated web3.js to allow for finalized block

* eth/catalyst: make sure only one thread can call fcu

* eth/catalyst: nitpicks

* eth/catalyst: use plain mutex

* eth: nitpicks
2022-05-18 17:30:42 +03:00
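Together with the safe-block commit above, this surfaces two new named block tags over JSON-RPC. A minimal sketch of querying them with the generic `rpc` client; only `eth_getBlockByNumber` with the string tags is assumed here:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// "safe" and "finalized" behave like the existing "latest"/"pending" tags.
	for _, tag := range []string{"safe", "finalized"} {
		var head map[string]interface{}
		if err := client.CallContext(context.Background(), &head, "eth_getBlockByNumber", tag, false); err != nil {
			log.Fatalf("%s: %v", tag, err)
		}
		fmt.Printf("%s block: number=%v hash=%v\n", tag, head["number"], head["hash"])
	}
}
```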
Martin Holst Swende db03faa10d
core, eth: improve delivery speed on header requests (#23105)
This PR reduces the amount of work we do when answering header queries, e.g. when a peer
is syncing from us.

For some items, e.g. block bodies, when we read the RLP data from the database, we plug it
directly into the response package. We didn't do that for headers: instead we read the
header RLP, decoded it into a types.Header, and re-encoded it to RLP. This PR changes that
to keep the data in RLP form as much as possible. When a node is syncing from us, it
typically requests 192 contiguous headers. On master this has the following effect:

- For headers not in ancient: 2 db lookups, one for translating hash->number (even though
  the request is by number), and another for reading by hash (this latter one is sometimes
  cached).

- For headers in ancient: 1 file lookup/syscall for translating hash->number (even though
  the request is by number), and another for reading the header itself. After this, it
  also performs a hashing of the header, to ensure that the hash matches what was expected.

In this PR, I instead move the logic for "give me a sequence of blocks" into the lower
layers, where the database can determine how and what to read from leveldb and/or
ancients.

There are basically four types of requests; three of them are improved this way. The
fourth, by hash going backwards, is trickier to optimize. However, since we know that
the gap is 0, we can look up by the parentHash, and still shave off all the number->hash
lookups.

The gapped collection can be optimized similarly, as a follow-up, at least in three out of
four cases.

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-12-07 17:50:58 +01:00
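A sketch of the core idea: hand back the stored header RLP untouched for a contiguous range instead of decoding to `types.Header` and re-encoding. The `readCanonicalHeaderRLP` helper is a hypothetical stand-in for the rawdb accessor, which also knows how to read from the ancient store:

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/rlp"
)

// readCanonicalHeaderRLP is a hypothetical stand-in for a rawdb accessor that
// returns the canonical header at `number` exactly as stored, whether it lives
// in leveldb or in the ancient store.
func readCanonicalHeaderRLP(db ethdb.Reader, number uint64) rlp.RawValue {
	// ... number->hash translation and leveldb/ancient reads elided ...
	return nil
}

// headerRangeRLP collects up to `count` contiguous headers starting at `number`,
// keeping them as raw RLP so they can be plugged straight into the response
// packet without a decode/re-encode round trip.
func headerRangeRLP(db ethdb.Reader, number, count uint64) []rlp.RawValue {
	headers := make([]rlp.RawValue, 0, count)
	for i := uint64(0); i < count; i++ {
		data := readCanonicalHeaderRLP(db, number+i)
		if len(data) == 0 {
			break // past the chain head
		}
		headers = append(headers, data)
	}
	return headers
}
```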
Marius van der Wijden c641cff51a
core: refactored blockchain.go (#23735) 2021-10-18 10:45:59 +03:00