Commit Graph

2673 Commits

Author SHA1 Message Date
Felix Lange c7a8bcecbe
core/types: add length check in CalcRequestsHash (#30829)
The existing implementation is correct when building and verifying
blocks, since we will only collect non-empty requests into the block
requests list.

But it isn't correct for cases where a requests list containing empty
items is sent by the consensus layer on the engine API. We want to
ensure that empty requests do not cause a difference in validation
there, so the commitment computation should explicitly skip them.
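
A minimal sketch of the skip rule (not the exact core/types helper; names simplified): the outer commitment covers only the inner hashes of non-empty requests, where an item carrying nothing but its type byte counts as empty.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// calcRequestsHash sketches the commitment: hash each non-empty request and
// feed those hashes into an outer hash. An item with only the type byte
// (len <= 1) is skipped, so empty requests sent over the engine API cannot
// change the result.
func calcRequestsHash(requests [][]byte) [32]byte {
	outer := sha256.New()
	for _, req := range requests {
		if len(req) <= 1 { // type byte only: treat as empty
			continue
		}
		inner := sha256.Sum256(req)
		outer.Write(inner[:])
	}
	var out [32]byte
	copy(out[:], outer.Sum(nil))
	return out
}

func main() {
	withEmpty := [][]byte{{0x00, 0x0a, 0x0b}, {0x01}} // second item is empty
	withoutEmpty := [][]byte{{0x00, 0x0a, 0x0b}}
	fmt.Println(calcRequestsHash(withEmpty) == calcRequestsHash(withoutEmpty)) // true
}
```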
2024-11-28 18:43:39 +01:00
Felix Lange db8eed860d
all: exclude empty outputs in requests commitment (#30670)
Implements changes from these spec PRs:

- https://github.com/ethereum/EIPs/pull/8989
- https://github.com/ethereum/execution-apis/pull/599
2024-11-28 11:48:50 +01:00
rjl493456442 8c1a36dad3
core/state/snapshot: handle legacy journal (#30802)
This workaround is meant to minimize the chance of a full snapshot regeneration
once the geth node upgrades to the new version (specifically #30752).

In #30752, the journal format of the state snapshot was modified by removing
the destruct set. Therefore, the existing old format (version = 0) will be
discarded and all in-memory layers will be lost. Unfortunately, the lost
in-memory layers can't be recovered by any other means, and the
entire state snapshot will be regenerated (which takes about 2.5 hours).

This pull request introduces a workaround to adopt the legacy journal if
the destruct set it contains is empty. Since self-destruction has been
deprecated following the Cancun fork, the destruct set is expected to be nil for
layers above the fork block. However, an exception occurs during contract
deployment: pre-funded accounts may self-destruct, causing accounts with
non-zero balances to be removed from the state. See for example
https://etherscan.io/tx/0xa087333d83f0cd63b96bdafb686462e1622ce25f40bd499e03efb1051f31fe49.

For nodes with a fully synced state, the legacy journal is likely compatible with
the updated definition, eliminating the need for regeneration. Unfortunately,
nodes performing a full sync of historical chain segments or encountering 
pre-funded account deletions may face incompatibilities, leading to automatic 
snapshot regeneration.
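
A rough sketch of the adoption rule described above, using simplified stand-in types rather than the real journal structures in core/state/snapshot:

```go
package main

import (
	"errors"
	"fmt"
)

// legacyLayer is a simplified stand-in for an old-format (version 0) journal
// layer; the real structures live in core/state/snapshot.
type legacyLayer struct {
	destructs map[string]struct{} // destruct set carried by the old format
	accounts  map[string][]byte
}

var errIncompatible = errors.New("legacy journal has destruct entries; snapshot must be regenerated")

// adoptLegacyLayer applies the workaround: an old-format layer is accepted
// only when its destruct set is empty, which is the expected state for layers
// above the Cancun fork where self-destruct is deprecated.
func adoptLegacyLayer(l legacyLayer) error {
	if len(l.destructs) != 0 {
		return errIncompatible // fall back to full snapshot regeneration
	}
	return nil // layer carries the same information as the new format
}

func main() {
	clean := legacyLayer{accounts: map[string][]byte{"acct": {0x01}}}
	dirty := legacyLayer{destructs: map[string]struct{}{"acct": {}}}
	fmt.Println(adoptLegacyLayer(clean)) // <nil>
	fmt.Println(adoptLegacyLayer(dirty)) // regeneration error
}
```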
2024-11-28 11:21:31 +08:00
wangjingcun e0deac7f6f
core: better document reason for dropping error on return (#30811)
Adds a comment explaining why a nil error is returned.

Signed-off-by: wangjingcun <wangjingcun@aliyun.com>
2024-11-27 07:17:03 +01:00
rjl493456442 a11b4bebcb
Revert "core/state/snapshot: simplify snapshot rebuild (#30772)" (#30810)
This reverts commit 23800122b3.

The original pull request introduced a bug, and some flaky tests were
detected because of this flaw.

```
--- FAIL: TestRecoverSnapshotFromWipingCrash (0.27s)
    blockchain_snapshot_test.go:158: The disk layer is not integrated snapshot is not constructed
{"pc":0,"op":88,"gas":"0x7148","gasCost":"0x2","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PC"}
{"pc":1,"op":255,"gas":"0x7146","gasCost":"0x1db0","memSize":0,"stack":["0x0"],"depth":1,"refund":0,"opName":"SELFDESTRUCT"}
{"output":"","gasUsed":"0x0"}
{"output":"","gasUsed":"0x1db2"}
{"pc":0,"op":116,"gas":"0x13498","gasCost":"0x3","memSize":0,"stack":[],"depth":1,"refund":0,"opName":"PUSH21"}
```

Before the original PR, the snapshot would block the function until the
disk layer was fully generated under the following conditions:

(a) explicitly required by users with `AsyncBuild = false`;
(b) the snapshot was being fully rebuilt or *the disk layer generation
had resumed*.

Unfortunately, with the changes introduced in that PR, the snapshot no
longer waits for disk layer generation to complete if the generation is
resumed. This introduces a lot of uncertainty and breaks this tiny debug
feature.
2024-11-26 11:33:59 +01:00
Nebojsa Urosevic d7e7b54190
core/tracing: add GetCodeHash to StateDB (#30784)
This PR extends the tracing.StateDB interface by adding a GetCodeHash function.
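
Roughly what the extension looks like, shown with placeholder types instead of the real common.Address/common.Hash and with only a couple of methods spelled out:

```go
package tracingsketch

// Placeholder types standing in for common.Address and common.Hash.
type Address [20]byte
type Hash [32]byte

// StateDB sketches the read-only view offered to tracers; the real interface
// in core/tracing has many more methods. The addition here is GetCodeHash,
// letting a tracer resolve an account's code hash without fetching the code.
type StateDB interface {
	GetCode(Address) []byte
	GetCodeHash(Address) Hash // newly added accessor
}
```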
2024-11-26 08:16:00 +01:00
Arran Schlosberg 23800122b3
core/state/snapshot: simplify snapshot rebuild (#30772)
This PR is purely for improved readability; I was doing work involving
the file and think this may help others who are trying to understand
what's going on.

1. `snapshot.Tree.Rebuild()` now returns a function that blocks until
regeneration is complete, allowing `Tree.waitBuild()` to be removed
entirely as all it did was search for the `done` channel behind this new
function.
2. Its usage inside `New()` is also simplified by (a) only waiting if
`!AsyncBuild`; and (b) avoiding the double negative of `if !NoBuild`.
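
A minimal sketch of the first point: Rebuild kicks off regeneration and hands back a function that blocks until it finishes, so `New()` can simply call it when `!AsyncBuild`. The goroutine and channel stand in for the real generator.

```go
package main

// rebuild starts regeneration in the background and returns a function that
// blocks until it is complete, replacing the old done-channel search.
func rebuild() (wait func()) {
	done := make(chan struct{})
	go func() {
		// ... regenerate the disk layer ...
		close(done)
	}()
	return func() { <-done }
}

func main() {
	wait := rebuild()
	asyncBuild := false // config flag, as in the description above
	if !asyncBuild {
		wait() // block until the disk layer is fully generated
	}
}
```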

---------

Co-authored-by: Martin HS <martin@swende.se>
2024-11-25 13:43:23 +01:00
rjl493456442 6485d5e3ff
core, triedb: remove destruct flag in state snapshot (#30752)
This pull request removes the destruct flag from the state snapshot to
simplify the code.

Previously, this flag indicated that an account was removed during a
state transition, making all associated storage slots inaccessible.
Because storage deletion can involve a large number of slots, the actual
deletion is deferred until the end of the process, where it is handled
in batches.

With the deprecation of self-destruct in the Cancun fork, storage
deletions are no longer expected. Historically, the largest storage
deletion event in Ethereum was around 15 megabytes—manageable in memory.

In this pull request, the single destruct flag is replaced by a set of
deletion markers for individual storage slots. Each deleted storage slot
will now appear in the Storage set with a nil value.

This change simplifies a lot of logic, such as storage access,
storage flushing, storage iteration and so on.
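
A simplified sketch of the new representation (string keys here instead of hashes): a deleted slot shows up in the Storage set with a nil value, replacing the per-account destruct flag.

```go
package main

import "fmt"

// accountDiff is a stand-in for a diff-layer entry: a present key with a nil
// value marks an explicit deletion, an absent key means "look in older layers".
type accountDiff struct {
	Storage map[string][]byte
}

// slotValue resolves a slot against one diff layer.
func slotValue(d accountDiff, key string) (val []byte, deleted, found bool) {
	v, ok := d.Storage[key]
	if !ok {
		return nil, false, false // not touched in this layer
	}
	if v == nil {
		return nil, true, true // deletion marker
	}
	return v, false, true
}

func main() {
	d := accountDiff{Storage: map[string][]byte{"slotA": {0x01}, "slotB": nil}}
	_, deleted, _ := slotValue(d, "slotB")
	fmt.Println(deleted) // true: slotB was deleted in this layer
}
```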
2024-11-22 16:55:43 +08:00
wangjingcun 16f2f7155f
all: typos in comments (#30779)
fixes some typos
2024-11-22 09:02:45 +01:00
rjl493456442 a25be32fa9
core, eth, internal, miner: remove unnecessary parameters (#30776)
Follow-up to #30745, this change removes some unnecessary parameters.
2024-11-22 08:17:32 +01:00
rjl493456442 e3d61e6db0
core, eth, internal, cmd: rework EVM constructor (#30745)
This pull request refactors the EVM constructor by removing the
TxContext parameter.

The EVM object is frequently overused. Ideally, only a single EVM
instance should be created and reused throughout the entire state
transition of a block, with the transaction context switched as needed
by calling evm.SetTxContext.

Unfortunately, in some parts of the code, the EVM object is repeatedly
created, resulting in unnecessary complexity. This pull request is the
first step towards gradually improving and simplifying this setup.
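
Roughly the intended pattern after the refactor; the signatures are simplified and recalled from memory, so treat this as a sketch rather than the exact API:

```go
package sketch

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/core/vm"
	"github.com/ethereum/go-ethereum/params"
)

// applyBlockTxs sketches the intended pattern: one EVM per block, reused for
// every transaction by switching its tx context, instead of constructing a
// fresh EVM per transaction.
func applyBlockTxs(blockCtx vm.BlockContext, statedb vm.StateDB, cfg *params.ChainConfig,
	txs []*types.Transaction, signer types.Signer, baseFee *big.Int, gp *core.GasPool) error {

	evm := vm.NewEVM(blockCtx, statedb, cfg, vm.Config{})
	for _, tx := range txs {
		msg, err := core.TransactionToMessage(tx, signer, baseFee)
		if err != nil {
			return err
		}
		// Swap in the per-transaction context; the EVM itself is reused.
		evm.SetTxContext(core.NewEVMTxContext(msg))
		if _, err := core.ApplyMessage(evm, msg, gp); err != nil {
			return err
		}
	}
	return nil
}
```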

---------

Co-authored-by: Martin Holst Swende <martin@swende.se>
2024-11-20 12:35:52 +01:00
Martin HS 6d3d252a5e
core/vm/program: evm bytecode-building utility (#30725)
In many cases, there is a need to create somewhat nontrivial bytecode. A
recent example is the verkle statetests, where we want a `CREATE2` op
to create a contract, which can then be invoked, and when invoked does a
selfdestruct-to-self.

It is overkill to go full Solidity, but it is also a bit tricky to
assemble this by concatenating bytes. This PR takes an approach that
has been used in goevmlab for several years.

Using this utility, the case can be expressed as: 
```golang
	// Some runtime code
	runtime := program.New().Ops(vm.ADDRESS, vm.SELFDESTRUCT).Bytecode()
	// A constructor returning the runtime code
	initcode := program.New().ReturnData(runtime).Bytecode()
	// A factory invoking the constructor
	outer := program.New().Create2AndCall(initcode, nil).Bytecode()
```

We have a lot of places in the codebase where we concatenate bytes, cast
from `vm.OpCode`. By taking this approach instead, those places can be made a
bit more maintainable/robust.
2024-11-20 08:40:21 +01:00
jwasinger 581e2140f2
core/txpool, eth/catalyst: clear transaction pool in Rollback (#30534)
This adds an API method `DropTransactions` to legacy pool, blob pool and
txpool interface. This method removes all txs currently tracked in the
pools.

It modifies the simulated beacon to use the new method in `Rollback`,
which removes the previous hacky implementation that also erroneously reset
the gas tip to 1 gwei.
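
A sketch of the shape described above, using placeholder types; only the message text is the source here, so the names and signatures are assumptions:

```go
package sketch

// txPool stands in for the txpool interface after this change: every pool
// implementation (legacy pool, blob pool) exposes a way to drop all tracked
// transactions.
type txPool interface {
	DropTransactions()
}

// simulatedBeacon stands in for eth/catalyst's simulated beacon.
type simulatedBeacon struct {
	pool txPool
}

// Rollback now simply clears the pool instead of rebuilding it, which is what
// previously also (and erroneously) reset the gas tip to 1 gwei.
func (b *simulatedBeacon) Rollback() {
	b.pool.DropTransactions()
}
```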

---------

Co-authored-by: Felix Lange <fjl@twurst.com>
2024-11-19 13:35:52 +01:00
bitcoin-lightning 83790b0729
core: fix typos (#30767) 2024-11-19 14:26:39 +08:00
Martin HS ec280e030f
core/state: tests on the binary iterator (#30754)
Fixes an error in the binary iterator, adds additional testcases

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2024-11-15 07:59:06 +01:00
rjl493456442 74ef47462f
core/state, triedb/database: refactor state reader (#30712)
Co-authored-by: Martin HS <martin@swende.se>
2024-11-09 08:08:06 +08:00
Karol Chojnowski 3c7336b0e9
core/state: invoke OnCodeChange-hook on selfdestruct (#30686)
This change invokes the OnCodeChange hook when a selfdestruct operation is performed and a contract is removed. This is an event that can be consumed by tracers.
2024-11-08 15:25:30 +01:00
Martin HS e56bbd77a4
core/state: small fix in hooked statedb (#30732)
fixes a very tiny bug
2024-11-05 18:29:37 +01:00
Martin HS da17f2d65b
all: fix issues with benchmarks (#30667)
This PR fixes some issues with benchmarks

- [x] Removes log output from a log-test
- [x] Avoids a `nil`-defer in `triedb/pathdb`
- [x] Fixes some crashes re tracers
- [x] Refactors a very resource-expensive benchmark for blobpool.
**NOTE**: this rewrite touches live production code (a little bit), as
it makes the validator-function used by the blobpool configurable.
- [x] Switch some benches over to use pebble over leveldb
- [x] reduce mem overhead in the setup-phase of some tests
- [x] Marks some tests with a long setup-phase to be skipped if `-short`
is specified (where long is on the order of tens of seconds). Ideally,
in my opinion, one should be able to run with `-benchtime 10ms -short`
and sanity-check all tests very quickly.
- [x] Drops a metrics benchmark which times the speed of `copy`.

---------

Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
2024-11-04 15:10:12 +01:00
Guillaume Ballet 06cbc80754
core, trie: verkle state processor tests (#30672)
Tests that are crucial for verifying that the verkle testnet functions properly.

---------

Signed-off-by: Guillaume Ballet <3272758+gballet@users.noreply.github.com>
Co-authored-by: Ignacio Hagopian <jsign.uy@gmail.com>
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Martin HS <martin@swende.se>
2024-11-04 14:19:50 +01:00
Martin HS 014e2b037f
core/vm/runtime: invoke tx-end hook (#30711)
When using the `core/vm/runtime` helpers to execute code, callbacks for the tx end were not invoked. This change fixes it by invoking them.
2024-11-04 11:39:06 +01:00
piersy 484f0f4e84
core/txpool: improve error responses with wrapped errors (#30715) 2024-11-04 12:32:41 +02:00
lightclient 9afb18dd6f
core: add code to witness when state object is accessed (#30698)
I think the core code should generally be agnostic about the witness and
the statedb layer should determine what elements need to be included in
the witness. Because code is accessed via `GetCode`, and
`GetCodeLength`, the statedb will always know when it needs to add that
code into the witness.

The edge case is block hashes, so we continue to add them manually in
the implementation of `BLOCKHASH`.

It probably makes sense to refactor statedb so we have a wrapped
implementation that accumulates the witness, but this is a simpler
change that makes #30078 less aggressive.
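
A sketch of the idea with placeholder types: the statedb-level accessor is the single place where code lands in the witness, so core code never has to add it manually (block hashes excepted, as noted above):

```go
package sketch

// witness is a stand-in for the structure accumulating the block witness.
type witness struct {
	codes map[string]struct{}
}

func (w *witness) addCode(code []byte) {
	if len(code) > 0 {
		w.codes[string(code)] = struct{}{}
	}
}

// witnessState wraps a plain state reader; every code access records the code
// in the witness as a side effect.
type witnessState struct {
	inner   interface{ GetCode(addr [20]byte) []byte }
	witness *witness
}

func (s *witnessState) GetCode(addr [20]byte) []byte {
	code := s.inner.GetCode(addr)
	if s.witness != nil {
		s.witness.addCode(code)
	}
	return code
}
```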
2024-10-31 12:19:01 +02:00
Martin HS 25bc07749c
core/vm: speed up push and interpreter loop (#30662)
Looking at the cpu profile of a burntpix benchmark, I noticed that a lot
of time was spent in gas-used, in the interpreter loop. It's an actual
call (not inlined), which explicitly wants to be ignored by tracing
("tracing.GasChangeIgnored"), so it can be safely and simply inlined.

The other change is in `pushX`. These also do a call to
`common.RightPadBytes`. I replaced that by doing a corresponding `Lsh`
on the `u256` if needed. Note: it's needed only to make the stack output
look right, for fuzzers. It technically doesn't matter what we put
there: if code ends on a pushdata immediate, nothing will consume the
stack element. We could just as well just ignore it, if we didn't care
about fuzzers (which I do).
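
A sketch of the replacement (not the exact interpreter code): set the integer from the bytes that are actually available, then shift left for any bytes that run past the end of the code, which yields the same value the right-padded slice would have produced.

```go
package main

import (
	"fmt"

	"github.com/holiman/uint256"
)

// pushImmediate reads a pushByteSize-byte immediate at pc+1 without
// allocating a padded copy of the code slice.
func pushImmediate(code []byte, pc, pushByteSize uint64) *uint256.Int {
	start := min(pc+1, uint64(len(code)))
	end := min(start+pushByteSize, uint64(len(code)))
	v := new(uint256.Int).SetBytes(code[start:end])
	if missing := pushByteSize - (end - start); missing > 0 {
		v.Lsh(v, uint(8*missing)) // account for immediates truncated by end-of-code
	}
	return v
}

func main() {
	code := []byte{0x60, 0xff}                      // PUSH1 0xff
	fmt.Println(pushImmediate(code, 0, 1).Uint64()) // 255
}
```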

Seems quite a lot faster on burntpix, according to my runs. 

This PR:
```
EVM gas used:    5642735088
execution time:  34.84609475s
allocations:     915683
allocated bytes: 175334088
```
```
EVM gas used:    5642735088
execution time:  36.671958278s
allocations:     915701
allocated bytes: 175340528
```

Master
```
EVM gas used:    5642735088
execution time:  49.349209526s
allocations:     915684
allocated bytes: 175333368
```
```
EVM gas used:    5642735088
execution time:  46.581006598s
allocations:     915681
allocated bytes: 175330728
```

---------

Co-authored-by: Sina M <1591639+s1na@users.noreply.github.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
2024-10-30 18:01:47 +01:00
Marius van der Wijden 236147bf70
ethdb: refactor Database interface (#30693) 2024-10-29 10:32:40 +02:00
Péter Szilágyi 7180d26530
core, eth, node: break rawdb -> {leveldb, pebble} dependency (#30689) 2024-10-29 10:31:04 +02:00
Felföldi Zsolt 80bdab757d
ethdb: add DeleteRange feature (#30668)
This PR adds `DeleteRange` to `ethdb.KeyValueWriter`. While range
deletion using an iterator can be really slow, `DeleteRange` is natively
supported by pebble and apparently runs in O(1) time (typically 20-30ms
in my tests for removing hundreds of millions of keys and gigabytes of
data). For leveldb and memorydb an iterator based fallback is
implemented. Note that since the iterator method can be slow, and a
database function should not unexpectedly block for a very long time,
the number of deleted keys is limited to 10000, which should ensure that
it does not block for more than a second. ErrTooManyKeys is returned if
the range has only been partially deleted; in this case the caller can
repeat the call until it finally succeeds.
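
A simplified sketch of the iterator-based fallback; the real implementation in ethdb differs in details such as batching, but the retry contract is the same:

```go
package sketch

import (
	"bytes"
	"errors"

	"github.com/ethereum/go-ethereum/ethdb"
)

// errTooManyKeys mirrors the ErrTooManyKeys behaviour described above.
var errTooManyKeys = errors.New("too many keys in deleted range")

// deleteRangeFallback deletes keys in [start, end) one by one, giving up with
// errTooManyKeys after maxKeys deletions so the call never blocks for long;
// the caller simply retries until it returns nil.
func deleteRangeFallback(db ethdb.KeyValueStore, start, end []byte, maxKeys int) error {
	it := db.NewIterator(nil, start)
	defer it.Release()

	var deleted int
	for it.Next() {
		if bytes.Compare(it.Key(), end) >= 0 {
			return nil // reached the end of the range
		}
		if deleted >= maxKeys {
			return errTooManyKeys // range only partially deleted
		}
		if err := db.Delete(it.Key()); err != nil {
			return err
		}
		deleted++
	}
	return it.Error()
}
```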
2024-10-25 17:33:46 +02:00
jwasinger 24c5493bec
core/vm: remove debug printout in eof test (#30665) 2024-10-24 09:13:01 +02:00
Sina M 461afdf665
core: fix tracing of system calls (#30666)
This change makes it so that the wrapped statedb with tracing-hooks is passed to the system call processing

Fixes #30658
2024-10-24 09:11:47 +02:00
jwasinger 478012ab23
all: remove TerminalTotalDifficultyPassed (#30609)
Rebased https://github.com/ethereum/go-ethereum/pull/29766. The
downstream branch appears to have been deleted and I don't have perms to
push to that fork.

`TerminalTotalDifficultyPassed` is removed. `TerminalTotalDifficulty`
must now be non-nil, and it is expected that networks are already
merged: we can only import PoW/Clique chains, not produce blocks on
them.

---------

Co-authored-by: stevemilk <wangpeculiar@gmail.com>
2024-10-23 08:26:18 +02:00
Martin HS 459bb4a647
core/state: move state log mechanism to a separate layer (#30569)
This PR moves the logging/tracing-facilities out of `*state.StateDB`,
in to a wrapping struct which implements `vm.StateDB` instead.

In most places, it is a pretty straight-forward change: 
- First, hoisting the invocations from state objects up to the statedb. 
- Then making the mutation-methods simply return the previous value, so
that the external logging layer could log everything.

Some internal code uses the direct object-accessors to mutate the state,
particularly in testing and in setting up state overrides, which means
that these changes are unobservable by the hooked layer. Thus, configuring
the overrides is not necessarily part of the API we want to publish.

The trickiest part about the layering is that when the selfdestructs are
finally deleted during `Finalise`, there's the possibility that someone
sent some ether to it, which is burnt at that point, and thus needs to
be logged. The hooked layer reaches into the inner layer to figure out
these events.

In package `vm`, the conversion from `state.StateDB + hooks` into a
hooked `vm.StateDB` is performed where needed.
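
A sketch of the layering with a stand-in inner interface (the real wrapper covers many more mutators): the inner mutator returns the previous value, and the wrapper emits the tracing hook from outside.

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/tracing"
)

// nonceSetter stands in for the underlying *state.StateDB; per the change
// above, mutators now return the previous value so the wrapper can log it.
type nonceSetter interface {
	SetNonce(addr common.Address, nonce uint64) (prev uint64)
}

// hookedState wraps the plain statedb and emits tracing hooks from outside it.
type hookedState struct {
	inner nonceSetter
	hooks *tracing.Hooks
}

func (h *hookedState) SetNonce(addr common.Address, nonce uint64) {
	prev := h.inner.SetNonce(addr, nonce)
	if h.hooks != nil && h.hooks.OnNonceChange != nil {
		h.hooks.OnNonceChange(addr, prev, nonce)
	}
}
```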

---------

Co-authored-by: Gary Rong <garyrong0905@gmail.com>
2024-10-23 08:03:36 +02:00
Péter Szilágyi a5fe7353cf
common: drop BigMin and BigMax, they pollute our dep graph (#30645)
Way back we've added `common.math.BigMin` and `common.math.BigMax`.
These were kind of cute helpers, but unfortunate ones, because packages
all over our codebase added dependencies on this package just to avoid
having to write out 3 lines of code.

Because of this, we've also started having package name clashes with the
stdlib `math`, which got solved even more badly by moving some helpers
over ***from*** the stdlib into our custom lib (e.g. MaxUint64). The
latter ones were nuked out in a previous PR, and this PR nukes out BigMin
and BigMax, inlining them at all call sites.

As we're transitioning to uint256, if need be, we can add a min and max
to that.
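
For illustration, the kind of three-line inline that now replaces a `common/math.BigMin` call site:

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	a, b := big.NewInt(7), big.NewInt(3)

	// Inlined minimum, written out at the call site instead of importing
	// common/math for a one-liner.
	min := a
	if b.Cmp(min) < 0 {
		min = b
	}
	fmt.Println(min) // 3
}
```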
2024-10-21 12:45:33 +03:00
Péter Szilágyi 48d05c43c9
all: get rid of custom MaxUint64 and MaxUint64 (#30636) 2024-10-20 14:41:51 +03:00
Péter Szilágyi babd5d8026
core/state: fix runaway alloc caused by prefetcher heap escape (#30629)
Co-authored-by: lightclient <lightclient@protonmail.com>
2024-10-20 13:25:15 +03:00
rjl493456442 b6c62d5887
core, trie, triedb: minor changes from snapshot integration (#30599)
This change ports some non-important changes from https://github.com/ethereum/go-ethereum/pull/30159, including interface renaming and some trivial refactorings.
2024-10-18 17:06:31 +02:00
Péter Szilágyi afea3bd49c
beacon/engine, core/txpool, eth/catalyst: add engine_getBlobsV1 API (#30537) 2024-10-17 19:27:35 +03:00
Sina M 978ca5fc5e
eth/tracers: various fixes (#30540)
Breaking changes:

- The ChainConfig was exposed to tracers via the VMContext passed in
`OnTxStart`. This is unnecessary, especially looking through the lens of
live tracers, as the chain config remains the same throughout the lifetime of
the program. It was there so that native API-invoked tracers could
access it, so instead we moved it to the constructor of API tracers.

Non-breaking:

- Change the default config of the tracers to be `{}` instead of nil.
This way an extra nil check can be avoided.

Refactoring:

- Rename `supply` struct to `supplyTracer`.
- Un-export some hook definitions.
2024-10-17 06:51:47 +02:00
Marius van der Wijden 18a591811f
core: reduce peak memory usage during reorg (#30600)
~~Opening this as a draft to have a discussion.~~ Pressed the wrong
button
I had [a previous PR](https://github.com/ethereum/go-ethereum/pull/24616)
a long time ago which reduced the peak memory used during reorgs by not
accumulating all transactions and logs.
This PR reduces the peak memory further by not storing the blocks in
memory.
However this means we need to pull the blocks back up from storage
multiple times during the reorg.
I collected the following numbers on peak memory usage: 

// Master: BenchmarkReorg-8 10000 899591 ns/op 820154 B/op 1440
allocs/op 1549443072 bytes of heap used
// WithoutOldChain: BenchmarkReorg-8 10000 1147281 ns/op 943163 B/op
1564 allocs/op 1163870208 bytes of heap used
// WithoutNewChain: BenchmarkReorg-8 10000 1018922 ns/op 943580 B/op
1564 allocs/op 1171890176 bytes of heap used

Each block contains a transaction with ~50k bytes and we're doing a 10k
block reorg, so the chain should be ~500MB in size

---------

Co-authored-by: Péter Szilágyi <peterke@gmail.com>
2024-10-16 19:46:40 +03:00
Péter Szilágyi 368e16f39d
core, eth, ethstats: simplify chain head events (#30601) 2024-10-16 10:32:58 +03:00
rjl493456442 15bf90ebc5
core, ethdb/pebble: run pebble in non-sync mode (#30573)
Implements https://github.com/ethereum/go-ethereum/issues/29819
2024-10-15 18:10:03 +03:00
Martin HS 5adc314817
build: update to golangci-lint 1.61.0 (#30587)
Changelog: https://golangci-lint.run/product/changelog/#1610 

Removes `exportloopref` (no longer needed), replaces it with
`copyloopvar` which is basically the opposite.

Also adds: 
- `durationcheck`
- `gocheckcompilerdirectives`
- `reassign`
- `mirror`
- `tenv`

---------

Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
2024-10-14 19:25:22 +02:00
Felix Lange 16f64098b9
core: enable EIP-2935 in chain maker (#30575) 2024-10-13 18:51:51 +02:00
Felix Lange 3a5313f3f3
all: implement EIP-7002 & EIP-7251 (#30571)
This is a redo of #29052 based on newer specs. Here we implement EIPs
scheduled for the Prague fork:

- EIP-7002: Execution layer triggerable withdrawals
- EIP-7251: Increase the MAX_EFFECTIVE_BALANCE

Co-authored-by: lightclient <lightclient@protonmail.com>
2024-10-11 21:36:13 +02:00
Karol Chojnowski 16bf471151
core/tracing: add GetTransientState method to StateDB interface (#30531)
Allows live custom tracers to access contract transient storage through the StateDB interface.
2024-10-10 13:03:03 +02:00
Martin HS 58cf152e98
eth/catalyst, core/txpool/blobpool: make tests output less logs (#30563)
A couple of tests set the debug level to `TRACE` on stdout,
and all subsequent tests in the same package are also affected
by that, resulting in outputs of tens of megabytes. 

This PR removes such calls from two packages where it was prevalent.
This makes getting a summary of failing tests simpler, and possibly
reduces some strain from the CI pipeline.
2024-10-10 07:54:07 +02:00
Felix Lange 2936b41514
all: implement flat deposit requests encoding (#30425)
This implements recent changes to EIP-7685, EIP-6110, and
execution-apis.

---------

Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Shude Li <islishude@gmail.com>
2024-10-09 12:24:58 +02:00
Martin HS 56c4f2bfd4
core/vm, cmd/evm: implement eof validation (#30418)
The bulk of this PR was authored by @lightclient in the original
EOF work. More recently, the code was picked up and reworked for the new EOF
specification by @MariusVanDerWijden in https://github.com/ethereum/go-ethereum/pull/29518, and @shemnon has also contributed fixes.

This PR is an attempt to start eating the elephant one small bite at a
time, by selecting only the eof-validation as a standalone piece which
can be merged without interfering too much in the core stuff.

In this PR: 

- [x] Validation of eof containers, lifted from #29518, along with
test-vectors from consensus-tests and fuzzing, to ensure that the move
did not lose any functionality.
- [x] Definition of eof opcodes, which is a prerequisite for validation
- [x] Addition of `undefined` to a jumptable entry item. I'm not
super-happy with this, but for the moment it seems the least invasive
way to do it. A better way might be to go back and allow nil items or
nil execute-functions to denote "undefined".
- [x] benchmarks of eof validation speed 


---------

Co-authored-by: lightclient <lightclient@protonmail.com>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Danno Ferrin <danno.ferrin@shemnon.com>
2024-10-02 15:05:50 +02:00
rjl493456442 eff0bed91b
core/rawdb: freezer index repair (#29792)
This pull request removes the `fsync` of index files in the freezer.ModifyAncients
function for a performance gain.

Originally, fsync was added after each freezer write operation to ensure
the written data was truly transferred to disk. Unfortunately, it turns
out `fsync` can be relatively slow, especially on
macOS (see https://github.com/ethereum/go-ethereum/issues/28754 for more
information).

In this pull request, fsync for the index file is removed, as it turns out
the index file can be recovered even after an unclean shutdown. But fsync for
the data file is still kept, as we have no meaningful way to validate data
correctness after an unclean shutdown.

---

**But why do we need the `fsync` in the first place?** 

It's necessary for the freezer to survive/recover after a machine crash
(e.g. power failure). In Linux, whenever a file write is performed, the file
metadata update and the data update are not necessarily performed at the same
time. Typically, the metadata will be flushed/journalled ahead of the file
data. Therefore, we make the pessimistic assumption that the file is first
extended with invalid "garbage" data (normally zero bytes) and that afterwards
the correct data replaces the garbage.

We have observed that the index file of the freezer often contains garbage
entries with zero values (filenumber = 0, offset = 0) after a machine power
failure. This proves that the index file is extended without the data being
flushed, and this corruption can eventually destroy the whole freezer data.

Performing fsync after each write operation reduces the time window for data
to be transferred to the disk and ensures the correctness of the on-disk data
to the greatest extent.

---

**How can we maintain this guarantee without relying on fsync?**

Because the items in the index file are strictly ordered, we can leverage
this characteristic to detect corruption and truncate the invalid entries
when the freezer is opened. Specifically, these validation rules are applied
to each index file:

For two consecutive index items:

- If their file numbers are the same, the offset of the latter one MUST NOT
be less than that of the former.
- If the file number of the latter one is equal to that of the former plus
one, then the offset of the latter one MUST NOT be 0.
- If their file numbers are not equal, and the latter's file number is not
equal to the former's plus one, the latter one is not valid.

Also, the first non-head item must refer to the earliest data file, or to the
next file if the earliest file is not sufficient to hold the first item (a
very special case, only theoretically possible in tests).

With these validation rules, we can detect invalid items in the index file
with high probability.
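
A sketch of the check over consecutive entries (simplified types; the real code operates on the freezer's index records):

```go
package main

import "fmt"

// indexEntry is a simplified stand-in for a freezer index record.
type indexEntry struct {
	filenum uint32
	offset  uint32
}

// firstInvalid returns the position of the first entry violating the rules
// above, i.e. where the index should be truncated when the freezer is opened;
// it returns len(entries) if everything checks out.
func firstInvalid(entries []indexEntry) int {
	for i := 1; i < len(entries); i++ {
		prev, cur := entries[i-1], entries[i]
		switch {
		case cur.filenum == prev.filenum:
			if cur.offset < prev.offset {
				return i // offsets within a file must not go backwards
			}
		case cur.filenum == prev.filenum+1:
			if cur.offset == 0 {
				return i // first entry of a new data file cannot sit at offset 0
			}
		default:
			return i // file numbers may only stay the same or advance by one
		}
	}
	return len(entries)
}

func main() {
	entries := []indexEntry{{1, 10}, {1, 20}, {2, 5}, {0, 0}} // last one is garbage
	fmt.Println(firstInvalid(entries))                        // 3
}
```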

--- 

But unfortunately, the following scenarios are not covered and could still
lead to freezer corruption:

**All items in the index file are zero**

It's impossible to distinguish whether they are truly zero (e.g. all the data
entries maintained in the freezer are zero-sized) or just garbage left by the
OS. In this case, these index items will be kept while the entire data file is
truncated, meaning the freezer is corrupted.

However, the probability of this situation occurring is quite low, and even if
it occurs, the freezer can be considered to be close to an empty state.
Re-running the state sync should be acceptable.

**Index file is intact while the corresponding data file is corrupted**

It might be possible that the data file is corrupted, with its file size
extended correctly but filled with garbage (e.g. zero bytes). In this case,
it's impossible to detect the corruption by index validation.

We can either choose to `fsync` the data file, or blindly believe that if the
index file is intact then the data file is very likely intact as well. In this
pull request, the first option is taken.
2024-10-01 18:16:16 +02:00
minh-bq 0a21cb4d21
core/txpool/blobpool: use types.Sender instead of signer.Sender (#30473)
Use types.Sender(signer, tx) to utilize the transaction's sender cache
and avoid repeated address recovery.
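
For reference, the pattern being adopted, using the core/types API:

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// senderOf resolves a transaction's sender via types.Sender, which consults
// the transaction's internal sender cache before falling back to an expensive
// signature recovery through the signer.
func senderOf(signer types.Signer, tx *types.Transaction) (common.Address, error) {
	return types.Sender(signer, tx)
}
```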
2024-09-30 12:06:10 +03:00
Péter Szilágyi 1df75dbe36
Revert "core/txpool, eth/catalyst: ensure gas tip retains current value upon rollback" (#30521)
Reverts ethereum/go-ethereum#30495

You are free to create a proper Clear method if that's the best way. But
one that does a proper cleanup, not some hacky call to set gas which
screws up logs, metrics and everything along the way. Also doesn't work
for legacy pool local transactions.

The current code had a hack in the simulated code, now we have a hack in
live txpooling code. No, that's not acceptable. I want the live code to
be proper, meaningful API, meaningful comments, meaningful
implementation.
2024-09-27 13:56:25 +03:00