Commit Graph

7 Commits

rjl493456442 eff0bed91b
core/rawdb: freezer index repair (#29792)
This pull request removes the `fsync` of index files in the freezer.ModifyAncients function for a performance gain.

Originally, an fsync was added after each freezer write operation to ensure the written data was truly transferred to disk. Unfortunately, it turns out that `fsync` can be relatively slow, especially on macOS (see https://github.com/ethereum/go-ethereum/issues/28754 for more information).

In this pull request, the fsync of the index file is removed, as it turns out the index file can be recovered even after an unclean shutdown. The fsync of the data file is kept, however, as we have no meaningful way to validate the data's correctness after an unclean shutdown.

---

**But why do we need the `fsync` in the first place?** 

Because it's necessary for the freezer to survive/recover after a machine crash (e.g. a power failure). On Linux, when a file write is performed, the file metadata update and the data update are not necessarily performed at the same time. Typically, the metadata is flushed/journalled ahead of the file data. We therefore make the pessimistic assumption that the file is first extended with invalid "garbage" data (normally zero bytes) and that the correct data only replaces the garbage afterwards.

We have observed that the freezer's index file often contains garbage entries with zero values (filenumber = 0, offset = 0) after a machine power failure. This proves that the index file was extended without the data being flushed, and this corruption can eventually destroy the whole freezer.

Performing an fsync after each write operation shortens the window during which data has not yet reached the disk and ensures the correctness of the on-disk data to the greatest extent possible.
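
In plain Go, the write-then-sync pattern looks roughly like the sketch below (an illustrative helper, not the freezer code itself):

```go
package sketch

import "os"

// writeDurably appends data to the file and forces it to stable storage
// before returning, narrowing the window in which a crash could leave the
// file extended with unflushed garbage.
func writeDurably(f *os.File, data []byte) error {
	if _, err := f.Write(data); err != nil {
		return err
	}
	// Sync flushes the file's contents from OS caches to disk.
	return f.Sync()
}
```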

---

**How can we maintain this guarantee without relying on fsync?**

Because the items in the index file are strictly ordered, we can leverage this property to detect corruption and truncate the corrupted entries when the freezer is opened. Specifically, the following validation rules are applied to each index file:

For two consecutive index items:

- If their file numbers are the same, then the offset of the latter one
MUST not be less than that of the former.
- If the file number of the latter one is equal to that of the former
plus one, then the offset of the latter one MUST not be 0.
- If their file numbers are not equal, and the latter's file number is not equal to the former's plus one, the latter one is invalid.

Also, the first non-head item must refer to the earliest data file, or to the next file if the earliest file is not large enough to hold the first item (a very special case, only theoretically possible in tests).

With these validation rules, we can detect invalid items in the index file with high probability; a simplified sketch of the pairwise checks is shown below.
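
As an illustration only (field and function names are assumptions, not the exact go-ethereum types), the pairwise rules can be expressed like this:

```go
package sketch

// indexEntry mirrors the layout described above: the number of the data file
// an item lives in, and the end offset of that item within the file.
type indexEntry struct {
	filenum uint32 // data file number
	offset  uint32 // end offset of the item within the data file
}

// validSuccessor reports whether index item b may legally follow item a,
// applying the three rules listed above.
func validSuccessor(a, b indexEntry) bool {
	switch {
	case b.filenum == a.filenum:
		// Same data file: offsets must be non-decreasing.
		return b.offset >= a.offset
	case b.filenum == a.filenum+1:
		// Next data file: the first item placed in a new file cannot end at offset 0.
		return b.offset != 0
	default:
		// The file number may only stay the same or advance by exactly one.
		return false
	}
}
```

On open, the freezer can walk consecutive index entries with a check like this and truncate the index at the first violation.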

--- 

Unfortunately, the following scenarios are not covered and could still lead to freezer corruption if they occur:

**All items in the index file are zero**

It's impossible to distinguish whether they are genuinely zero (e.g. all the data entries maintained in the freezer are zero-sized) or just garbage left by the OS. In this case, these index items are kept and the entire data file is truncated to match; in other words, the freezer is corrupted.

However, the probability of this situation occurring is quite low, and even if it does occur, the freezer can be considered close to an empty state, so re-running the state sync should be acceptable.

**Index file is intact while the corresponding data file is corrupted**

It is possible for the data file to be corrupted in such a way that its size is extended correctly but filled with garbage (e.g. zero bytes). In this case, the corruption cannot be detected by index validation.

We can either choose to `fsync` the data file, or trust that if the index file is intact then the data file is very likely intact as well. In this pull request, the first option is taken.
2024-10-01 18:16:16 +02:00
rjl493456442 326fa00759
core/rawdb: fsync the index file after each freezer write (#28483)
* core/rawdb: fsync the index and data file after each freezer write

* core/rawdb: fsync the data file in freezer after write
2023-11-10 12:56:39 +02:00
s7v7nislands 905a723fae
core/rawdb: use atomic int added in go1.19 (#26935) 2023-03-21 07:10:23 -04:00
rjl493456442 1941c5e6c9
core/rawdb: untie freezer and ancient chain data (#24684)
Previously the freezer has only been used for storing ancient chain data, while it can obviously be used for more. This PR unties the chain data from the freezer, keeping the minimal freezer structure and moving all other logic (like incrementally freezing block data) into a separate structure called ChainFreezer.

This PR also extends the database interface by adding a new ancient-store function, AncientDatadir, which returns the root directory of the ancient store. The ancient root directory can be used when we want to open other ancient stores (e.g. a reverse diff freezer).
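
A minimal sketch of what such an interface extension could look like (the method set and exact signature shown here are illustrative assumptions, not the real go-ethereum interface):

```go
package sketch

// AncientStore is a trimmed-down, illustrative ancient-store interface.
type AncientStore interface {
	// Ancient retrieves a binary blob from the append-only ancient store.
	Ancient(kind string, number uint64) ([]byte, error)

	// AncientDatadir returns the path of the root ancient directory, so that
	// callers can open other ancient stores (e.g. a reverse diff freezer)
	// rooted in the same location.
	AncientDatadir() (string, error)
}
```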
2022-05-06 13:28:42 +02:00
rjl493456442 538a868384
core/rawdb, cmd, ethdb, eth: implement freezer tail deletion (#23954)
* core/rawdb, cmd, ethdb, eth: implement freezer tail deletion

* core/rawdb: address comments from martin and sina

* core/rawdb: fixes cornercase in tail deletion

* core/rawdb: separate metadata into a standalone file

* core/rawdb: remove unused code

* core/rawdb: add random test

* core/rawdb: polish code

* core/rawdb: fsync meta file before manipulating the index

* core/rawdb: fix typo

* core/rawdb: address comments
2022-03-10 09:37:23 +01:00
Martin Holst Swende b7a6409cc1
core/rawdb: better error message in freezer (#23901)
* core/rawdb: better error message in freezer

* Apply suggestions from code review
2021-11-16 11:33:56 +02:00
Martin Holst Swende 794c6133ef
core/rawdb: freezer batch write (#23462)
This change is a rewrite of the freezer code.

When writing ancient chain data to the freezer, the previous version first encoded each individual item to a temporary buffer, then wrote the buffer. For small item sizes (for example, in the block hash freezer table), this strategy caused a lot of system calls for writing tiny chunks of data. It also allocated a lot of temporary []byte buffers.

In the new version, we instead encode multiple items into a reusable batch buffer, which is then written to the file all at once. This avoids performing a system call for every inserted item.

To make the internal batching work, the ancient database API had to be changed. While
integrating this new API in BlockChain.InsertReceiptChain, additional optimizations were
also added there.
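
The batching idea, in a generic sketch (illustrative names, not the actual freezer batch implementation):

```go
package sketch

import "os"

// batchWriter accumulates encoded items in one reusable buffer and writes
// them to the underlying file in a single call, instead of one write per item.
type batchWriter struct {
	buf  []byte   // reusable batch buffer; reset but not reallocated between flushes
	file *os.File // destination data file
}

// appendItem encodes an item (here simply raw bytes) into the batch buffer.
func (w *batchWriter) appendItem(item []byte) {
	w.buf = append(w.buf, item...)
}

// flush writes the whole batch with a single write call and resets the buffer.
func (w *batchWriter) flush() error {
	if len(w.buf) == 0 {
		return nil
	}
	_, err := w.file.Write(w.buf)
	w.buf = w.buf[:0] // keep capacity, drop contents
	return err
}
```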

Co-authored-by: Felix Lange <fjl@twurst.com>
2021-09-07 12:31:17 +02:00