This pull request delivers the new version of the state history, where
the raw storage key is used instead of the hash.
Before the Cancun fork, the protocol allowed destructing a specific
account, and therefore all the storage slots owned by it had to be
wiped in the same transition.
Technically, storage wiping should be performed through storage
iteration, and if the state snapshot is unavailable, only the storage
key hash is obtainable by traversal. Therefore, the storage key hash was
chosen as the identifier in the old version of the state history.
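Concretely, a snapshot-less wipe has roughly the following shape (a
sketch with hypothetical helpers, not the exact geth code; the trie
iterator API differs slightly across versions). The iterator only ever
yields keccak256 hashes of the slot keys, so hashes were all the old
history could record:

```go
import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/trie"
)

// wipeStorage deletes every slot of a destructed account by iterating its
// storage trie. it.Key is keccak256(slotKey); the raw slot key is unknown
// here, which is why the old history could only record hashes.
func wipeStorage(storageTrie *trie.Trie, batch ethdb.Batch, addrHash common.Hash) error {
	nodeIt, err := storageTrie.NodeIterator(nil)
	if err != nil {
		return err
	}
	it := trie.NewIterator(nodeIt)
	for it.Next() {
		slotHash := common.BytesToHash(it.Key)
		deleteStorageSlot(batch, addrHash, slotHash) // hypothetical helper
	}
	return it.Err
}
```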
Fortunately, account self-destruction has been deprecated by the
protocol since the Cancun fork, and there are no remaining empty
accounts eligible for deletion under EIP-158. Therefore, we can conclude
that no storage wiping should occur after the Cancun fork, and it no
longer makes sense to keep using the hash.
Besides, another big reason for making this change is that the
current-format state history is unusable if verkle is activated. The
verkle tree has a different key derivation scheme (merkle uses
keccak256), so the preimage of the key hash must be provided in order to
make verkle rollback functional. This pull request is a prerequisite for
landing verkle.
Additionally, the raw storage key is more human-friendly for anyone who
wants to inspect the history manually, even though Solidity already
performs some hashing to derive the storage location.
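As an aside (standard Solidity behavior, not code from this PR): for a
`mapping(address => uint256)` at root slot 0, the storage key of an entry is
`keccak256(pad32(key) . pad32(0))`, so the raw key the new history records is
already one hash deep; the old history stored the keccak256 of that value,
one more hash removed. A self-contained sketch using geth's crypto helpers,
where `mappingSlot` is a hypothetical function:

```go
package main

import (
	"encoding/binary"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// mappingSlot returns the storage key Solidity uses for mapping[key] when
// the mapping occupies the given root slot: keccak256(pad32(key) . pad32(slot)).
func mappingSlot(key common.Address, slot uint64) common.Hash {
	buf := make([]byte, 64)
	copy(buf[12:32], key.Bytes())              // address left-padded to 32 bytes
	binary.BigEndian.PutUint64(buf[56:], slot) // slot index, big-endian padded
	return crypto.Keccak256Hash(buf)
}

func main() {
	owner := common.HexToAddress("0x000000000000000000000000000000000000dEaD")
	// This value is the "raw storage key" the new history records; the old
	// history would have stored keccak256 of it instead.
	fmt.Println(mappingSlot(owner, 0))
}
```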
---
This pull request doesn't bump the database version, as I believe the
database should still be compatible if users downgrade from the new geth
version to an old one; the only side effect is that the persisted
new-version state history will be unusable.
---------
Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
In this pull request, the state iterator is implemented. It's mostly copied
from the original state snapshot package, but there are still some important
changes to highlight here:
(a) The iterator for the disk layer consists of a diff iterator and a disk iterator.
Originally, the disk layer in the state snapshot was a wrapper around the disk,
and its corresponding iterator was also a wrapper around the disk iterator.
However, due to structural differences, the disk layer iterator is divided into
two parts:
- The disk iterator, which traverses the content stored on disk.
- The diff iterator, which traverses the aggregated state buffer.
Check out `BinaryIterator` and `FastIterator` for more details; a sketch of
the underlying merge idea follows below.
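The combined iterator is essentially a two-way sorted merge in which the
newer (diff) side shadows the disk side on equal keys. A minimal sketch of
that idea, with illustrative names rather than the exact `BinaryIterator`
implementation:

```go
package pathdb // illustrative placement

import (
	"bytes"

	"github.com/ethereum/go-ethereum/common"
)

// keyIterator is the minimal surface the merge needs from its children.
type keyIterator interface {
	Next() bool
	Hash() common.Hash
}

// binaryIterator merges a diff iterator (aggregated buffer, newer states)
// with a disk iterator (persisted, older states) into one sorted stream.
type binaryIterator struct {
	diff, disk         keyIterator
	diffDone, diskDone bool
	cur                common.Hash
}

func newBinaryIterator(diff, disk keyIterator) *binaryIterator {
	it := &binaryIterator{diff: diff, disk: disk}
	it.diffDone = !diff.Next() // prime both children
	it.diskDone = !disk.Next()
	return it
}

// Next advances to the smallest unvisited key; on a tie the diff entry wins,
// since the buffer shadows whatever is persisted underneath it.
func (it *binaryIterator) Next() bool {
	switch {
	case it.diffDone && it.diskDone:
		return false
	case it.diffDone:
		it.cur = it.disk.Hash()
		it.diskDone = !it.disk.Next()
	case it.diskDone:
		it.cur = it.diff.Hash()
		it.diffDone = !it.diff.Next()
	default:
		a, b := it.diff.Hash(), it.disk.Hash()
		switch bytes.Compare(a[:], b[:]) {
		case -1:
			it.cur = a
			it.diffDone = !it.diff.Next()
		case 1:
			it.cur = b
			it.diskDone = !it.disk.Next()
		default: // same key on both sides: emit once, advance both
			it.cur = a
			it.diffDone = !it.diff.Next()
			it.diskDone = !it.disk.Next()
		}
	}
	return true
}

func (it *binaryIterator) Hash() common.Hash { return it.cur }
```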
(b) The staleness management is improved in `diffAccountIterator` and
`diffStorageIterator`.
Originally, in the `diffAccountIterator`, the layer's staleness had to be
checked within the `Next` function to ensure the iterator remained usable.
Additionally, a read lock on the associated diff layer was required to first
retrieve the account blob; this read-lock protection is essential to prevent
concurrent map reads and writes. Afterward, a staleness check was performed
to ensure the retrieved data was not outdated.
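In code, the original access pattern looked roughly like this (a paraphrase
of the snapshot package, with details elided):

```go
// Paraphrased old pattern: locking and the staleness check are interleaved
// inside the iterator itself on every access.
func (it *diffAccountIterator) Account() []byte {
	// Read lock guards against concurrent map writes while flattening.
	it.layer.lock.RLock()
	blob := it.layer.accountData[it.curHash]
	it.layer.lock.RUnlock()

	// Separate staleness check to ensure the blob read above is still valid.
	if it.layer.Stale() {
		it.fail = errSnapshotStale
		return nil
	}
	return blob
}
```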
The entire logic can be simplified as follows: a `loadAccount` callback is
provided to retrieve account data. If the corresponding state is immutable
(e.g., diff layers in the path database), the staleness check can be skipped,
and a single account data retrieval is sufficient. However, if the
corresponding state is mutable (e.g., the disk layer in the path database),
the callback can operate as follows:
```go
func(hash common.Hash) ([]byte, error) {
	// Hold the read lock for both the staleness check and the retrieval,
	// so the two observations are consistent with each other.
	dl.lock.RLock()
	defer dl.lock.RUnlock()

	if dl.stale {
		return nil, errSnapshotStale
	}
	return dl.buffer.states.mustAccount(hash)
}
```
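For immutable diff layers, the same callback degenerates to a plain lookup
with no lock and no staleness check. A sketch mirroring the snippet above
(the `dl.states.mustAccount` field layout is an assumption for illustration):

```go
func(hash common.Hash) ([]byte, error) {
	// Diff layers are immutable once created, so neither locking nor a
	// staleness check is needed; field names are assumed for illustration.
	return dl.states.mustAccount(hash)
}
```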
The callback solution eliminates the complexity of managing concurrency with
the read lock, since the data retrieval and the staleness check happen
atomically under a single lock acquisition.
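For completeness, here is a rough sketch of how an iterator might consume
such a callback; the struct layout and names below are illustrative
assumptions, not the exact shape of the geth code:

```go
package pathdb // illustrative placement

import "github.com/ethereum/go-ethereum/common"

// accountIterator is a hypothetical consumer of the loadAccount callback.
type accountIterator struct {
	keys        []common.Hash                     // sorted hashes left to visit
	curHash     common.Hash                       // current position
	fail        error                             // sticky failure (e.g. errSnapshotStale)
	loadAccount func(common.Hash) ([]byte, error) // layer-specific retrieval
}

// Next only walks the pre-sorted key list; no locking or staleness logic
// lives in the iterator itself anymore.
func (it *accountIterator) Next() bool {
	if it.fail != nil || len(it.keys) == 0 {
		return false
	}
	it.curHash, it.keys = it.keys[0], it.keys[1:]
	return true
}

// Account delegates every concurrency concern to the callback: a bare read
// for immutable layers, lock-plus-staleness-check for the mutable disk layer.
func (it *accountIterator) Account() []byte {
	blob, err := it.loadAccount(it.curHash)
	if err != nil {
		it.fail = err
		return nil
	}
	return blob
}
```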