// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package trie

import (
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/prque"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// ErrNotRequested is returned by the trie sync when it's requested to process a
// node it did not request.
var ErrNotRequested = errors.New("not requested")

// ErrAlreadyProcessed is returned by the trie sync when it's requested to process a
// node it already processed previously.
var ErrAlreadyProcessed = errors.New("already processed")

// maxFetchesPerDepth is the maximum number of pending trie nodes per depth. The
// role of this value is to limit the number of trie nodes that get expanded in
// memory if the node was configured with a significant number of peers.
const maxFetchesPerDepth = 16384

// request represents a scheduled or already in-flight state retrieval request.
type request struct {
	path []byte      // Merkle path leading to this node for prioritization
	hash common.Hash // Hash of the node data content to retrieve
	data []byte      // Data content of the node, cached until all subtrees complete
	code bool        // Whether this is a code entry

	parents []*request // Parent state nodes referencing this entry (notify all upon completion)
	deps    int        // Number of dependencies before allowed to commit this node

	callback LeafCallback // Callback to invoke if a leaf node is reached on this branch
}

// SyncPath is a path tuple identifying a particular trie node either in a single
// trie (account) or a layered trie (account -> storage).
//
// Content-wise the tuple either has 1 element if it addresses a node in a single
// trie, or 2 elements if it addresses a node in a stacked trie.
//
// To support targeting arbitrary trie nodes, the path needs to support odd nibble
// lengths. To avoid transferring expanded hex form over the network, the last
// part of the tuple (which needs to index into the middle of a trie) is compact
// encoded. In case of a 2-tuple, the first item is always 32 bytes so it is
// simply binary encoded.
//
// Examples:
//   - Path 0x9  -> {0x19}
//   - Path 0x99 -> {0x0099}
//   - Path 0x01234567890123456789012345678901012345678901234567890123456789019  -> {0x0123456789012345678901234567890101234567890123456789012345678901, 0x19}
//   - Path 0x012345678901234567890123456789010123456789012345678901234567890199 -> {0x0123456789012345678901234567890101234567890123456789012345678901, 0x0099}
type SyncPath [][]byte

// newSyncPath converts an expanded trie path from nibble form into a compact
// version that can be sent over the network.
func newSyncPath(path []byte) SyncPath {
	// If the hash is from the account trie, append a single item, if it
	// is from a storage trie, append a tuple. Note, the length 64 is
	// clashing between account leaf and storage root. It's fine though
	// because having a trie node at 64 depth means a hash collision was
	// found and we're long dead.
	if len(path) < 64 {
		return SyncPath{hexToCompact(path)}
	}
	return SyncPath{hexToKeybytes(path[:64]), hexToCompact(path[64:])}
}
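
// Illustrative sketch (not part of the upstream API, the function name is
// hypothetical and exists only for demonstration): it mirrors the SyncPath
// examples documented above, showing how newSyncPath maps a nibble path onto
// its compact wire form. Single-trie paths stay one element; paths of 64 or
// more nibbles split into a 32-byte account key plus a compact storage sub-path.
func exampleSyncPaths() {
	// Account-trie path 0x9 (a single nibble) -> one compact-encoded element {0x19}.
	short := newSyncPath([]byte{0x9})
	_ = short

	// Stacked path: 64 account nibbles followed by one storage nibble. The first
	// element is the 32-byte account key, the second the compact storage sub-path.
	long := make([]byte, 65)
	long[64] = 0x9
	stacked := newSyncPath(long) // len(stacked) == 2
	_ = stacked
}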

// SyncResult is a response with requested data along with its hash.
type SyncResult struct {
	Hash common.Hash // Hash of the originally unknown trie node
	Data []byte      // Data content of the retrieved node
}

// syncMemBatch is an in-memory buffer of successfully downloaded but not yet
// persisted data items.
type syncMemBatch struct {
	nodes map[common.Hash][]byte // In-memory membatch of recently completed nodes
	codes map[common.Hash][]byte // In-memory membatch of recently completed codes
}

// newSyncMemBatch allocates a new memory-buffer for not-yet persisted trie nodes.
func newSyncMemBatch() *syncMemBatch {
	return &syncMemBatch{
		nodes: make(map[common.Hash][]byte),
		codes: make(map[common.Hash][]byte),
	}
}

// hasNode reports whether the trie node with the given hash is already cached.
func (batch *syncMemBatch) hasNode(hash common.Hash) bool {
	_, ok := batch.nodes[hash]
	return ok
}

// hasCode reports whether the contract code with the given hash is already cached.
func (batch *syncMemBatch) hasCode(hash common.Hash) bool {
	_, ok := batch.codes[hash]
	return ok
}

// Sync is the main state trie synchronisation scheduler, which provides yet
// unknown trie hashes to retrieve, accepts node data associated with said hashes
// and reconstructs the trie step by step until all is done.
type Sync struct {
	database ethdb.KeyValueReader     // Persistent database to check for existing entries
	membatch *syncMemBatch            // Memory buffer to avoid frequent database writes
	nodeReqs map[common.Hash]*request // Pending requests pertaining to a trie node hash
	codeReqs map[common.Hash]*request // Pending requests pertaining to a code hash
	queue    *prque.Prque             // Priority queue with the pending requests
	fetches  map[int]int              // Number of active fetches per trie node depth
	bloom    *SyncBloom               // Bloom filter for fast state existence checks
}

// NewSync creates a new trie data download scheduler.
func NewSync(root common.Hash, database ethdb.KeyValueReader, callback LeafCallback, bloom *SyncBloom) *Sync {
	ts := &Sync{
		database: database,
		membatch: newSyncMemBatch(),
		nodeReqs: make(map[common.Hash]*request),
		codeReqs: make(map[common.Hash]*request),
		queue:    prque.New(nil),
		fetches:  make(map[int]int),
		bloom:    bloom,
	}
	ts.AddSubTrie(root, nil, common.Hash{}, callback)
	return ts
}
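
// Hedged usage sketch (not part of the upstream file, the function name and the
// fetch callback are hypothetical): a minimal drive loop showing how a caller is
// expected to exercise the scheduler declared here, i.e. NewSync -> Missing ->
// Process -> Commit until nothing is left. The fetch callback stands in for
// whatever network retrieval the caller performs; everything else uses only the
// exported Sync methods defined in this file.
func exampleSyncDriver(root common.Hash, db ethdb.Database, callback LeafCallback,
	bloom *SyncBloom, fetch func(common.Hash) ([]byte, error)) error {
	sched := NewSync(root, db, callback, bloom)
	for {
		// Ask the scheduler which trie nodes and code blobs are still missing
		nodes, _, codes := sched.Missing(128)
		if len(nodes)+len(codes) == 0 {
			return nil // nothing left to retrieve, trie fully reconstructed
		}
		// Retrieve each item (assumed network fetch) and feed it back in
		for _, hash := range append(nodes, codes...) {
			data, err := fetch(hash)
			if err != nil {
				return err
			}
			if err := sched.Process(SyncResult{Hash: hash, Data: data}); err != nil {
				return err
			}
		}
		// Flush everything downloaded so far to persistent storage
		batch := db.NewBatch()
		if err := sched.Commit(batch); err != nil {
			return err
		}
		if err := batch.Write(); err != nil {
			return err
		}
	}
}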

// AddSubTrie registers a new trie to the sync code, rooted at the designated parent.
func (s *Sync) AddSubTrie(root common.Hash, path []byte, parent common.Hash, callback LeafCallback) {
	// Short circuit if the trie is empty or already known
	if root == emptyRoot {
		return
	}
	if s.membatch.hasNode(root) {
		return
	}
	if s.bloom == nil || s.bloom.Contains(root[:]) {
		// Bloom filter says this might be a duplicate, double check.
		// If database says yes, then at least the trie node is present
		// and we hold the assumption that it's NOT legacy contract code.
		blob := rawdb.ReadTrieNode(s.database, root)
		if len(blob) > 0 {
			return
		}
		// False positive, bump fault meter
		bloomFaultMeter.Mark(1)
	}
	// Assemble the new sub-trie sync request
	req := &request{
		path:     path,
		hash:     root,
		callback: callback,
	}
	// If this sub-trie has a designated parent, link them together
	if parent != (common.Hash{}) {
		ancestor := s.nodeReqs[parent]
		if ancestor == nil {
			panic(fmt.Sprintf("sub-trie ancestor not found: %x", parent))
		}
		ancestor.deps++
		req.parents = append(req.parents, ancestor)
	}
	s.schedule(req)
}

// AddCodeEntry schedules the direct retrieval of a contract code that should not
// be interpreted as a trie node, but rather accepted and stored into the database
// as is.
func (s *Sync) AddCodeEntry(hash common.Hash, path []byte, parent common.Hash) {
	// Short circuit if the entry is empty or already known
	if hash == emptyState {
		return
	}
	if s.membatch.hasCode(hash) {
		return
	}
	if s.bloom == nil || s.bloom.Contains(hash[:]) {
		// Bloom filter says this might be a duplicate, double check.
		// If database says yes, the blob is present for sure.
		// Note, we only check existence with the new code scheme, since fast
		// sync is expected to run on a fresh node. Even if the code exists in
		// the legacy format, fetch and store it with the new scheme anyway.
		if blob := rawdb.ReadCodeWithPrefix(s.database, hash); len(blob) > 0 {
			return
		}
		// False positive, bump fault meter
		bloomFaultMeter.Mark(1)
	}
	// Assemble the new code entry sync request
	req := &request{
		path: path,
		hash: hash,
		code: true,
	}
	// If this code entry has a designated parent, link them together
	if parent != (common.Hash{}) {
		ancestor := s.nodeReqs[parent] // the parent of codereq can ONLY be nodereq
		if ancestor == nil {
			panic(fmt.Sprintf("raw-entry ancestor not found: %x", parent))
		}
		ancestor.deps++
		req.parents = append(req.parents, ancestor)
	}
	s.schedule(req)
}

// Missing retrieves the known missing nodes from the trie for retrieval. To aid
// both eth/6x style fast sync and snap/1x style state sync, the paths of trie
// nodes are returned too, as well as a separate hash list for codes.
func (s *Sync) Missing(max int) (nodes []common.Hash, paths []SyncPath, codes []common.Hash) {
	var (
		nodeHashes []common.Hash
		nodePaths  []SyncPath
		codeHashes []common.Hash
	)
	for !s.queue.Empty() && (max == 0 || len(nodeHashes)+len(codeHashes) < max) {
		// Retrieve the next item in line
		item, prio := s.queue.Peek()

		// If we have too many already-pending tasks for this depth, throttle
		depth := int(prio >> 56)
		if s.fetches[depth] > maxFetchesPerDepth {
			break
		}
		// Item is allowed to be scheduled, add it to the task list
		s.queue.Pop()
		s.fetches[depth]++

		hash := item.(common.Hash)
		if req, ok := s.nodeReqs[hash]; ok {
			nodeHashes = append(nodeHashes, hash)
			nodePaths = append(nodePaths, newSyncPath(req.path))
		} else {
			codeHashes = append(codeHashes, hash)
		}
	}
	return nodeHashes, nodePaths, codeHashes
}
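
// Illustrative sketch (an assumption about the scheduler's priority convention,
// not code from the upstream file): Missing recovers the request depth with
// `prio >> 56`, i.e. the depth is carried in the most significant byte of the
// int64 queue priority assigned when a request is scheduled. The hypothetical
// helper below demonstrates that round trip for the small depths seen in practice.
func examplePriorityDepth(depth int) bool {
	prio := int64(depth) << 56 // pack the depth into the top byte of the priority
	return int(prio>>56) == depth
}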

// Process injects the received data for a requested item. Note it can
// happen that a single response commits two pending requests (e.g.
// there are two requests, one for code and one for a node, but the
// hash is the same). In this case the second response for the same
// hash will be treated as a "non-requested" or "already-processed"
// item, but there is no downside.
func (s *Sync) Process(result SyncResult) error {
	// If the item was not requested either for code or node, bail out
	if s.nodeReqs[result.Hash] == nil && s.codeReqs[result.Hash] == nil {
		return ErrNotRequested
	}
	// If there is a pending code request for this data, commit directly
	var filled bool
	if req := s.codeReqs[result.Hash]; req != nil && req.data == nil {
		filled = true
		req.data = result.Data
		s.commit(req)
	}
	// If there is a pending node request for this data, fill it.
	if req := s.nodeReqs[result.Hash]; req != nil && req.data == nil {
		filled = true
		// Decode the node data content and update the request
		node, err := decodeNode(result.Hash[:], result.Data)
		if err != nil {
			return err
		}
		req.data = result.Data

		// Create and schedule a request for all the children nodes
		requests, err := s.children(req, node)
		if err != nil {
			return err
		}
		if len(requests) == 0 && req.deps == 0 {
			s.commit(req)
		} else {
			req.deps += len(requests)
			for _, child := range requests {
				s.schedule(child)
			}
		}
	}
	if !filled {
		return ErrAlreadyProcessed
	}
	return nil
}

// Commit flushes the data stored in the internal membatch out to persistent
// storage, returning any error that occurred.
func (s *Sync) Commit(dbw ethdb.Batch) error {
	// Dump the membatch into a database dbw
	for key, value := range s.membatch.nodes {
		rawdb.WriteTrieNode(dbw, key, value)
		s.bloom.Add(key[:])
	}
	for key, value := range s.membatch.codes {
		rawdb.WriteCode(dbw, key, value)
		s.bloom.Add(key[:])
	}
|
eth/downloader: separate state sync from queue (#14460)
* eth/downloader: separate state sync from queue
Scheduling of state node downloads hogged the downloader queue lock when
new requests were scheduled. This caused timeouts for other requests.
With this change, state sync is fully independent of all other downloads
and doesn't involve the queue at all.
State sync is started and checked on in processContent. This is slightly
awkward because processContent doesn't have a select loop. Instead, the
queue is closed by an auxiliary goroutine when state sync fails. We
tried several alternatives to this but settled on the current approach
because it's the least amount of change overall.
Handling of the pivot block has changed slightly: the queue previously
prevented import of pivot block receipts before the state of the pivot
block was available. In this commit, the receipt will be imported before
the state. This causes an annoyance where the pivot block is committed
as fast block head even when state downloads fail. Stay tuned for more
updates in this area ;)
* eth/downloader: remove cancelTimeout channel
* eth/downloader: retry state requests on timeout
* eth/downloader: improve comment
* eth/downloader: mark peers idle when state sync is done
* eth/downloader: move pivot block splitting to processContent
This change also ensures that pivot block receipts aren't imported
before the pivot block itself.
* eth/downloader: limit state node retries
* eth/downloader: improve state node error handling and retry check
* eth/downloader: remove maxStateNodeRetries
It fails the sync too much.
* eth/downloader: remove last use of cancelCh in statesync.go
Fixes TestDeliverHeadersHang*Fast and (hopefully)
the weird cancellation behaviour at the end of fast sync.
* eth/downloader: fix leak in runStateSync
* eth/downloader: don't run processFullSyncContent in LightSync mode
* eth/downloader: improve comments
* eth/downloader: fix vet, megacheck
* eth/downloader: remove unrequested tasks anyway
* eth/downloader, trie: various polishes around duplicate items
This commit explicitly tracks duplicate and unexpected state
delieveries done against a trie Sync structure, also adding
there to import info logs.
The commit moves the db batch used to commit trie changes one
level deeper so its flushed after every node insertion. This
is needed to avoid a lot of duplicate retrievals caused by
inconsistencies between Sync internals and database. A better
approach is to track not-yet-written states in trie.Sync and
flush on commit, but I'm focuing on correctness first now.
The commit fixes a regression around pivot block fail count.
The counter previously was reset to 1 if and only if a sync
cycle progressed (inserted at least 1 entry to the database).
The current code reset it already if a node was delivered,
which is not stong enough, because unless it ends up written
to disk, an attacker can just loop and attack ad infinitum.
The commit also fixes a regression around state deliveries
and timeouts. The old downloader tracked if a delivery is
stale (none of the deliveries were requestedt), in which
case it didn't mark the node idle and did not send further
requests, since it signals a past timeout. The current code
did mark it idle even on stale deliveries, which eventually
caused two requests to be in flight at the same time, making
the deliveries always stale and mass duplicating retrievals
between multiple peers.
* eth/downloader: fix state request leak
This commit fixes the hang seen sometimes while doing the state
sync. The cause of the hang was a rare combination of events:
request state data from peer, peer drops and reconnects almost
immediately. This caused a new download task to be assigned to
the peer, overwriting the old one still waiting for a timeout,
which in turned leaked the requests out, never to be retried.
The fix is to ensure that a task assignment moves any pending
one back into the retry queue.
The commit also fixes a regression with peer dropping due to
stalls. The current code considered a peer stalling if they
timed out delivering 1 item. However, the downloader never
requests only one, the minimum is 2 (attempt to fine tune
estimated latency/bandwidth). The fix is simply to drop if
a timeout is detected at 2 items.
Apart from the above bugfixes, the commit contains some code
polishes I made while debugging the hang.
* core, eth, trie: support batched trie sync db writes
* trie: rename SyncMemCache to syncMemBatch
2017-06-22 07:26:03 -05:00
|
|
|
}
|
|
|
|
// Drop the membatch data and return
|
|
|
|
s.membatch = newSyncMemBatch()
|
2019-10-28 12:50:11 -05:00
|
|
|
return nil
|
eth/downloader: separate state sync from queue (#14460)
* eth/downloader: separate state sync from queue
Scheduling of state node downloads hogged the downloader queue lock when
new requests were scheduled. This caused timeouts for other requests.
With this change, state sync is fully independent of all other downloads
and doesn't involve the queue at all.
State sync is started and checked on in processContent. This is slightly
awkward because processContent doesn't have a select loop. Instead, the
queue is closed by an auxiliary goroutine when state sync fails. We
tried several alternatives to this but settled on the current approach
because it's the least amount of change overall.
Handling of the pivot block has changed slightly: the queue previously
prevented import of pivot block receipts before the state of the pivot
block was available. In this commit, the receipt will be imported before
the state. This causes an annoyance where the pivot block is committed
as fast block head even when state downloads fail. Stay tuned for more
updates in this area ;)
* eth/downloader: remove cancelTimeout channel
* eth/downloader: retry state requests on timeout
* eth/downloader: improve comment
* eth/downloader: mark peers idle when state sync is done
* eth/downloader: move pivot block splitting to processContent
This change also ensures that pivot block receipts aren't imported
before the pivot block itself.
* eth/downloader: limit state node retries
* eth/downloader: improve state node error handling and retry check
* eth/downloader: remove maxStateNodeRetries
It fails the sync too much.
* eth/downloader: remove last use of cancelCh in statesync.go
Fixes TestDeliverHeadersHang*Fast and (hopefully)
the weird cancellation behaviour at the end of fast sync.
* eth/downloader: fix leak in runStateSync
* eth/downloader: don't run processFullSyncContent in LightSync mode
* eth/downloader: improve comments
* eth/downloader: fix vet, megacheck
* eth/downloader: remove unrequested tasks anyway
* eth/downloader, trie: various polishes around duplicate items
This commit explicitly tracks duplicate and unexpected state
delieveries done against a trie Sync structure, also adding
there to import info logs.
The commit moves the db batch used to commit trie changes one
level deeper so its flushed after every node insertion. This
is needed to avoid a lot of duplicate retrievals caused by
inconsistencies between Sync internals and database. A better
approach is to track not-yet-written states in trie.Sync and
flush on commit, but I'm focuing on correctness first now.
The commit fixes a regression around pivot block fail count.
The counter previously was reset to 1 if and only if a sync
cycle progressed (inserted at least 1 entry to the database).
The current code reset it already if a node was delivered,
which is not stong enough, because unless it ends up written
to disk, an attacker can just loop and attack ad infinitum.
The commit also fixes a regression around state deliveries
and timeouts. The old downloader tracked if a delivery is
stale (none of the deliveries were requestedt), in which
case it didn't mark the node idle and did not send further
requests, since it signals a past timeout. The current code
did mark it idle even on stale deliveries, which eventually
caused two requests to be in flight at the same time, making
the deliveries always stale and mass duplicating retrievals
between multiple peers.
* eth/downloader: fix state request leak
This commit fixes the hang seen sometimes while doing the state
sync. The cause of the hang was a rare combination of events:
request state data from peer, peer drops and reconnects almost
immediately. This caused a new download task to be assigned to
the peer, overwriting the old one still waiting for a timeout,
which in turned leaked the requests out, never to be retried.
The fix is to ensure that a task assignment moves any pending
one back into the retry queue.
The commit also fixes a regression with peer dropping due to
stalls. The current code considered a peer stalling if they
timed out delivering 1 item. However, the downloader never
requests only one, the minimum is 2 (attempt to fine tune
estimated latency/bandwidth). The fix is simply to drop if
a timeout is detected at 2 items.
Apart from the above bugfixes, the commit contains some code
polishes I made while debugging the hang.
* core, eth, trie: support batched trie sync db writes
* trie: rename SyncMemCache to syncMemBatch
2017-06-22 07:26:03 -05:00
|
|
|
}
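
// Example usage (a hedged sketch; the names db and sched are assumptions, not
// identifiers from this package): a caller holding an ethdb.Database would
// typically drain the membatch through a write batch of its own making and
// flush it afterwards:
//
//	batch := db.NewBatch()
//	if err := sched.Commit(batch); err != nil {
//		return err
//	}
//	if err := batch.Write(); err != nil {
//		return err
//	}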

// Pending returns the number of state entries currently pending for download.
func (s *Sync) Pending() int {
	return len(s.nodeReqs) + len(s.codeReqs)
}

// schedule inserts a new state retrieval request into the fetch queue. If there
// is already a pending request for this node, the new request will be discarded
// and only a parent reference added to the old one.
func (s *Sync) schedule(req *request) {
	var reqset = s.nodeReqs
	if req.code {
		reqset = s.codeReqs
	}
	// If we're already requesting this node, add a new reference and stop
	if old, ok := reqset[req.hash]; ok {
		old.parents = append(old.parents, req.parents...)
		return
	}
	reqset[req.hash] = req

	// Schedule the request for future retrieval. This queue is shared
	// by both node requests and code requests. It can happen that a trie
	// node and a code blob share the same hash; in that case two elements
	// with the same hash (and the same or different depth) will be pushed,
	// but that's fine: in the worst case the second response is treated as
	// a duplicate.
	prio := int64(len(req.path)) << 56 // depth >= 128 will never happen, storage leaves will be included in their parents
	for i := 0; i < 14 && i < len(req.path); i++ {
		prio |= int64(15-req.path[i]) << (52 - i*4) // 15-nibble => lexicographic order
	}
	s.queue.Push(req.hash, prio)
}
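
// Worked example of the priority encoding used in schedule (illustrative
// arithmetic only): a request whose path holds the two nibbles 0x3 and 0x7
// has depth 2 in the top byte and the inverted nibbles in the next 4-bit
// slots:
//
//	prio := int64(2) << 56      // len(req.path) == 2
//	prio |= int64(15-0x3) << 52 // i == 0: 12
//	prio |= int64(15-0x7) << 48 // i == 1: 8
//
// Assuming the prque pops the largest priority first, deeper entries are
// retrieved before shallower ones, and among entries of equal depth the
// lexicographically smaller path comes out first.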

// children retrieves all the missing children of a state trie entry for future
// retrieval scheduling.
func (s *Sync) children(req *request, object node) ([]*request, error) {
	// Gather all the children of the node, irrespective of whether they are known or not
	type child struct {
		path []byte
		node node
	}
	var children []child

	switch node := (object).(type) {
	case *shortNode:
		key := node.Key
		if hasTerm(key) {
			key = key[:len(key)-1]
		}
		children = []child{{
			node: node.Val,
			path: append(append([]byte(nil), req.path...), key...),
		}}
	case *fullNode:
		for i := 0; i < 17; i++ {
			if node.Children[i] != nil {
				children = append(children, child{
					node: node.Children[i],
					path: append(append([]byte(nil), req.path...), byte(i)),
				})
			}
		}
	default:
		panic(fmt.Sprintf("unknown node: %+v", node))
	}
	// Iterate over the children, and request all unknown ones
	requests := make([]*request, 0, len(children))
	for _, child := range children {
		// Notify any external watcher of a new key/value node
		if req.callback != nil {
			if node, ok := (child.node).(valueNode); ok {
				if err := req.callback(child.path, node, req.hash); err != nil {
					return nil, err
				}
			}
		}
		// If the child references another node, resolve or schedule
		if node, ok := (child.node).(hashNode); ok {
			// Try to resolve the node from the local database
			hash := common.BytesToHash(node)
			if s.membatch.hasNode(hash) {
				continue
			}
			if s.bloom == nil || s.bloom.Contains(node) {
				// Bloom filter says this might be a duplicate, double check.
				// If database says yes, then at least the trie node is present
				// and we hold the assumption that it's NOT legacy contract code.
				if blob := rawdb.ReadTrieNode(s.database, hash); len(blob) > 0 {
					continue
				}
				// False positive, bump fault meter
				bloomFaultMeter.Mark(1)
			}
			// Locally unknown node, schedule for retrieval
			requests = append(requests, &request{
				path:     child.path,
				hash:     hash,
				parents:  []*request{req},
				callback: req.callback,
			})
		}
	}
	return requests, nil
}
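
// Illustrative sketch of a leaf callback as invoked above for valueNode
// children. A typical consumer (the account state syncer) decodes each leaf
// and schedules the referenced storage root and code hash; the struct layout
// and the rlp/big packages below are assumptions about such a caller, not
// part of this file:
//
//	callback := func(path []byte, leaf []byte, parent common.Hash) error {
//		var acc struct {
//			Nonce    uint64
//			Balance  *big.Int
//			Root     common.Hash
//			CodeHash []byte
//		}
//		if err := rlp.DecodeBytes(leaf, &acc); err != nil {
//			return err
//		}
//		// here the caller would schedule acc.Root and acc.CodeHash for download
//		return nil
//	}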

// commit finalizes a retrieval request and stores it into the membatch. If any
// of the referencing parent requests complete due to this commit, they are also
// committed themselves.
func (s *Sync) commit(req *request) (err error) {
	// Write the node content to the membatch
	if req.code {
		s.membatch.codes[req.hash] = req.data
		delete(s.codeReqs, req.hash)
		s.fetches[len(req.path)]--
	} else {
		s.membatch.nodes[req.hash] = req.data
		delete(s.nodeReqs, req.hash)
		s.fetches[len(req.path)]--
	}
	// Check all parents for completion
	for _, parent := range req.parents {
		parent.deps--
		if parent.deps == 0 {
			if err := s.commit(parent); err != nil {
				return err
			}
		}
	}
	return nil
}
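
// A concrete trace of the dependency bookkeeping above (hedged illustration):
// a fullNode scheduled with two missing children starts with deps == 2 and
// stays out of the membatch while either child is still in flight. Committing
// the first child drops deps to 1; committing the second drops it to 0, at
// which point the parent itself is committed recursively, possibly completing
// its own parents all the way up to the root request.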