// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package node

import (
	"context"
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/internal/debug"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/p2p"
	"github.com/ethereum/go-ethereum/p2p/discover"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/rpc"
)

// apis returns the collection of built-in RPC APIs.
func (n *Node) apis() []rpc.API {
	return []rpc.API{
		{
			Namespace: "admin",
			Service:   &adminAPI{n},
		}, {
			Namespace: "debug",
			Service:   debug.Handler,
		}, {
			Namespace: "debug",
			Service:   &p2pDebugAPI{n},
		}, {
			Namespace: "web3",
			Service:   &web3API{n},
		},
	}
}

// adminAPI is the collection of administrative API methods exposed over
// both secure and insecure RPC channels.
type adminAPI struct {
	node *Node // Node interfaced by this API
}

// AddPeer requests connecting to a remote node, and also maintaining the new
// connection at all times, even reconnecting if it is lost.
func (api *adminAPI) AddPeer(url string) (bool, error) {
	// Make sure the server is running, fail otherwise
	server := api.node.Server()
	if server == nil {
		return false, ErrNodeStopped
	}
	// Try to add the url as a static peer and return
	node, err := enode.Parse(enode.ValidSchemes, url)
	if err != nil {
		return false, fmt.Errorf("invalid enode: %v", err)
	}
	server.AddPeer(node)
	return true, nil
}
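The `url` argument to AddPeer (and the other peer-management methods below) is an enode URL, which `enode.Parse` validates. As a rough illustration of that URL's shape, here is a stdlib-only sketch — `parseEnodeShape` is a name introduced here for illustration, not a geth API, and it skips the cryptographic node-ID validation the real parser performs:

```go
package main

import (
	"fmt"
	"net/url"
)

// parseEnodeShape is an illustrative stand-in for enode.Parse: it checks
// only the URL shape (scheme, node-ID userinfo, host, port), not the
// validity of the node ID itself.
func parseEnodeShape(raw string) (id, host, port string, err error) {
	u, err := url.Parse(raw)
	if err != nil || u.Scheme != "enode" || u.User == nil {
		return "", "", "", fmt.Errorf("invalid enode: %s", raw)
	}
	return u.User.Username(), u.Hostname(), u.Port(), nil
}

func main() {
	// The node ID below is the example from the geth documentation:
	// a 64-byte secp256k1 public key, hex encoded (128 characters).
	raw := "enode://a979fb575495b8d6db44f750317d0f4622bf4c2aa3365d6af7c284339968eef29b69ad0dce72a4d8db5ebb4968de0e3bec910127f134779fbcb0cb6d3331163c@10.3.58.6:30303"
	id, host, port, _ := parseEnodeShape(raw)
	fmt.Println(len(id), host, port) // 128 10.3.58.6 30303
}
```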

// RemovePeer disconnects from a remote node if the connection exists.
func (api *adminAPI) RemovePeer(url string) (bool, error) {
	// Make sure the server is running, fail otherwise
	server := api.node.Server()
	if server == nil {
		return false, ErrNodeStopped
	}
	// Try to remove the url as a static peer and return
	node, err := enode.Parse(enode.ValidSchemes, url)
	if err != nil {
		return false, fmt.Errorf("invalid enode: %v", err)
	}
	server.RemovePeer(node)
	return true, nil
}

// AddTrustedPeer allows a remote node to always connect, even if slots are full.
func (api *adminAPI) AddTrustedPeer(url string) (bool, error) {
	// Make sure the server is running, fail otherwise
	server := api.node.Server()
	if server == nil {
		return false, ErrNodeStopped
	}
	node, err := enode.Parse(enode.ValidSchemes, url)
	if err != nil {
		return false, fmt.Errorf("invalid enode: %v", err)
	}
	server.AddTrustedPeer(node)
	return true, nil
}

// RemoveTrustedPeer removes a remote node from the trusted peer set, but it
// does not disconnect it automatically.
func (api *adminAPI) RemoveTrustedPeer(url string) (bool, error) {
	// Make sure the server is running, fail otherwise
	server := api.node.Server()
	if server == nil {
		return false, ErrNodeStopped
	}
	node, err := enode.Parse(enode.ValidSchemes, url)
	if err != nil {
		return false, fmt.Errorf("invalid enode: %v", err)
	}
	server.RemoveTrustedPeer(node)
	return true, nil
}

// PeerEvents creates an RPC subscription which receives peer events from the
// node's p2p.Server.
func (api *adminAPI) PeerEvents(ctx context.Context) (*rpc.Subscription, error) {
	// Make sure the server is running, fail otherwise
	server := api.node.Server()
	if server == nil {
		return nil, ErrNodeStopped
	}

	// Create the subscription
	notifier, supported := rpc.NotifierFromContext(ctx)
	if !supported {
		return nil, rpc.ErrNotificationsUnsupported
	}
	rpcSub := notifier.CreateSubscription()

	go func() {
		events := make(chan *p2p.PeerEvent)
		sub := server.SubscribeEvents(events)
		defer sub.Unsubscribe()

		for {
			select {
			case event := <-events:
				notifier.Notify(rpcSub.ID, event)
			case <-sub.Err():
				return
			case <-rpcSub.Err():
				return
			}
		}
	}()

	return rpcSub, nil
}
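The goroutine above is a standard forwarding loop: pump events from the subscription channel to the notifier until either error channel fires. A self-contained sketch of that pattern, with string events standing in for `*p2p.PeerEvent` (the names `forward`, `events`, and `errc` are illustrative, not geth API):

```go
package main

import "fmt"

// forward delivers events to notify until errc fires, mirroring the
// select loop inside PeerEvents (where errc corresponds to sub.Err()
// or rpcSub.Err()).
func forward(events <-chan string, errc <-chan error, notify func(string)) {
	for {
		select {
		case ev := <-events:
			notify(ev)
		case <-errc:
			return
		}
	}
}

func main() {
	events := make(chan string) // unbuffered: a send completes only once received
	errc := make(chan error)
	out := make(chan string, 2)
	done := make(chan struct{})
	go func() {
		forward(events, errc, func(ev string) { out <- ev })
		close(done)
	}()
	events <- "peer add"
	events <- "peer drop"
	close(errc) // analogous to the subscription erroring out
	<-done
	fmt.Println(<-out, <-out) // peer add peer drop
}
```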

// StartHTTP starts the HTTP RPC API server.
func (api *adminAPI) StartHTTP(host *string, port *int, cors *string, apis *string, vhosts *string) (bool, error) {
	api.node.lock.Lock()
	defer api.node.lock.Unlock()

	// Determine host and port.
	if host == nil {
		h := DefaultHTTPHost
		if api.node.config.HTTPHost != "" {
			h = api.node.config.HTTPHost
		}
		host = &h
	}
	if port == nil {
		port = &api.node.config.HTTPPort
	}

	// Determine config.
	config := httpConfig{
		CorsAllowedOrigins: api.node.config.HTTPCors,
		Vhosts:             api.node.config.HTTPVirtualHosts,
		Modules:            api.node.config.HTTPModules,
		rpcEndpointConfig: rpcEndpointConfig{
			batchItemLimit:         api.node.config.BatchRequestLimit,
			batchResponseSizeLimit: api.node.config.BatchResponseMaxSize,
		},
	}
	if cors != nil {
		config.CorsAllowedOrigins = nil
		for _, origin := range strings.Split(*cors, ",") {
			config.CorsAllowedOrigins = append(config.CorsAllowedOrigins, strings.TrimSpace(origin))
		}
	}
	if vhosts != nil {
		config.Vhosts = nil
		for _, vhost := range strings.Split(*vhosts, ",") {
			config.Vhosts = append(config.Vhosts, strings.TrimSpace(vhost))
		}
	}
	if apis != nil {
		config.Modules = nil
		for _, m := range strings.Split(*apis, ",") {
			config.Modules = append(config.Modules, strings.TrimSpace(m))
		}
	}

	if err := api.node.http.setListenAddr(*host, *port); err != nil {
		return false, err
	}
	if err := api.node.http.enableRPC(api.node.rpcAPIs, config); err != nil {
		return false, err
	}
	if err := api.node.http.start(); err != nil {
		return false, err
	}
	return true, nil
}
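The three override parameters in StartHTTP follow one pattern: a non-nil pointer replaces the configured list with a comma-separated, whitespace-trimmed override. A tiny stand-alone sketch of that split-and-trim step (`splitAndTrim` is a name introduced here for illustration, not a geth helper):

```go
package main

import (
	"fmt"
	"strings"
)

// splitAndTrim turns an override string like "eth, net ,web3" into a
// clean list: split on commas, then trim surrounding whitespace from
// each entry, as the cors/vhosts/apis loops in StartHTTP do.
func splitAndTrim(s string) []string {
	var out []string
	for _, part := range strings.Split(s, ",") {
		out = append(out, strings.TrimSpace(part))
	}
	return out
}

func main() {
	fmt.Println(splitAndTrim("eth, net ,web3")) // [eth net web3]
}
```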

// StartRPC starts the HTTP RPC API server.
//
// Deprecated: use StartHTTP instead.
func (api *adminAPI) StartRPC(host *string, port *int, cors *string, apis *string, vhosts *string) (bool, error) {
	log.Warn("Deprecation warning", "method", "admin.StartRPC", "use-instead", "admin.StartHTTP")
	return api.StartHTTP(host, port, cors, apis, vhosts)
}

// StopHTTP shuts down the HTTP server.
func (api *adminAPI) StopHTTP() (bool, error) {
	api.node.http.stop()
	return true, nil
}

// StopRPC shuts down the HTTP server.
//
// Deprecated: use StopHTTP instead.
func (api *adminAPI) StopRPC() (bool, error) {
	log.Warn("Deprecation warning", "method", "admin.StopRPC", "use-instead", "admin.StopHTTP")
	return api.StopHTTP()
}

// StartWS starts the websocket RPC API server.
func (api *adminAPI) StartWS(host *string, port *int, allowedOrigins *string, apis *string) (bool, error) {
	api.node.lock.Lock()
	defer api.node.lock.Unlock()

	// Determine host and port.
	if host == nil {
		h := DefaultWSHost
		if api.node.config.WSHost != "" {
			h = api.node.config.WSHost
		}
		host = &h
	}
	if port == nil {
		port = &api.node.config.WSPort
	}

	// Determine config.
	config := wsConfig{
		Modules: api.node.config.WSModules,
		Origins: api.node.config.WSOrigins,
		// ExposeAll: api.node.config.WSExposeAll,
		rpcEndpointConfig: rpcEndpointConfig{
			batchItemLimit:         api.node.config.BatchRequestLimit,
			batchResponseSizeLimit: api.node.config.BatchResponseMaxSize,
		},
	}
	if apis != nil {
		config.Modules = nil
		for _, m := range strings.Split(*apis, ",") {
			config.Modules = append(config.Modules, strings.TrimSpace(m))
		}
	}
	if allowedOrigins != nil {
		config.Origins = nil
		for _, origin := range strings.Split(*allowedOrigins, ",") {
			config.Origins = append(config.Origins, strings.TrimSpace(origin))
		}
	}

	// Enable WebSocket on the server.
	server := api.node.wsServerForPort(*port, false)
	if err := server.setListenAddr(*host, *port); err != nil {
		return false, err
	}
	openApis, _ := api.node.getAPIs()
	if err := server.enableWS(openApis, config); err != nil {
		return false, err
	}
	if err := server.start(); err != nil {
		return false, err
	}
	api.node.http.log.Info("WebSocket endpoint opened", "url", api.node.WSEndpoint())
	return true, nil
}

// StopWS terminates all WebSocket servers.
func (api *adminAPI) StopWS() (bool, error) {
	api.node.http.stopWS()
	api.node.ws.stop()
	return true, nil
}

// Peers retrieves all the information we know about each individual peer at the
// protocol granularity.
func (api *adminAPI) Peers() ([]*p2p.PeerInfo, error) {
	server := api.node.Server()
	if server == nil {
		return nil, ErrNodeStopped
	}
	return server.PeersInfo(), nil
}

// NodeInfo retrieves all the information we know about the host node at the
// protocol granularity.
func (api *adminAPI) NodeInfo() (*p2p.NodeInfo, error) {
	server := api.node.Server()
	if server == nil {
		return nil, ErrNodeStopped
	}
	return server.NodeInfo(), nil
}

// Datadir retrieves the current data directory the node is using.
func (api *adminAPI) Datadir() string {
	return api.node.DataDir()
}

// web3API offers helper utils
type web3API struct {
	stack *Node
}

// ClientVersion returns the node name
func (s *web3API) ClientVersion() string {
	return s.stack.Server().Name
}

// Sha3 applies the ethereum sha3 implementation on the input.
// It assumes the input is hex encoded.
func (s *web3API) Sha3(input hexutil.Bytes) hexutil.Bytes {
	return crypto.Keccak256(input)
}
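Callers of `web3_sha3` pass the input as a 0x-prefixed hex string; `hexutil.Bytes` performs the decoding before `Keccak256` runs. A stdlib-only sketch of that decoding step (`decodeHexInput` is a name introduced here for illustration; the real `hexutil` decoder is stricter, e.g. it requires the 0x prefix and an even number of digits):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeHexInput strips an optional 0x prefix and decodes the remaining
// hex digits into raw bytes, roughly what hexutil.Bytes does on the way
// into Sha3.
func decodeHexInput(s string) ([]byte, error) {
	return hex.DecodeString(strings.TrimPrefix(s, "0x"))
}

func main() {
	b, err := decodeHexInput("0x68656c6c6f")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", b) // hello
}
```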

// p2pDebugAPI provides access to p2p internals for debugging.
type p2pDebugAPI struct {
	stack *Node
}

func (s *p2pDebugAPI) DiscoveryV4Table() [][]discover.BucketNode {
	disc := s.stack.server.DiscoveryV4()
	if disc != nil {
		return disc.TableBuckets()
	}
	return nil
}