// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package simulations

import (
	"bufio"
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"html"
	"io"
	"net/http"
	"strconv"
	"strings"
	"sync"

	"github.com/ethereum/go-ethereum/event"
	"github.com/ethereum/go-ethereum/p2p"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/simulations/adapters"
	"github.com/ethereum/go-ethereum/rpc"
	"github.com/gorilla/websocket"
	"github.com/julienschmidt/httprouter"
)

// DefaultClient is the default simulation API client which expects the API
// to be running at http://localhost:8888
var DefaultClient = NewClient("http://localhost:8888")

// Client is a client for the simulation HTTP API which supports creating
// and managing simulation networks
type Client struct {
	URL string

	client *http.Client
}

// NewClient returns a new simulation API client
func NewClient(url string) *Client {
	return &Client{
		URL:    url,
		client: http.DefaultClient,
	}
}

// GetNetwork returns details of the network
func (c *Client) GetNetwork() (*Network, error) {
	network := &Network{}
	return network, c.Get("/", network)
}

// StartNetwork starts all existing nodes in the simulation network
func (c *Client) StartNetwork() error {
	return c.Post("/start", nil, nil)
}

// StopNetwork stops all existing nodes in the simulation network
func (c *Client) StopNetwork() error {
	return c.Post("/stop", nil, nil)
}

// CreateSnapshot creates a network snapshot
func (c *Client) CreateSnapshot() (*Snapshot, error) {
	snap := &Snapshot{}
	return snap, c.Get("/snapshot", snap)
}

// LoadSnapshot loads a snapshot into the network
func (c *Client) LoadSnapshot(snap *Snapshot) error {
	return c.Post("/snapshot", snap, nil)
}

// SubscribeOpts is a collection of options to use when subscribing to network
// events
type SubscribeOpts struct {
	// Current instructs the server to send events for existing nodes and
	// connections first
	Current bool

	// Filter instructs the server to only send a subset of message events
	Filter string
}

// SubscribeNetwork subscribes to network events which are sent from the server
// as a server-sent-events stream, optionally receiving events for existing
// nodes and connections and filtering message events
func (c *Client) SubscribeNetwork(events chan *Event, opts SubscribeOpts) (event.Subscription, error) {
	url := fmt.Sprintf("%s/events?current=%t&filter=%s", c.URL, opts.Current, opts.Filter)
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "text/event-stream")
	res, err := c.client.Do(req)
	if err != nil {
		return nil, err
	}
	if res.StatusCode != http.StatusOK {
		response, _ := io.ReadAll(res.Body)
		res.Body.Close()
		return nil, fmt.Errorf("unexpected HTTP status: %s: %s", res.Status, response)
	}

	// define a producer function to pass to event.Subscription
	// which reads server-sent events from res.Body and sends
	// them to the events channel
	producer := func(stop <-chan struct{}) error {
		defer res.Body.Close()

		// read lines from res.Body in a goroutine so that we are
		// always reading from the stop channel
		lines := make(chan string)
		errC := make(chan error, 1)
		go func() {
			s := bufio.NewScanner(res.Body)
			for s.Scan() {
				select {
				case lines <- s.Text():
				case <-stop:
					return
				}
			}
			errC <- s.Err()
		}()

		// detect any lines which start with "data:", decode the data
		// into an event and send it to the events channel
		for {
			select {
			case line := <-lines:
				if !strings.HasPrefix(line, "data:") {
					continue
				}
				data := strings.TrimSpace(strings.TrimPrefix(line, "data:"))
				event := &Event{}
				if err := json.Unmarshal([]byte(data), event); err != nil {
					return fmt.Errorf("error decoding SSE event: %s", err)
				}
				select {
				case events <- event:
				case <-stop:
					return nil
				}
			case err := <-errC:
				return err
			case <-stop:
				return nil
			}
		}
	}

	return event.NewSubscription(producer), nil
}
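
// The sketch below is illustrative only and not part of the original file: it
// shows one way a caller might consume the stream returned by
// SubscribeNetwork, assuming a simulation API server is reachable at the
// DefaultClient URL.
func exampleSubscribeNetwork() error {
	events := make(chan *Event)
	sub, err := DefaultClient.SubscribeNetwork(events, SubscribeOpts{Current: true})
	if err != nil {
		return err
	}
	defer sub.Unsubscribe()

	for {
		select {
		case event := <-events:
			// handle the event (here we just print it)
			fmt.Printf("network event: %v\n", event)
		case err := <-sub.Err():
			return err
		}
	}
}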

// GetNodes returns all nodes which exist in the network
func (c *Client) GetNodes() ([]*p2p.NodeInfo, error) {
	var nodes []*p2p.NodeInfo
	return nodes, c.Get("/nodes", &nodes)
}

// CreateNode creates a node in the network using the given configuration
func (c *Client) CreateNode(config *adapters.NodeConfig) (*p2p.NodeInfo, error) {
	node := &p2p.NodeInfo{}
	return node, c.Post("/nodes", config, node)
}

// GetNode returns details of a node
func (c *Client) GetNode(nodeID string) (*p2p.NodeInfo, error) {
	node := &p2p.NodeInfo{}
	return node, c.Get(fmt.Sprintf("/nodes/%s", nodeID), node)
}

// StartNode starts a node
func (c *Client) StartNode(nodeID string) error {
	return c.Post(fmt.Sprintf("/nodes/%s/start", nodeID), nil, nil)
}

// StopNode stops a node
func (c *Client) StopNode(nodeID string) error {
	return c.Post(fmt.Sprintf("/nodes/%s/stop", nodeID), nil, nil)
}

// ConnectNode connects a node to a peer node
func (c *Client) ConnectNode(nodeID, peerID string) error {
	return c.Post(fmt.Sprintf("/nodes/%s/conn/%s", nodeID, peerID), nil, nil)
}

// DisconnectNode disconnects a node from a peer node
func (c *Client) DisconnectNode(nodeID, peerID string) error {
	return c.Delete(fmt.Sprintf("/nodes/%s/conn/%s", nodeID, peerID))
}

// RPCClient returns an RPC client connected to a node
func (c *Client) RPCClient(ctx context.Context, nodeID string) (*rpc.Client, error) {
	baseURL := strings.Replace(c.URL, "http", "ws", 1)
	return rpc.DialWebsocket(ctx, fmt.Sprintf("%s/nodes/%s/rpc", baseURL, nodeID), "")
}
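
// Illustrative sketch, not part of the original file: a typical client
// workflow combining the node methods above. It assumes a simulation API
// server is already running at the client's URL and uses
// adapters.RandomNodeConfig to produce throwaway node configurations.
func exampleClientWorkflow(client *Client) error {
	// create two nodes with random configurations
	node1, err := client.CreateNode(adapters.RandomNodeConfig())
	if err != nil {
		return err
	}
	node2, err := client.CreateNode(adapters.RandomNodeConfig())
	if err != nil {
		return err
	}

	// start both nodes and connect them
	for _, id := range []string{node1.ID, node2.ID} {
		if err := client.StartNode(id); err != nil {
			return err
		}
	}
	if err := client.ConnectNode(node1.ID, node2.ID); err != nil {
		return err
	}

	// open a WebSocket RPC connection to the first node
	rpcClient, err := client.RPCClient(context.Background(), node1.ID)
	if err != nil {
		return err
	}
	defer rpcClient.Close()
	return nil
}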

// Get performs an HTTP GET request, decoding the resulting JSON response
// into "out"
func (c *Client) Get(path string, out interface{}) error {
	return c.Send("GET", path, nil, out)
}

// Post performs an HTTP POST request, sending "in" as the JSON body and
// decoding the resulting JSON response into "out"
func (c *Client) Post(path string, in, out interface{}) error {
	return c.Send("POST", path, in, out)
}

// Delete performs an HTTP DELETE request
func (c *Client) Delete(path string) error {
	return c.Send("DELETE", path, nil, nil)
}

// Send performs an HTTP request, sending "in" as the JSON request body and
// decoding the JSON response into "out"
func (c *Client) Send(method, path string, in, out interface{}) error {
	var body []byte
	if in != nil {
		var err error
		body, err = json.Marshal(in)
		if err != nil {
			return err
		}
	}
	req, err := http.NewRequest(method, c.URL+path, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/json")
	res, err := c.client.Do(req)
	if err != nil {
		return err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusCreated {
		response, _ := io.ReadAll(res.Body)
		return fmt.Errorf("unexpected HTTP status: %s: %s", res.Status, response)
	}
	if out != nil {
		if err := json.NewDecoder(res.Body).Decode(out); err != nil {
			return err
		}
	}
	return nil
}

// Server is an HTTP server providing an API to manage a simulation network
type Server struct {
	router     *httprouter.Router
	network    *Network
	mockerStop chan struct{} // when set, stops the current mocker
	mockerMtx  sync.Mutex    // synchronises access to the mockerStop field
}

// NewServer returns a new simulation API server
func NewServer(network *Network) *Server {
	s := &Server{
		router:  httprouter.New(),
		network: network,
	}

	s.OPTIONS("/", s.Options)
	s.GET("/", s.GetNetwork)
	s.POST("/start", s.StartNetwork)
	s.POST("/stop", s.StopNetwork)
	s.POST("/mocker/start", s.StartMocker)
	s.POST("/mocker/stop", s.StopMocker)
	s.GET("/mocker", s.GetMockers)
	s.POST("/reset", s.ResetNetwork)
	s.GET("/events", s.StreamNetworkEvents)
	s.GET("/snapshot", s.CreateSnapshot)
	s.POST("/snapshot", s.LoadSnapshot)
	s.POST("/nodes", s.CreateNode)
	s.GET("/nodes", s.GetNodes)
	s.GET("/nodes/:nodeid", s.GetNode)
	s.POST("/nodes/:nodeid/start", s.StartNode)
	s.POST("/nodes/:nodeid/stop", s.StopNode)
	s.POST("/nodes/:nodeid/conn/:peerid", s.ConnectNode)
	s.DELETE("/nodes/:nodeid/conn/:peerid", s.DisconnectNode)
	s.GET("/nodes/:nodeid/rpc", s.NodeRPC)

	return s
}
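
// Illustrative sketch, not part of the original file: exposing the API over
// HTTP. Server implements http.Handler, so it can be passed directly to
// http.ListenAndServe (or mounted on any mux); the listen address below is
// arbitrary and matches the DefaultClient URL.
func exampleServeAPI(network *Network) error {
	srv := NewServer(network)
	return http.ListenAndServe("localhost:8888", srv)
}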

// GetNetwork returns details of the network
func (s *Server) GetNetwork(w http.ResponseWriter, req *http.Request) {
	s.JSON(w, http.StatusOK, s.network)
}

// StartNetwork starts all nodes in the network
func (s *Server) StartNetwork(w http.ResponseWriter, req *http.Request) {
	if err := s.network.StartAll(); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	w.WriteHeader(http.StatusOK)
}

// StopNetwork stops all nodes in the network
func (s *Server) StopNetwork(w http.ResponseWriter, req *http.Request) {
	if err := s.network.StopAll(); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	w.WriteHeader(http.StatusOK)
}

// StartMocker starts the mocker node simulation
func (s *Server) StartMocker(w http.ResponseWriter, req *http.Request) {
	s.mockerMtx.Lock()
	defer s.mockerMtx.Unlock()
	if s.mockerStop != nil {
		http.Error(w, "mocker already running", http.StatusInternalServerError)
		return
	}
	mockerType := req.FormValue("mocker-type")
	mockerFn := LookupMocker(mockerType)
	if mockerFn == nil {
		http.Error(w, fmt.Sprintf("unknown mocker type %q", html.EscapeString(mockerType)), http.StatusBadRequest)
		return
	}
	nodeCount, err := strconv.Atoi(req.FormValue("node-count"))
	if err != nil {
		http.Error(w, "invalid node-count provided", http.StatusBadRequest)
		return
	}
	s.mockerStop = make(chan struct{})
	go mockerFn(s.network, s.mockerStop, nodeCount)

	w.WriteHeader(http.StatusOK)
}

// StopMocker stops the mocker node simulation
func (s *Server) StopMocker(w http.ResponseWriter, req *http.Request) {
	s.mockerMtx.Lock()
	defer s.mockerMtx.Unlock()
	if s.mockerStop == nil {
		http.Error(w, "stop channel not initialized", http.StatusInternalServerError)
		return
	}
	close(s.mockerStop)
	s.mockerStop = nil

	w.WriteHeader(http.StatusOK)
}

// GetMockers returns a list of available mockers
func (s *Server) GetMockers(w http.ResponseWriter, req *http.Request) {
	list := GetMockerList()
	s.JSON(w, http.StatusOK, list)
}
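
// Illustrative sketch, not part of the original file: driving the mocker
// endpoints over plain HTTP, since Client has no dedicated helpers for them.
// The mocker type "boot" is a placeholder; the available types can be listed
// with a GET request to /mocker.
func exampleStartMocker(serverURL string) error {
	res, err := http.Post(serverURL+"/mocker/start",
		"application/x-www-form-urlencoded",
		strings.NewReader("mocker-type=boot&node-count=10"))
	if err != nil {
		return err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected HTTP status: %s", res.Status)
	}
	return nil
}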

// ResetNetwork resets all properties of a network to its initial (empty) state
func (s *Server) ResetNetwork(w http.ResponseWriter, req *http.Request) {
	s.network.Reset()

	w.WriteHeader(http.StatusOK)
}

// StreamNetworkEvents streams network events as a server-sent-events stream
func (s *Server) StreamNetworkEvents(w http.ResponseWriter, req *http.Request) {
	events := make(chan *Event)
	sub := s.network.events.Subscribe(events)
	defer sub.Unsubscribe()

	// write writes the given event and data to the stream like:
	//
	// event: <event>
	// data: <data>
	//
	write := func(event, data string) {
		fmt.Fprintf(w, "event: %s\n", event)
		fmt.Fprintf(w, "data: %s\n\n", data)
		if fw, ok := w.(http.Flusher); ok {
			fw.Flush()
		}
	}
	writeEvent := func(event *Event) error {
		data, err := json.Marshal(event)
		if err != nil {
			return err
		}
		write("network", string(data))
		return nil
	}
	writeErr := func(err error) {
		write("error", err.Error())
	}

	// check if filtering has been requested
	var filters MsgFilters
	if filterParam := req.URL.Query().Get("filter"); filterParam != "" {
		var err error
		filters, err = NewMsgFilters(filterParam)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
	}

	w.Header().Set("Content-Type", "text/event-stream; charset=utf-8")
	w.WriteHeader(http.StatusOK)
	fmt.Fprintf(w, "\n\n")
	if fw, ok := w.(http.Flusher); ok {
		fw.Flush()
	}

	// optionally send the existing nodes and connections
	if req.URL.Query().Get("current") == "true" {
		snap, err := s.network.Snapshot()
		if err != nil {
			writeErr(err)
			return
		}
		for _, node := range snap.Nodes {
			event := NewEvent(&node.Node)
			if err := writeEvent(event); err != nil {
				writeErr(err)
				return
			}
		}
		for _, conn := range snap.Conns {
			conn := conn
			event := NewEvent(&conn)
			if err := writeEvent(event); err != nil {
				writeErr(err)
				return
			}
		}
	}

	clientGone := req.Context().Done()
	for {
		select {
		case event := <-events:
			// only send message events which match the filters
			if event.Msg != nil && !filters.Match(event.Msg) {
				continue
			}
			if err := writeEvent(event); err != nil {
				writeErr(err)
				return
			}
		case <-clientGone:
			return
		}
	}
}

// NewMsgFilters constructs a collection of message filters from a URL query
// parameter.
//
// The parameter is expected to be a dash-separated list of individual filters,
// each having the format '<proto>:<codes>', where <proto> is the name of a
// protocol and <codes> is a comma-separated list of message codes.
//
// A message code of '*' or '-1' is considered a wildcard and matches any code.
func NewMsgFilters(filterParam string) (MsgFilters, error) {
	filters := make(MsgFilters)
	for _, filter := range strings.Split(filterParam, "-") {
		protoCodes := strings.SplitN(filter, ":", 2)
		if len(protoCodes) != 2 || protoCodes[0] == "" || protoCodes[1] == "" {
			return nil, fmt.Errorf("invalid message filter: %s", filter)
		}
		proto := protoCodes[0]
		for _, code := range strings.Split(protoCodes[1], ",") {
			if code == "*" || code == "-1" {
				filters[MsgFilter{Proto: proto, Code: -1}] = struct{}{}
				continue
			}
			n, err := strconv.ParseUint(code, 10, 64)
			if err != nil {
				return nil, fmt.Errorf("invalid message code: %s", code)
			}
			filters[MsgFilter{Proto: proto, Code: int64(n)}] = struct{}{}
		}
	}
	return filters, nil
}
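
// Illustrative sketch, not part of the original file: the filter syntax
// accepted by NewMsgFilters. The protocol names "hive" and "bzz" are
// placeholders, not protocols defined by this package.
func exampleMsgFilters() error {
	// match codes 0 and 1 of the "hive" protocol plus every code of "bzz"
	filters, err := NewMsgFilters("hive:0,1-bzz:*")
	if err != nil {
		return err
	}
	// the resulting set contains {hive/0, hive/1} and a wildcard entry for "bzz"
	_ = filters
	return nil
}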

// MsgFilters is a collection of filters which are used to filter message
// events
type MsgFilters map[MsgFilter]struct{}

// Match checks if the given message matches any of the filters
func (m MsgFilters) Match(msg *Msg) bool {
	// check if there is a wildcard filter for the message's protocol
	if _, ok := m[MsgFilter{Proto: msg.Protocol, Code: -1}]; ok {
		return true
	}

	// check if there is a filter for the message's protocol and code
	if _, ok := m[MsgFilter{Proto: msg.Protocol, Code: int64(msg.Code)}]; ok {
		return true
	}

	return false
}

// MsgFilter is used to filter message events based on protocol and message
// code
type MsgFilter struct {
	// Proto is matched against a message's protocol
	Proto string

	// Code is matched against a message's code, with -1 matching all codes
	Code int64
}

// CreateSnapshot creates a network snapshot
func (s *Server) CreateSnapshot(w http.ResponseWriter, req *http.Request) {
	snap, err := s.network.Snapshot()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	s.JSON(w, http.StatusOK, snap)
}

// LoadSnapshot loads a snapshot into the network
func (s *Server) LoadSnapshot(w http.ResponseWriter, req *http.Request) {
	snap := &Snapshot{}
	if err := json.NewDecoder(req.Body).Decode(snap); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	if err := s.network.Load(snap); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	s.JSON(w, http.StatusOK, s.network)
}

// CreateNode creates a node in the network using the given configuration
func (s *Server) CreateNode(w http.ResponseWriter, req *http.Request) {
	config := &adapters.NodeConfig{}

	err := json.NewDecoder(req.Body).Decode(config)
	if err != nil && !errors.Is(err, io.EOF) {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	node, err := s.network.NewNodeWithConfig(config)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	s.JSON(w, http.StatusCreated, node.NodeInfo())
}

// GetNodes returns all nodes which exist in the network
func (s *Server) GetNodes(w http.ResponseWriter, req *http.Request) {
	nodes := s.network.GetNodes()

	infos := make([]*p2p.NodeInfo, len(nodes))
	for i, node := range nodes {
		infos[i] = node.NodeInfo()
	}

	s.JSON(w, http.StatusOK, infos)
}

// GetNode returns details of a node
func (s *Server) GetNode(w http.ResponseWriter, req *http.Request) {
	node := req.Context().Value("node").(*Node)

	s.JSON(w, http.StatusOK, node.NodeInfo())
}

// StartNode starts a node
func (s *Server) StartNode(w http.ResponseWriter, req *http.Request) {
	node := req.Context().Value("node").(*Node)

	if err := s.network.Start(node.ID()); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	s.JSON(w, http.StatusOK, node.NodeInfo())
}

// StopNode stops a node
func (s *Server) StopNode(w http.ResponseWriter, req *http.Request) {
	node := req.Context().Value("node").(*Node)

	if err := s.network.Stop(node.ID()); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	s.JSON(w, http.StatusOK, node.NodeInfo())
}

// ConnectNode connects a node to a peer node
func (s *Server) ConnectNode(w http.ResponseWriter, req *http.Request) {
	node := req.Context().Value("node").(*Node)
	peer := req.Context().Value("peer").(*Node)

	if err := s.network.Connect(node.ID(), peer.ID()); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	s.JSON(w, http.StatusOK, node.NodeInfo())
}

// DisconnectNode disconnects a node from a peer node
func (s *Server) DisconnectNode(w http.ResponseWriter, req *http.Request) {
	node := req.Context().Value("node").(*Node)
	peer := req.Context().Value("peer").(*Node)

	if err := s.network.Disconnect(node.ID(), peer.ID()); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	s.JSON(w, http.StatusOK, node.NodeInfo())
}

// Options responds to the OPTIONS HTTP method by returning a 200 OK response
// with the "Access-Control-Allow-Headers" header set to "Content-Type"
func (s *Server) Options(w http.ResponseWriter, req *http.Request) {
	w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
	w.WriteHeader(http.StatusOK)
}

var wsUpgrade = websocket.Upgrader{
	CheckOrigin: func(*http.Request) bool { return true },
}

// NodeRPC forwards RPC requests to a node in the network via a WebSocket
// connection
func (s *Server) NodeRPC(w http.ResponseWriter, req *http.Request) {
	conn, err := wsUpgrade.Upgrade(w, req, nil)
	if err != nil {
		return
	}
	defer conn.Close()
	node := req.Context().Value("node").(*Node)
	node.ServeRPC(conn)
}

// ServeHTTP implements the http.Handler interface by delegating to the
// underlying httprouter.Router
func (s *Server) ServeHTTP(w http.ResponseWriter, req *http.Request) {
	s.router.ServeHTTP(w, req)
}

// GET registers a handler for GET requests to a particular path
func (s *Server) GET(path string, handle http.HandlerFunc) {
	s.router.GET(path, s.wrapHandler(handle))
}

// POST registers a handler for POST requests to a particular path
func (s *Server) POST(path string, handle http.HandlerFunc) {
	s.router.POST(path, s.wrapHandler(handle))
}

// DELETE registers a handler for DELETE requests to a particular path
func (s *Server) DELETE(path string, handle http.HandlerFunc) {
	s.router.DELETE(path, s.wrapHandler(handle))
}

// OPTIONS registers a handler for OPTIONS requests. Note that the handler is
// registered for all paths ("/*path"), so the path argument is ignored.
func (s *Server) OPTIONS(path string, handle http.HandlerFunc) {
	s.router.OPTIONS("/*path", s.wrapHandler(handle))
}

// JSON sends "data" as a JSON HTTP response
func (s *Server) JSON(w http.ResponseWriter, status int, data interface{}) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	json.NewEncoder(w).Encode(data)
}

// wrapHandler returns an httprouter.Handle which wraps an http.HandlerFunc by
// populating request.Context with any objects from the URL params
func (s *Server) wrapHandler(handler http.HandlerFunc) httprouter.Handle {
	return func(w http.ResponseWriter, req *http.Request, params httprouter.Params) {
		w.Header().Set("Access-Control-Allow-Origin", "*")
		w.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")

		ctx := req.Context()

		if id := params.ByName("nodeid"); id != "" {
			var nodeID enode.ID
			var node *Node
			if nodeID.UnmarshalText([]byte(id)) == nil {
				node = s.network.GetNode(nodeID)
			} else {
				node = s.network.GetNodeByName(id)
			}
			if node == nil {
				http.NotFound(w, req)
				return
			}
			ctx = context.WithValue(ctx, "node", node)
		}

		if id := params.ByName("peerid"); id != "" {
			var peerID enode.ID
			var peer *Node
			if peerID.UnmarshalText([]byte(id)) == nil {
				peer = s.network.GetNode(peerID)
			} else {
				peer = s.network.GetNodeByName(id)
			}
			if peer == nil {
				http.NotFound(w, req)
				return
			}
			ctx = context.WithValue(ctx, "peer", peer)
		}

		handler(w, req.WithContext(ctx))
	}
}