This function is not used in the code base, so it is probably safe to rename it, or even remove it entirely. I'm assuming the logic from the original author still applies, so a rename is probably the better option.
* all: add thousands separators for big numbers in log messages (a formatter sketch follows this list)
* p2p/sentry: drop accidental file
* common, log: add fast number formatter
* common, eth/protocols/snap: simplify fancy num types
* log: handle nil big ints
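A minimal sketch of the kind of separator-inserting formatter these commits describe; the function name and the comma grouping character are illustrative, not the actual common/log API.

```go
package main

import (
	"fmt"
	"strconv"
)

// formatWithSeparators renders v in decimal and inserts a separator
// between every group of three digits, e.g. 12345678 -> "12,345,678".
// Name and separator are illustrative; the real formatter may differ.
func formatWithSeparators(v uint64) string {
	s := strconv.FormatUint(v, 10)
	if len(s) <= 3 {
		return s
	}
	out := make([]byte, 0, len(s)+len(s)/3)
	// Leading group that is shorter than three digits, if any.
	lead := len(s) % 3
	if lead > 0 {
		out = append(out, s[:lead]...)
	}
	for i := lead; i < len(s); i += 3 {
		if len(out) > 0 {
			out = append(out, ',')
		}
		out = append(out, s[i:i+3]...)
	}
	return string(out)
}

func main() {
	fmt.Println(formatWithSeparators(1048576)) // 1,048,576
}
```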
This PR implements the first one of the "lespay" UDP queries which
is already useful in itself: the capacity query. The server pool is making
use of this query by doing a cheap UDP query to determine whether it is
worth starting the more expensive TCP connection process.
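A rough sketch of that dial decision; queryCapacityUDP is a hypothetical stand-in for the lespay capacity query, not the real les/serverpool API.

```go
package main

import (
	"errors"
	"fmt"
)

// queryCapacityUDP is a hypothetical stand-in for the cheap UDP
// capacity query; the real query lives in the les packages.
func queryCapacityUDP(server string) (uint64, error) {
	if server == "" {
		return 0, errors.New("no server")
	}
	return 100, nil // pretend the server reported some free capacity
}

// shouldDialTCP only starts the expensive TCP connection process when
// the UDP query suggests the server could actually serve us.
func shouldDialTCP(server string) bool {
	capacity, err := queryCapacityUDP(server)
	if err != nil {
		return false // no answer: don't waste a TCP dial
	}
	return capacity > 0
}

func main() {
	fmt.Println(shouldDialTCP("enode://example")) // true
}
```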
Both Hash and Address have a String method, which returns the value as
hex with 0x prefix. They also had a Format method which tried to print
the value with the default formatting of []byte. This was at odds with
String, leading to a situation where fmt.Sprintf("%v", hash)
returned the decimal byte notation while hash.String() returned a hex string.
This commit makes it consistent again. Both types now support the %v,
%s, %q format verbs for 0x-prefixed hex output. %x, %X creates
unprefixed hex output. %d is also supported and returns the decimal
notation "[1 2 3...]".
For Address, the case of hex characters in %v, %s, %q output is
determined using the EIP-55 checksum. Using %x, %X with Address
disables checksumming.
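A short illustration of the verbs described above, using common.HexToHash/HexToAddress; the comments state what the description implies rather than a captured run.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	hash := common.HexToHash("0x1234")
	addr := common.HexToAddress("0x5aaeb6053f3e94c9b9a09f33669435e7ef1beaed")

	fmt.Printf("%v\n", hash) // 0x-prefixed hex, same as hash.String()
	fmt.Printf("%x\n", hash) // unprefixed hex
	fmt.Printf("%d\n", hash) // decimal byte notation: [0 0 ... 18 52]

	fmt.Printf("%s\n", addr) // 0x-prefixed, EIP-55 checksummed hex
	fmt.Printf("%x\n", addr) // unprefixed, all lowercase (no checksum)
}
```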
Co-authored-by: Felix Lange <fjl@twurst.com>
ToHex was deprecated a couple years ago. The last remaining use
was in ToHexArray, which itself only had a single call site.
This just moves ToHexArray near its only remaining call site and
implements it using hexutil.Encode. This changes the default behaviour
of ToHexArray and with it the behaviour of eth_getProof. Previously we
encoded an empty slice as 0, now the empty slice is encoded as 0x.
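The behavioural change, sketched with hexutil.Encode:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	// hexutil.Encode always emits the 0x prefix, so an empty slice
	// becomes "0x" rather than the old "0" produced by ToHex.
	fmt.Println(hexutil.Encode([]byte{}))           // 0x
	fmt.Println(hexutil.Encode([]byte{0xde, 0xad})) // 0xdead
}
```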
* accounts, signer: implement gnosis safe support
* common/math: add type for marshalling big to dec (a sketch follows this list)
* accounts, signer: properly sign gnosis requests
* signer, clef: implement account_signGnosisTx
* signer: fix auditlog print, change rpc-name (signGnosisTx to signGnosisSafeTx)
* signer: pass validation-messages/warnings to the UI for gnosis-safe txs
* signer/core: minor change to validationmessages of typed data
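A minimal sketch of the "marshalling big to dec" idea from the list above, assuming a wrapper type with a text marshaller; the type name here is illustrative and not necessarily the one added to common/math.

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/big"
)

// decimalBig is an illustrative wrapper that marshals a big.Int as a
// decimal JSON string instead of the usual 0x-prefixed hex.
type decimalBig big.Int

func (d *decimalBig) MarshalText() ([]byte, error) {
	return []byte((*big.Int)(d).Text(10)), nil
}

func (d *decimalBig) UnmarshalText(input []byte) error {
	if _, ok := (*big.Int)(d).SetString(string(input), 10); !ok {
		return fmt.Errorf("invalid decimal integer: %q", input)
	}
	return nil
}

func main() {
	v := (*decimalBig)(big.NewInt(1000000000000000000))
	out, _ := json.Marshal(v)
	fmt.Println(string(out)) // "1000000000000000000"
}
```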
* core: use uint64 for total tx costs instead of big.Int
* core: added local tx pool test case
* core, crypto: various allocation savings regarding tx handling
* Update core/tx_list.go
* core: added tx.GasPriceIntCmp for comparison without allocation
Adds a method that compares against tx.gasPrice without the extra allocation (see the sketch after this list).
* core: handle pools full of locals better
* core/tests: benchmark for tx_list
* core/txlist, txpool: save a reheap operation, avoid some bigint allocs
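A sketch of the allocation-free comparison; the real method lives on types.Transaction, here a trimmed-down stand-in is used.

```go
package main

import (
	"fmt"
	"math/big"
)

// txStub is a trimmed-down stand-in for types.Transaction, keeping
// only the gas price field relevant here.
type txStub struct {
	gasPrice *big.Int
}

// GasPrice mirrors the accessor style of the real type: it returns a
// copy, which costs an allocation per call.
func (tx *txStub) GasPrice() *big.Int {
	return new(big.Int).Set(tx.gasPrice)
}

// GasPriceIntCmp compares against the internal value directly, so hot
// comparison paths in the tx pool avoid that allocation.
func (tx *txStub) GasPriceIntCmp(other *big.Int) int {
	return tx.gasPrice.Cmp(other)
}

func main() {
	tx := &txStub{gasPrice: big.NewInt(2_000_000_000)}
	threshold := big.NewInt(1_000_000_000)

	fmt.Println(tx.GasPrice().Cmp(threshold) > 0) // allocates a copy
	fmt.Println(tx.GasPriceIntCmp(threshold) > 0) // no copy
}
```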
Co-authored-by: Martin Holst Swende <martin@swende.se>
The leaks were mostly in unit tests, and could all be resolved by
adding suitably-sized channel buffers or by restructuring the test
to not send on a channel after an error has occurred.
There is an unavoidable goroutine leak in Console.Interactive: when
we receive a signal, the line reader cannot be unblocked and will get
stuck. This leak is now documented and I've tried to make it slightly
less bad by adding a one-element buffer to the output channels of
the line-reading loop. Should the reader eventually awake from its
blocked state (i.e. when stdin is closed), at least it won't get stuck
trying to send to the interpreter loop which has quit long ago.
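The buffering pattern described here, in isolation; the channel and goroutine names are illustrative, not the console package's actual identifiers.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// The line-reading goroutine sends into a channel with a one-element
// buffer, so if the consumer has already quit, one final send can
// still complete instead of blocking the reader forever.
func main() {
	lines := make(chan string, 1)

	go func() {
		defer close(lines)
		scanner := bufio.NewScanner(os.Stdin)
		if scanner.Scan() {
			lines <- scanner.Text() // buffered: completes even with no receiver
		}
	}()

	fmt.Println("got:", <-lines)
}
```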
Co-authored-by: Felix Lange <fjl@twurst.com>
This change adds tests for the virtual clock and aligns the interface
with the time package by renaming Cancel to Stop. It also removes the
binary search from Stop because it complicates the code unnecessarily.
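A small sketch of the interface shape after the rename: timers created on the clock are stopped with Stop, mirroring time.Timer, rather than the old Cancel. Apart from Stop, the names here are illustrative.

```go
package main

import (
	"fmt"
	"time"
)

// Timer mirrors the renamed method: Stop, just like (*time.Timer).Stop.
type Timer interface {
	Stop() bool
}

// Clock is the slice of the interface a caller needs; the real
// interface in common/mclock has more methods.
type Clock interface {
	AfterFunc(d time.Duration, fn func()) Timer
}

// systemClock backs the interface with the real time package.
type systemClock struct{}

func (systemClock) AfterFunc(d time.Duration, fn func()) Timer {
	return time.AfterFunc(d, fn)
}

func main() {
	var c Clock = systemClock{}
	t := c.AfterFunc(time.Hour, func() { fmt.Println("fired") })
	fmt.Println("stopped:", t.Stop()) // stopped: true
}
```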
Gollvm has very aggressive dead code elimination that completely
removes one of these two benchmarks. To prevent this, use the
result of the benchmark (a boolean), and to be "fair", make the
transformation to both benchmarks.
To be reliably assured of not removing the code, "use" means
assigning to an exported global. Non-exported globals and
//go:noinline functions are possibly subject to this optimization.
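The pattern described above, in benchmark form (for a _test.go file); the benchmarked function is a placeholder.

```go
package bench

import "testing"

// Result is the exported sink: assigning the benchmark's result here
// prevents an aggressive compiler (e.g. gollvm) from eliminating the
// benchmarked code as dead. A non-exported global or a //go:noinline
// helper may still be optimized away.
var Result bool

// check stands in for the function under benchmark.
func check(i int) bool {
	return i%2 == 0
}

func BenchmarkCheck(b *testing.B) {
	var r bool
	for i := 0; i < b.N; i++ {
		r = check(i) // placeholder for the real work being measured
	}
	Result = r
}
```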
* accounts/abi/bind: Accept function ptr parameter
Function pointer parameters are translated as [24]byte (see the sketch after this list)
* Add Java template version
* accounts/abi/bind: fix merge issue
* Fix CI
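A sketch of what "translated as [24]byte" means in practice: a Solidity function type is a 20-byte contract address plus a 4-byte selector, so the generated bindings take a fixed 24-byte value. The helper name here is illustrative.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// functionPtr packs a contract address and a 4-byte method selector
// into the [24]byte value the generated bindings expect for Solidity
// function-typed parameters.
func functionPtr(addr common.Address, selector [4]byte) [24]byte {
	var out [24]byte
	copy(out[:20], addr[:])
	copy(out[20:], selector[:])
	return out
}

func main() {
	addr := common.HexToAddress("0x0000000000000000000000000000000000000001")
	sig := crypto.Keccak256([]byte("transfer(address,uint256)"))[:4]

	var selector [4]byte
	copy(selector[:], sig)
	fmt.Printf("%x\n", functionPtr(addr, selector))
}
```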
* core: reinit chain from freezer in batches
* core/rawdb: concurrent database reinit from freezer dump
* core/rawdb: reinit from freezer in sequential order
* core, eth: some fixes for freezer
* vendor, core/rawdb, cmd/geth: add db inspector
* core, cmd/utils: check ancient store path forcibly
* cmd/geth, common, core/rawdb: a few fixes
* cmd/geth: support windows file rename and fix rename error
* core: support ancient plugin
* core, cmd: streaming file copy
* cmd, consensus, core, tests: keep genesis in leveldb
* core: write txlookup during ancient init
* core: bump database version
* core/vm: remove function call for stack validation from evm runloop
* core/vm: separate gas calc into static + dynamic
* core/vm: optimize push1
* core/vm: reuse pooled bigints for ADDRESS, ORIGIN and CALLER
* core/vm: use generic error message for jump/jumpi, to avoid string interpolation
* testdata: fix tests for new error message
* core/vm: use 64-bit memory calculations (sketched after this list)
* core/vm: fix error in memory calculation
* core/vm: address review concerns
* core/vm: avoid unnecessary use of big.Int:BitLen()
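A sketch of the 64-bit memory-expansion cost mentioned above, using the yellow-paper constants (3 gas per word plus a quadratic term divided by 512); the overflow handling is simplified compared to the real core/vm code.

```go
package main

import (
	"fmt"
	"math"
)

// memoryGasCost computes the total gas charged for a memory of the
// given byte size, entirely in uint64 arithmetic (no big.Int).
func memoryGasCost(size uint64) (uint64, bool) {
	if size == 0 {
		return 0, true
	}
	// Cap the size so the squared term below cannot overflow uint64.
	if size > math.MaxUint32 {
		return 0, false
	}
	words := (size + 31) / 32
	return words*3 + words*words/512, true
}

func main() {
	cost, _ := memoryGasCost(1024) // 32 words
	fmt.Println(cost)              // 32*3 + 32*32/512 = 98
}
```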
* common/fdlimit: cap file limits on macOS, fixes #18994 (see the sketch after this list)
* common/fdlimit: fix Maximum-check to respect OPEN_MAX
* common/fdlimit: return error if OPEN_MAX is exceeded in Raise()
* common/fdlimit: goimports
* common/fdlimit: check value after setting fdlimit
* common/fdlimit: make comment a bit more descriptive
* cmd/utils: make fdlimit happy path a bit cleaner
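A hedged sketch of the macOS cap described above: raise RLIMIT_NOFILE but never above OPEN_MAX (10240 on Darwin), report an error if more was requested, and re-check what the kernel granted. This mirrors the idea rather than the exact common/fdlimit code.

```go
//go:build darwin

package fdlimit

import (
	"fmt"
	"syscall"
)

// hardOpenMax mirrors Darwin's OPEN_MAX: the kernel refuses
// RLIMIT_NOFILE values above it.
const hardOpenMax = 10240

// Raise lifts the soft file-descriptor limit towards max, capping the
// request at OPEN_MAX and erroring out if the caller asked for more.
func Raise(max uint64) (uint64, error) {
	if max > hardOpenMax {
		return 0, fmt.Errorf("file descriptor limit %d exceeds OPEN_MAX (%d)", max, hardOpenMax)
	}
	var limit syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
		return 0, err
	}
	limit.Cur = max
	if limit.Cur > limit.Max {
		limit.Cur = limit.Max
	}
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
		return 0, err
	}
	// Re-read to confirm what the kernel actually granted.
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
		return 0, err
	}
	return limit.Cur, nil
}
```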
* Initial work on a graphql API
* Added receipts, and more transaction fields.
* Finish receipts, add logs
* Add transactionCount to block
* Add types and .
* Update Block type to be compatible with ethql
* Rename nonce to transactionCount in Account, to be compatible with ethql
* Update transaction, receipt and log to match ethql
* Add query operator, for a range of blocks
* Added ommerCount to Block
* Add transactionAt and ommerAt to Block
* Added sendRawTransaction mutation
* Add Call and EstimateGas to graphQL API
* Refactored to use hexutil.Bytes instead of HexBytes
* Replace BigNum with hexutil.Big
* Refactor call and estimateGas to use ethapi struct type
* Replace ethgraphql.Address with common.Address
* Replace ethgraphql.Hash with common.Hash
* Converted most quantities to Long instead of Int
* Add support for logs
* Fix bug in runFilter
* Restructured Transaction to work primarily with headers, so uncle data is reported properly
* Add gasPrice API
* Add protocolVersion API
* Add syncing API
* Moved schema into its own source file
* Move some single use args types into anonymous structs
* Add doc-comments
* Fixed backend fetching to use context
* Added (very) basic tests
* Add documentation to the graphql schema
* Fix reversion for formatting of big numbers
* Correct spelling error
* s/BigInt/Long/
* Update common/types.go
* Fixes in response to review
* Fix lint error
* Updated calls on private functions
* Fix typo in graphql.go
* Rollback ethapi breaking changes for graphql support
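A usage illustration for the GraphQL API assembled in the list above; the endpoint URL (port 8547 here) assumes a locally running node with the GraphQL server enabled, so adjust it to your setup.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Query a few of the block fields mentioned above.
	query := `{"query": "{ block { number hash transactionCount } }"}`

	resp, err := http.Post("http://localhost:8547/graphql", "application/json",
		bytes.NewBufferString(query))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, body, "", "  "); err != nil {
		fmt.Println(string(body))
		return
	}
	fmt.Println(pretty.String())
}
```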
Co-Authored-By: Arachnid <arachnid@notdot.net>
* first impl of eth_getProof
* fixed docu
* added comments and refactored based on comments from holiman
* created structs
* handle errors correctly
* change Value to *hexutil.Big in order to have the same output as parity
* use ProofList as return type
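An example of calling the new endpoint over RPC; the node URL is an assumption and the result is decoded into a generic map rather than the typed result struct.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Assumes a locally running node with the HTTP RPC enabled.
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// eth_getProof(address, storageKeys, blockNumber)
	var result map[string]interface{}
	err = client.CallContext(context.Background(), &result, "eth_getProof",
		"0x0000000000000000000000000000000000000001",
		[]string{"0x0000000000000000000000000000000000000000000000000000000000000000"},
		"latest")
	if err != nil {
		panic(err)
	}

	out, _ := json.MarshalIndent(result, "", "  ")
	fmt.Println(string(out))
}
```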
This PR implements les.freeClientPool. It also adds a simulated clock
in common/mclock, which enables time-sensitive tests to run quickly
and still produce accurate results, and package common/prque which is
a generalised variant of prque that enables removing elements other
than the top one from the queue.
les.freeClientPool implements a client database that limits the
connection time of each client and manages accepting/rejecting
incoming connections and even kicking out some connected clients. The
pool calculates recent usage time for each known client (a value that
increases linearly when the client is connected and decreases
exponentially when not connected). Clients with lower recent usage are
preferred, unknown nodes have the highest priority. Already connected
nodes receive a small bias in their favor in order to avoid accepting
and instantly kicking out clients.
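A sketch of the usage metric described above: linear growth while connected, exponential decay while disconnected. The time constant and field names are illustrative, not the les.freeClientPool internals.

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// usageDecayTau is an illustrative time constant for the exponential
// decay of recent usage while a client is disconnected.
const usageDecayTau = time.Hour

// clientUsage tracks one client's recent usage, updated lazily.
type clientUsage struct {
	usage      float64 // seconds of recent usage at lastUpdate
	connected  bool
	lastUpdate time.Time
}

// recentUsage returns the usage value at time now: it grows linearly
// while the client is connected and decays exponentially while it is
// not. Lower values mean the client is preferred for a free slot.
func (c *clientUsage) recentUsage(now time.Time) float64 {
	dt := now.Sub(c.lastUpdate).Seconds()
	if c.connected {
		return c.usage + dt
	}
	return c.usage * math.Exp(-dt/usageDecayTau.Seconds())
}

func main() {
	c := &clientUsage{usage: 3600, connected: false, lastUpdate: time.Now().Add(-2 * time.Hour)}
	fmt.Printf("%.0f seconds of recent usage\n", c.recentUsage(time.Now())) // ~487
}
```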
Note: the pool can use any string for client identification. Using
signature keys for that purpose would not make sense when being known
has a negative value for the client. Currently the LES protocol
manager uses IP addresses (without port address) to identify clients.
This package was meant to hold an improved 256 bit integer library, but
the effort was abandoned in 2015. AFAIK nothing ever used this package.
Time to say goodbye.
Allow the --abi flag to be given the value "-" to indicate that it should read the
ABI information from standard input. It expects to read the solc output
with the --combined-json flag providing bin, abi, userdoc, devdoc, and
metadata, and works very similarly to the internal invocation of solc,
except it allows external invocation of solc.
This facilitates integration with more complex solc invocations, such
as invocations that require path remapping or --allow-paths tweaks.
Simple usage example:
solc --combined-json bin,abi,userdoc,devdoc,metadata *.sol | abigen --abi -
* common: delete StringToAddress, StringToHash
These functions are confusing because they don't parse hex, but use the
bytes of the string. This change removes them, replacing all uses of
StringToAddress(s) by BytesToAddress([]byte(s)); a short illustration follows this list.
* eth/filters: remove incorrect use of common.BytesToAddress
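A small illustration of the confusion: the removed helpers used the raw string bytes, which is what BytesToAddress([]byte(s)) still gives you, while parsing hex is a different operation (HexToAddress).

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	s := "0x00000000000000000000000000000000deadbeef"

	// Uses the bytes of the string itself (what StringToAddress did):
	// the result is the last 20 ASCII bytes of the string, not the hex
	// value it spells.
	fromBytes := common.BytesToAddress([]byte(s))

	// Parses the string as hex, which is usually what callers wanted.
	fromHex := common.HexToAddress(s)

	fmt.Printf("%x\n%x\n", fromBytes, fromHex)
}
```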
* common/fdlimit: Move fdlimit files to separate package
When go-ethereum is used as a library, the calling program needs to set
the FD limit.
This commit extract fdlimit files to a separate package so it can be
used outside of go-ethereum.
* common/fdlimit: Remove FdLimit from functions signature
* common/fdlimit: Rename fdlimit functions
This adds type and struct field context to error messages.
Instead of "hex string of odd length" users will now see "json: cannot
unmarshal hex string of odd length into Go struct field SendTxArgs.from
of type common.Address".
The byte padding functions should return the given slice when the requested
length is smaller than or equal to the slice's length, rather than only when
it is strictly smaller.
This fix improves almost all EVM push operations.
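A sketch of the corrected behaviour, using a trimmed-down version of the right-padding helper:

```go
package main

import "fmt"

// rightPadBytes pads slice on the right with zeroes up to length l.
// The early return now also covers the equal-length case, so callers
// that already have an exactly-sized slice (as the EVM push operations
// often do) avoid a needless allocation and copy.
func rightPadBytes(slice []byte, l int) []byte {
	if l <= len(slice) { // previously: l < len(slice)
		return slice
	}
	padded := make([]byte, l)
	copy(padded, slice)
	return padded
}

func main() {
	fmt.Printf("%x\n", rightPadBytes([]byte{0x60, 0x01}, 2)) // unchanged, no copy
	fmt.Printf("%x\n", rightPadBytes([]byte{0x60, 0x01}, 4)) // 60010000
}
```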
* p2p/discover, p2p/discv5: add marshaling methods to Node
* p2p/netutil: make Netlist decodable from TOML
* common/math: encode nil HexOrDecimal256 as 0x0
* cmd/geth: add --config file flag
* cmd/geth: add missing license header
* eth: prettify Config again, fix tests
* eth: use gasprice.Config instead of duplicating its fields
* eth/gasprice: hide nil default from dumpconfig output
* cmd/geth: hide genesis block in dumpconfig output
* node: make tests compile
* console: fix tests
* cmd/geth: make TOML keys look exactly like Go struct fields
* p2p: use discovery by default
This makes the zero Config slightly more useful. It also fixes package
node tests because Node detects reuse of the datadir through the
NodeDatabase.
* cmd/geth: make ethstats URL settable through config file
* cmd/faucet: fix configuration
* cmd/geth: dedup attach tests
* eth: add comment for DefaultConfig
* eth: pass downloader.SyncMode in Config
This removes the FastSync, LightSync flags in favour of a more
general SyncMode flag.
* cmd/utils: remove jitvm flags
* cmd/utils: make mutually exclusive flag error prettier
It now reads:
Fatal: flags --dev, --testnet can't be used at the same time
* p2p: fix typo
* node: add DefaultConfig, use it for geth
* mobile: add missing NoDiscovery option
* cmd/utils: drop MakeNode
This exposed a couple of places that needed to be updated to use
node.DefaultConfig.
* node: fix typo
* eth: make fast sync the default mode
* cmd/utils: remove IPCApiFlag (unused)
* node: remove default IPC path
Set it in the frontends instead.
* cmd/geth: add --syncmode
* cmd/utils: make --ipcdisable and --ipcpath mutually exclusive
* cmd/utils: don't enable WS, HTTP when setting addr
* cmd/utils: fix --identity
Metadata is provided as JSON string, rather than as JSON object. This
ensures that we can decode to a set of bytes that will be consistent
with the swarm hash embedded in the code, without worrying about
ambiguities of spacing, ordering, or escaping.