Amend to d5f1e72. It turns out that BerkeleyDB was including inttypes.h
indirectly, so we cannot fix this with just macros.
Trivial commit: apply the following script to all .cpp and .h files:
# Middle: macro spliced between two string-literal pieces
sed -i 's/"PRIx64"/x/g' "$1"
sed -i 's/"PRIu64"/u/g' "$1"
sed -i 's/"PRId64"/d/g' "$1"
# Initial: macro is the first piece, followed by a string literal
sed -i 's/PRIx64"/"x/g' "$1"
sed -i 's/PRIu64"/"u/g' "$1"
sed -i 's/PRId64"/"d/g' "$1"
# Trailing: macro is the last piece, preceded by a string literal
sed -i 's/"PRIx64/x"/g' "$1"
sed -i 's/"PRIu64/u"/g' "$1"
sed -i 's/"PRId64/d"/g' "$1"
After this commit, `git grep` for PRI.64 should turn up nothing except
the defines in util.h.
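For illustration, a hypothetical call site before and after such a sweep; this standalone sketch assumes tinyformat.h is on the include path, and the variable names are made up:
#include <cstdint>
#include "tinyformat.h"  // header-only; provides tfm::printf and friends

int main()
{
    uint64_t nHash = 0xdeadbeefcafef00dULL;
    uint64_t nSize = 285;

    // Before the sweep (with inttypes.h):
    //   printf("hash=%"PRIx64" size=%"PRIu64"\n", nHash, nSize);
    // After the sweep, tinyformat deduces the argument width itself, so a
    // plain %x / %u formats a uint64_t correctly:
    tfm::printf("hash=%x size=%u\n", nHash, nSize);
    return 0;
}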
As the tinyformat-based formatting system (introduced in b77dfdc) is
type-safe, no special format characters are needed to specify sizes.
Tinyformat supports (by ignoring) the C99 length prefixes such as "ll", but
chokes on the prefixes defined in MSVC's inttypes.h, such as "I64X". So don't
include inttypes.h and define our own macros for compatibility.
(An alternative would be to sweep the entire codebase with sed -i to get
rid of the size specifiers entirely, but the approach taken here has less diff impact.)
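A minimal sketch of what such compatibility defines could look like (not necessarily the exact util.h contents); since tinyformat ignores size prefixes anyway, the macros can expand to the bare conversion character:
// Sketch: local replacements for the inttypes.h macros, so existing format
// strings keep compiling without pulling in inttypes.h. Tinyformat is
// type-safe, so no "ll"/"I64" length prefix is needed.
#define PRId64 "d"
#define PRIu64 "u"
#define PRIx64 "x"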
Add a contrib/devtools/fix-copyright-headers.py script to be able to perform this maintenance task with ease throughout the year, every year. Also modify contrib/devtools/README.md to document what fix-copyright-headers.py does.
Keep track of which blocks are being requested (and are to be requested) from
each peer, and limit the number of blocks in-flight per peer. In addition,
detect stalled downloads, and disconnect if they persist for too long.
This means blocks are never requested twice, and should eliminate duplicate
downloads during synchronization.
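A rough sketch of the kind of bookkeeping this implies; the names, the per-peer cap and the timeout below are illustrative assumptions, not the actual implementation:
#include <cstdint>
#include <map>
#include <set>
#include <string>

// Illustrative sketch only. A block hash is modelled as a string here.
typedef std::string BlockHash;
typedef int64_t NodeId;

static const size_t MAX_BLOCKS_IN_FLIGHT_PER_PEER = 16;  // assumed cap
static const int64_t STALL_TIMEOUT_SECONDS = 120;        // assumed timeout

struct PeerBlockState {
    std::set<BlockHash> setInFlight;   // blocks requested from this peer
    int64_t nLastBlockReceived = 0;    // for stall detection
};

std::map<NodeId, PeerBlockState> mapPeerState;
std::map<BlockHash, NodeId> mapBlocksInFlight;  // who owns each request

// Request a block from a peer only if nobody else is already fetching it
// and the peer still has room in its in-flight window.
bool MarkBlockAsInFlight(NodeId peer, const BlockHash& hash)
{
    PeerBlockState& state = mapPeerState[peer];
    if (mapBlocksInFlight.count(hash)) return false;  // already requested elsewhere
    if (state.setInFlight.size() >= MAX_BLOCKS_IN_FLIGHT_PER_PEER) return false;
    state.setInFlight.insert(hash);
    mapBlocksInFlight[hash] = peer;
    return true;
}

// A peer with in-flight blocks that has delivered nothing for too long is
// considered stalled and should be disconnected.
bool IsStalled(const PeerBlockState& state, int64_t now)
{
    return !state.setInFlight.empty() &&
           now - state.nLastBlockReceived > STALL_TIMEOUT_SECONDS;
}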
This was a leftover from the times in which
peers.dat depended on BDB.
Other functions in db.cpp still depend on BerkeleyDB;
to be able to compile without BDB, this (small)
piece of functionality needs to be moved to another file.
Use misc methods of avoiding unnecessary header includes.
Replace int typedefs with int##_t from stdint.h.
Replace PRI64[xdu] with PRI[xdu]64 from inttypes.h.
Normalize QT_VERSION ifs where possible.
Resolve some indirect dependencies as direct ones.
Remove extern declarations from .cpp files.
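For example, the typedef replacement amounts to the following (hypothetical before/after with made-up variable names):
#include <stdint.h>

// Before: project-local typedefs such as
//   typedef long long int64;
//   typedef unsigned long long uint64;
// After: use the fixed-width standard types directly.
int64_t  nExampleHeight   = 0;
uint64_t nExampleServices = 0;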
INIT_PROTO_VERSION is the initial version; after a successful version/verack handshake it is increased to the negotiated version.
MIN_PEER_PROTO_VERSION can be set to a different value to disconnect from peers older than a specified version.
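Illustrative sketch of how these constants relate; the numeric values and the commented assignment are placeholders, not the actual protocol code:
// Placeholder values for illustration only.
static const int PROTOCOL_VERSION        = 70001; // the version we advertise
static const int INIT_PROTO_VERSION      = 209;   // used until version/verack completes
static const int MIN_PEER_PROTO_VERSION  = 209;   // older peers get disconnected

// After a successful handshake the per-peer version becomes the negotiated one,
// conceptually something like:
//   pfrom->nVersion = std::min(PROTOCOL_VERSION, nVersionAdvertisedByPeer);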
The existing CNode::addrLocal member is revealed to the user
as an address string, similar to the existing "addr" field.
Instead of showing garbage or an empty string, the field simply
does not appear in the output if the local address is not yet known.
New RPC "ping" command to request ping.
Implemented "pong" message handler.
New "pingtime" field in getpeerinfo, to provide results to user.
New "pingwait" field, to show pings still in flight, to better see newly lagging peers.
This reduces a peer's ability to attack network resources by
using a full bloom filter, but without reducing the usability
of bloom filters. It sets a default match-everything filter
for peers and generalizes a prior optimization to
cover more cases.
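A self-contained sketch of the idea (not Bitcoin's actual CBloomFilter): a filter whose bit array is entirely ones matches every element, so it can serve as a cheap match-everything default while the full-filter case is detected and short-circuited:
#include <cstdint>
#include <vector>

class SimpleBloomFilter {
public:
    explicit SimpleBloomFilter(size_t nBytes) : vData(nBytes, 0x00) {}

    // A "full" filter (all bits set) matches any key.
    void SetMatchEverything() { vData.assign(vData.size(), 0xff); }

    bool IsFull() const {
        for (uint8_t b : vData)
            if (b != 0xff) return false;
        return true;
    }

    bool Contains(/* key */) const {
        if (IsFull()) return true;   // optimization: skip hashing entirely
        // ... the normal k-hash probe would go here ...
        return false;
    }

private:
    std::vector<uint8_t> vData;
};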
Added explicit include of main.h in init.cpp, changed include of init.h to include of main.h in net.cpp.
Added function registration for net.cpp in init.cpp's network initialization.
Removed protocol.cpp's dependency on main.h.
TODO: Remove main.h include in net.cpp.
This introduces the concept of the 'sync node', which is the peer we
ask for missing blocks. In case the sync node goes away, a new one
will be selected.
For now, the heuristic is very simple, but it can easily be extended
later to add better policies.
It seems there were two mechanisms for assessing whether a CNode
was still in use: a refcount and a release timestamp. The latter
seems to have been there for a long time, as a safety mechanism.
However, this timer also keeps CNode objects alive for far longer
than necessary after disconnects, potentially opening up a DoS
window.
This commit removes the timestamp-based mechanism, and replaces
it with an assert(nRefCount >= 0), to verify that the refcounting
is indeed correctly working.
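In spirit, lifetime management becomes purely refcount-based, as in this simplified sketch (not the actual CNode code):
#include <cassert>

// The node is kept alive only while some thread holds a reference; no
// "grace period" timestamp keeps it around after a disconnect.
class RefCountedNode {
public:
    RefCountedNode() : nRefCount(0) {}

    RefCountedNode* AddRef() { ++nRefCount; return this; }
    void Release() { --nRefCount; }

    // Safe to delete only when no thread references the node anymore.
    bool ReadyToDelete() const {
        assert(nRefCount >= 0);      // refcounting must never go negative
        return nRefCount == 0;
    }

private:
    int nRefCount;
};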
Create a boost::thread_group object at the qt/bitcoind main-loop level
that will hold pointers to all the main-loop threads.
This will replace the vnThreadsRunning[] array.
For testing, the BitcoinMiner threads were ported to use their
own boost::thread_group.
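A minimal sketch of the intended pattern, assuming Boost.Thread is available; the worker function is made up:
#include <boost/thread.hpp>
#include <iostream>

// A single thread_group owned by the main loop replaces the old
// vnThreadsRunning[] bookkeeping: start, interrupt and join in one place.
void ThreadExample(int n)
{
    // A real main-loop thread would run until interrupted; interruption
    // points (sleep, wait, interruption_point) let interrupt_all() stop it.
    std::cout << "worker " << n << " running" << std::endl;
    boost::this_thread::interruption_point();
}

int main()
{
    boost::thread_group threadGroup;
    for (int i = 0; i < 4; ++i)
        threadGroup.create_thread([i] { ThreadExample(i); });

    threadGroup.interrupt_all();  // request shutdown of all main-loop threads
    threadGroup.join_all();       // wait for them to exit
    return 0;
}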
This will result in re-requesting invs if we are under heavy inv
load; however, as long as we get no more than 16,000 invs in two
minutes, this should have no effect on runtime behavior.
There exists a per-message-processed send buffer overflow protection,
where processing is halted when the send buffer is larger than the
allowed maximum.
This protection does not apply to individual items, however, and
getdata has the potential for causing large amounts of data to be
sent. If several hundred blocks are requested in one getdata,
the send buffer can easily grow 50 megabytes above the send buffer
limit.
This commit breaks up the processing of getdata requests, remembering
them inside a CNode when too many are requested at once.
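A conceptual sketch of the deferral; the names and the buffer limit are assumptions, not the exact code:
#include <deque>
#include <string>

// Instead of serving an arbitrarily large "getdata" in one pass, remember
// the not-yet-served items on the node and resume once the send buffer has
// drained below its limit.
typedef std::string InvItem;                                // stand-in for CInv
static const size_t SEND_BUFFER_LIMIT = 10 * 1000 * 1000;   // assumed limit

struct NodeState {
    std::deque<InvItem> vPendingGetData;  // pending getdata items for this peer
    size_t nSendSize = 0;                 // current send buffer usage
};

// Called both when a getdata arrives and on later message-processing passes.
void ProcessGetData(NodeState& node)
{
    while (!node.vPendingGetData.empty() && node.nSendSize < SEND_BUFFER_LIMIT) {
        InvItem item = node.vPendingGetData.front();
        node.vPendingGetData.pop_front();
        // ... look up the block/tx and push it to the peer's send buffer,
        // increasing node.nSendSize accordingly ...
    }
    // Anything still queued is served on the next pass, so a single huge
    // getdata can no longer blow the send buffer far past its limit.
}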
* Change CNode::vRecvMsg to be a deque instead of a vector (less copying)
* Make sure to acquire cs_vRecvMsg in CNode::CloseSocketDisconnect (as it
may be called without that lock).
1) "optimistic write": Push each message to kernel socket buffer immediately.
2) If there is write data at select time, that implies send() blocked
during optimistic write. Drain write queue, before receiving
any more messages.
This avoids needlessly queueing received data if the remote peer
is not itself receiving data.
Result: write buffer (and thus memory usage) is kept small, DoS
potential is slightly lower, and TCP flow control signalling is
properly utilized.
The kernel will queue data into the socket buffer, then signal the
remote peer to stop sending data, until we resume reading again.
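Schematically, the send path looks roughly like this; POSIX/Linux sockets and the MSG_DONTWAIT flag are assumed for illustration, and none of this is the actual net.cpp code:
#include <deque>
#include <string>
#include <sys/types.h>
#include <sys/socket.h>

// Try to hand each message to the kernel immediately ("optimistic write");
// whatever does not fit stays queued and is only drained when select()
// reports the socket writable again.
struct SendState {
    int sock = -1;
    std::deque<std::string> vSendQueue;  // data the kernel would not take yet
};

void PushMessage(SendState& node, const std::string& msg)
{
    if (node.vSendQueue.empty()) {
        // Optimistic write: most of the time this succeeds completely.
        ssize_t sent = send(node.sock, msg.data(), msg.size(), MSG_DONTWAIT);
        if (sent == (ssize_t)msg.size()) return;
        size_t done = sent > 0 ? (size_t)sent : 0;
        node.vSendQueue.push_back(msg.substr(done));  // remainder waits
    } else {
        node.vSendQueue.push_back(msg);  // keep ordering: queue behind backlog
    }
}

// In the socket loop: if select() marked the socket writable and we have a
// backlog, drain it before reading more from this peer, so TCP flow control
// pushes back on a peer that is not reading.
void DrainSendQueue(SendState& node)
{
    while (!node.vSendQueue.empty()) {
        std::string& front = node.vSendQueue.front();
        ssize_t sent = send(node.sock, front.data(), front.size(), MSG_DONTWAIT);
        if (sent <= 0) break;                                   // kernel buffer full again
        if ((size_t)sent < front.size()) { front.erase(0, sent); break; }
        node.vSendQueue.pop_front();
    }
}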
Replaces CNode::vRecv buffer with a vector of CNetMessage's. This simplifies
ProcessMessages() and eliminates several redundant data copies.
Overview:
* socket thread now parses incoming message datastream into
header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
Message parsing is made more strict:
* Socket is disconnected if a message is larger than MAX_SIZE
or if CMessageHeader deserialization fails (the latter may be impossible).
Previously, the code would simply eat garbage data all day long.
* Socket is disconnected if we fail to find pchMessageStart.
We do not search through garbage to find pchMessageStart; each
message must begin precisely after the last message ends.
ProcessMessages() always processes a complete message, and is more efficient:
* buffer is always precisely sized, using CDataStream::resize(),
rather than progressively sized in 64k chunks. More efficient
for large messages like "block".
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)
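The shape of the new receive path, roughly, with simplified stand-in types (the real header also carries a checksum, omitted here); returning false signals that the caller should disconnect the peer:
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

static const uint32_t MAX_MESSAGE_SIZE = 0x02000000;  // MAX_SIZE analogue

struct MessageHeader {
    char     pchMessageStart[4];  // network magic; must match exactly
    char     command[12];
    uint32_t nMessageSize;
};

struct NetMessage {
    MessageHeader     hdr;
    std::vector<char> vData;      // sized exactly once via resize()
    uint32_t          nDataPos = 0;

    bool Complete() const { return nDataPos == hdr.nMessageSize; }
};

// Socket thread: carve the raw stream into NetMessage objects.
bool ReadHeader(NetMessage& msg, const char* magic, const char* buf, size_t len)
{
    if (len < sizeof(MessageHeader)) return true;      // wait for more bytes
    std::memcpy(&msg.hdr, buf, sizeof(MessageHeader));
    if (std::memcmp(msg.hdr.pchMessageStart, magic, 4) != 0)
        return false;                                  // no resync through garbage
    if (msg.hdr.nMessageSize > MAX_MESSAGE_SIZE)
        return false;                                  // oversized: drop the peer
    msg.vData.resize(msg.hdr.nMessageSize);            // precise, one-shot sizing
    return true;
}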
Note that the default value for fRelayTxes is false, meaning we
now no longer relay tx inv messages before receiving the remote
peer's version message.
Client (SPV) mode never got implemented entirely, and whatever part was already
working has likely not been tested (or even executed at all) for the past two
years. This removes it entirely.
If we want an SPV implementation, I think we should first get the block chain
data structures encapsulated in a class implementing a standard interface,
and then write an alternate implementation with SPV semantics.
* During block verification (when parallelism is requested), script
check actions are stored instead of being executed immediately.
* After every processed transaction, its signature check actions are
pushed to a CScriptCheckQueue, which maintains a queue and some
synchronization mechanism.
* Two or more threads (if enabled) start processing elements from
this queue.
* When the block connection code is finished processing transactions,
it joins the worker pool until the queue is empty.
As cs_main is held the entire time, and all verification must be
finished before the block continues processing, this does not reach
the best possible performance. It is a less drastic change than
some more advanced mechanisms (like doing verification out-of-band
entirely, and rolling back blocks when a failure is detected).
The -par=N flag controls the number of threads (1-16). 0 means auto,
and is the default.
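A compact sketch of the producer/consumer pattern described above; it is illustrative only (the real code uses a dedicated check-queue class), and for simplicity the workers here are spun up per block and exit once the queue is closed:
#include <atomic>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// One closure is queued per signature check; worker threads drain the queue
// while the block is still being connected, and the master thread joins in
// at the end and waits for the verdict.
class ScriptCheckQueueSketch {
public:
    typedef std::function<bool()> Check;

    void Add(Check check) {
        std::lock_guard<std::mutex> lock(m);
        queue.push_back(std::move(check));
    }

    // Worker loop: keep pulling checks until told that no more will arrive.
    void Worker() {
        for (;;) {
            Check check;
            {
                std::lock_guard<std::mutex> lock(m);
                if (!queue.empty()) {
                    check = std::move(queue.back());
                    queue.pop_back();
                }
            }
            if (check) {
                if (!check()) fAllOk = false;
            } else if (fNoMoreWork) {
                return;                       // queue drained and closed
            } else {
                std::this_thread::yield();    // wait for more checks
            }
        }
    }

    // Master path: signal end of input, help drain, then collect the result.
    bool Finish(std::vector<std::thread>& workers) {
        fNoMoreWork = true;
        Worker();
        for (std::thread& t : workers) t.join();
        return fAllOk;
    }

private:
    std::mutex m;
    std::vector<Check> queue;
    std::atomic<bool> fAllOk{true};
    std::atomic<bool> fNoMoreWork{false};
};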