Given that the default GetSerializeSize implementations created by
ADD_SERIALIZE_METHODS already use CSizeComputer(), get rid of the
specialized GetSerializeSize methods everywhere, and just use
CSizeComputer. This removes a lot of code that isn't actually used
anywhere.
For CCompactSize and CVarInt this actually removes a more efficient
size computing algorithm, which is brought back in a later commit.
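For illustration, here is a minimal, self-contained sketch (not the actual serialize.h code; the names and the Example type are made up) of the idea that a single serialization routine can drive both real serialization and size computation via a byte-counting stream:

```cpp
#include <cstddef>
#include <cstdint>

// A stream that only counts bytes instead of writing them.
class CSizeComputer
{
    size_t nSize{0};
public:
    void write(const char* /*data*/, size_t nWrite) { nSize += nWrite; }
    size_t size() const { return nSize; }
};

// Illustrative object whose single Serialize() template works with any stream,
// so no hand-written GetSerializeSize method is needed.
struct Example
{
    uint32_t nVersion{1};
    int64_t nTime{0};

    template <typename Stream>
    void Serialize(Stream& s) const
    {
        s.write(reinterpret_cast<const char*>(&nVersion), sizeof(nVersion));
        s.write(reinterpret_cast<const char*>(&nTime), sizeof(nTime));
    }
};

template <typename T>
size_t GetSerializeSize(const T& obj)
{
    CSizeComputer sc;
    obj.Serialize(sc); // "serialize" into the byte counter
    return sc.size();  // 12 for Example with this layout
}
```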
The current getblocktxn implementation drops and ignores requests for old
blocks, which causes occasional sync_block timeouts during the
p2p-compactblocks.py test as reported in
https://github.com/bitcoin/bitcoin/issues/8842.
The p2p-compactblocks.py test setup creates many new blocks in a short
period of time, which can lead to getblocktxn requests for blocks deeper
than the hardcoded depth limit of 10 blocks. This commit changes the getblocktxn
handler not to ignore these requests, so the peer nodes in the test setup
will reliably be able to sync.
The protocol change is documented in BIP-152 update "Allow block responses
to getblocktxn requests" at https://github.com/bitcoin/bips/pull/469.
The protocol change is not expected to affect nodes running outside the test
environment, because there shouldn't normally be lots of new blocks being
rapidly added that need to be synced.
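A rough sketch of the relaxed behaviour (all names here are illustrative, not the actual net_processing code):

```cpp
// Requests for blocks deeper than the limit are now answered with a full
// block instead of being silently dropped (as the BIP-152 update permits).
static const int MAX_BLOCKTXN_DEPTH = 10; // assumed depth limit

enum class GetBlockTxnResponse { BlockTransactions, FullBlock };

GetBlockTxnResponse HandleGetBlockTxn(int chainHeight, int requestedHeight)
{
    if (chainHeight - requestedHeight >= MAX_BLOCKTXN_DEPTH) {
        return GetBlockTxnResponse::FullBlock;     // old block: send it whole
    }
    return GetBlockTxnResponse::BlockTransactions; // recent block: send only the requested txs
}
```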
The changes here are dense and subtle, but hopefully all is more explicit
than before.
- CConnman is now in charge of sending data rather than the nodes themselves.
This is necessary because many decisions need to be made with all nodes in
mind, and a model that requires the nodes to call up to their manager quickly
turns into spaghetti.
- The per-node serializer (ssSend) has been replaced with a (quasi-)const
send-version. Since the send version for serialization can only change once
per connection, we now explicitly tag messages with INIT_PROTO_VERSION if
they are sent before the handshake. With this done, there's no need to lock
for access to nSendVersion (a minimal sketch follows after this list).
Also, a new stream is used for each message, so there's no need to lock
during the serialization process.
- This takes care of accounting for optimistic sends, so the
nOptimisticBytesWritten hack can be removed.
- -dropmessagestest and -fuzzmessagestest have not been preserved, as I suspect
they haven't been used in years.
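As referenced above, a minimal sketch of the send-version idea, assuming simplified names (the real code differs in detail):

```cpp
#include <atomic>

static const int INIT_PROTO_VERSION = 209; // version used before the handshake completes

class CNodeSketch
{
    std::atomic<int> nSendVersion{0}; // 0 == handshake not finished yet
public:
    // Called exactly once, when the verack is processed.
    void SetSendVersion(int nVersionIn) { nSendVersion = nVersionIn; }

    // Messages sent before the handshake are tagged with INIT_PROTO_VERSION;
    // afterwards the value never changes again, so no lock is needed to read it.
    int GetSendVersion() const
    {
        const int v = nSendVersion.load();
        return v == 0 ? INIT_PROTO_VERSION : v;
    }
};
```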
This introduces a 'minimum chain work' chainparam which is intended
to be the known amount of work in the chain for the network at the
time of software release. If you don't have this much work, you're
not yet caught up.
This is used instead of the block-count test from checkpoints.
This criterion is trivial to keep updated, as there is no element of
subjectivity, trust, or position dependence to it. It is also a more
reliable metric of sync status than a block count.
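A minimal sketch of the criterion (the real code compares 256-bit work values; plain integers are used here only to keep the example self-contained):

```cpp
#include <cstdint>

struct ChainSyncSketch
{
    uint64_t nBestChainWork;     // accumulated work of our current best chain
    uint64_t nMinimumChainWork;  // known work for the network at release time

    // If we don't have at least this much work, we're not yet caught up.
    bool IsLikelyCaughtUp() const { return nBestChainWork >= nMinimumChainWork; }
};
```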
This will result in many more calls to CheckBlockIndex when
connecting a list of headers (e.g. in ::HEADERS message processing),
but it's only enabled in debug mode, and that should mostly just
happen during IBD, so it should be OK.
UnloadBlockIndex is only used during init, if we end up reindexing,
to clear our block state so that we can start over. However, at
that time no connections have been brought up, as CConnman hasn't
been started yet, so all of the network-processing state is
empty when it's called.
Additionally, the initialization of the recentRejects set is moved
to InitPeerLogic.
This change is needed to prevent sync_blocks timeouts in the mempool_reorg
test after the sync_blocks update in the upcoming commit
"[qa] Change sync_blocks to pick smarter maxheight".
This change was initially suggested by Suhas Daftuar <sdaftuar@chaincode.com>
in https://github.com/bitcoin/bitcoin/pull/8680#r78209060
Note that this is not a major issue: for the missing lock to cause
problems, you have to receive a GETBLOCKTXN message while reindexing,
adding a block header via RPC, etc., and that concurrent operation has
to result in either a table rehash or an insert into the bucket you are
currently looking at.
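An illustrative sketch of the hazard (names are placeholders; in the real code the map access is protected by cs_main):

```cpp
#include <mutex>
#include <string>
#include <unordered_map>

std::mutex cs_main_sketch;                                // stand-in for the shared lock
std::unordered_map<std::string, int> mapBlockIndexSketch; // stand-in for the block index

// Reading an unordered_map while another thread inserts into it is a data
// race: the insert may rehash the table or modify the bucket being walked.
// Taking the same lock as the writers removes the race.
int LookupHeight(const std::string& hash)
{
    std::lock_guard<std::mutex> lock(cs_main_sketch);     // the previously missing lock
    auto it = mapBlockIndexSketch.find(hash);
    return it == mapBlockIndexSketch.end() ? -1 : it->second;
}
```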
There are only a few uses of `insecure_random` outside the tests.
This PR replaces uses of insecure_random (and its accompanying global
state) in the core code with a FastRandomContext that is automatically
seeded on creation.
This is meant to be used for inner loops. The FastRandomContext
can live in the outer scope or in the class itself, and rand32() is then
used inside the loop. This is useful e.g. for pushing addresses in CNode,
for fee rounding, or for randomization in coin selection.
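A sketch of the intended usage pattern (FastRandomContextSketch is a simplified stand-in; the real class is seeded differently and offers more methods):

```cpp
#include <cstdint>
#include <random>
#include <vector>

class FastRandomContextSketch
{
    std::mt19937 rng;
public:
    FastRandomContextSketch() : rng(std::random_device{}()) {} // seeded on creation
    uint32_t rand32() { return rng(); }
};

// One context per purpose, created in the outer scope and used inside the loop.
std::vector<uint32_t> JitterValues(const std::vector<uint32_t>& values)
{
    FastRandomContextSketch ctx;
    std::vector<uint32_t> out;
    out.reserve(values.size());
    for (uint32_t v : values) {
        out.push_back(v ^ (ctx.rand32() & 0xff)); // e.g. small per-item randomization
    }
    return out;
}
```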
As a context is created per purpose, this gets rid of unprotected
cross-thread shared usage of a single set of globals, which should also
get rid of the potential race conditions.
- I'd say TxMempool::check is not called enough to warrant using a special
fast random context, so this is switched to GetRand() (open for
discussion...)
- The use of `insecure_rand` in ConnectThroughProxy has been replaced by
an atomic integer counter. The only goal here is to have a different
credentials pair for each connection so that each goes on a different Tor
circuit; it does not need to be random or unpredictable (see the sketch below).
- To avoid having a FastRandomContext on every CNode, the context is
passed into PushAddress as appropriate.
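As referenced in the ConnectThroughProxy bullet above, a hedged sketch of the counter-based credentials (names are illustrative):

```cpp
#include <atomic>
#include <string>
#include <utility>

static std::atomic<unsigned int> g_proxy_credential_counter{0};

// Each connection gets a distinct username/password pair; Tor isolates
// streams with different SOCKS credentials onto different circuits, so
// uniqueness is all that matters here, not randomness or unpredictability.
std::pair<std::string, std::string> NextProxyCredentials()
{
    const unsigned int n = g_proxy_credential_counter++;
    return {"user" + std::to_string(n), "pass" + std::to_string(n)};
}
```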
There remains an insecure_random for test usage in `test_random.h`.
This adds a new CValidationInterface subclass, defined in main.h,
to receive notifications of UpdatedBlockTip and use that to push
blocks to peers, instead of doing it directly from
ActivateBestChain.
This is in anticipation of making all the callbacks out of block processing
flow through it. Note that vHashes will always have something in it,
since pindexFork != pindexNewTip.
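An illustrative sketch of the pattern (types here are simplified stand-ins, not the actual main.h declarations):

```cpp
#include <vector>

struct BlockIndexSketch
{
    const BlockIndexSketch* pprev{nullptr}; // previous block in the chain
};

struct ValidationInterfaceSketch
{
    virtual ~ValidationInterfaceSketch() = default;
    virtual void UpdatedBlockTip(const BlockIndexSketch* pindexNew,
                                 const BlockIndexSketch* pindexFork) {}
};

// The peer-logic subclass receives the notification and does the announcing,
// instead of ActivateBestChain pushing blocks to peers directly.
struct PeerLogicSketch : ValidationInterfaceSketch
{
    void UpdatedBlockTip(const BlockIndexSketch* pindexNew,
                         const BlockIndexSketch* pindexFork) override
    {
        // Collect everything between the fork point and the new tip; this is
        // non-empty whenever pindexFork != pindexNew.
        std::vector<const BlockIndexSketch*> vToAnnounce;
        for (const BlockIndexSketch* p = pindexNew; p != nullptr && p != pindexFork; p = p->pprev) {
            vToAnnounce.push_back(p);
        }
        (void)vToAnnounce; // ...announce these to peers (omitted)...
    }
};
```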
This fixes a bug where we might (in exceedingly rare circumstances)
accidentally ban a node for sending us the first (potentially few)
segwit blocks in non-segwit mode.
75ead758 turned these into crashes in the event of a handshake failure, most
notably when a peer does not offer the expected services.
There are likely other cases that these assertions will find as well.
In principle, the checksums of P2P packets are simply 4-byte blobs which
are the first four bytes of SHA256(SHA256(payload)).
Currently they are handled as little-endian 32-bit integers half of the
time and as blobs the other half, sometimes copying one to the other,
resulting in somewhat confused code.
This PR changes the handling to be consistent both at packet creation
and receiving, making it (I think) easier to understand.
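A small sketch of the "checksum as blob" handling (the 32-byte double-SHA256 of the payload is assumed to be computed elsewhere; names are illustrative):

```cpp
#include <array>
#include <cstdint>
#include <cstring>

using Checksum = std::array<uint8_t, 4>;

// The checksum is simply the first four bytes of SHA256(SHA256(payload)),
// copied and compared as raw bytes, never reinterpreted as an integer.
Checksum PacketChecksum(const std::array<uint8_t, 32>& payloadHash)
{
    Checksum checksum;
    std::memcpy(checksum.data(), payloadHash.data(), checksum.size());
    return checksum;
}

bool ChecksumMatches(const Checksum& received, const std::array<uint8_t, 32>& payloadHash)
{
    return received == PacketChecksum(payloadHash);
}
```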
An example of where this might be useful is allowing a node to connect in blocksonly mode during IBD and then become a full node once it has caught up with the latest block. This might even be desirable as the default behaviour, since during IBD most TXs appear to be orphans and are routinely dropped (for example when a node disconnects), so relaying them can waste a lot of bandwidth.
Additionally, another pull could be written to stop relaying TXs to nodes that are clearly far behind the latest block and are running a node that doesn't store many orphan TXs, such as recent versions of Bitcoin Core.