Move to AppInitServers. This doesn't have any effect on bitcoin behavior. It
was just strange to have this unrelated code in the middle of parameter
interaction.
If our tip hasn't updated in a while, that may be because our peers are
not relaying blocks to us that we would consider valid. Allow connection
to an additional outbound peer in that circumstance.
Also, periodically check to see if we are exceeding our target number of
outbound peers, and disconnect the one which has least recently
announced a new block to us (choosing the newest such peer in the case
of a tie).
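The selection rule above can be sketched as follows; the struct fields and
function name are illustrative stand-ins, not the actual net_processing state:
```cpp
#include <cstdint>
#include <vector>

// Illustrative per-peer state (not the real CNodeState fields).
struct PeerInfo {
    int64_t m_last_block_announcement = 0; // when the peer last announced a new block to us
    int64_t m_connected_time = 0;          // when the connection was established
    bool m_is_outbound = false;
};

// Pick the outbound peer to evict: the one that least recently announced a
// new block to us, preferring the newest connection in the case of a tie.
const PeerInfo* SelectOutboundPeerToEvict(const std::vector<PeerInfo>& peers)
{
    const PeerInfo* worst = nullptr;
    for (const auto& p : peers) {
        if (!p.m_is_outbound) continue;
        if (worst == nullptr ||
            p.m_last_block_announcement < worst->m_last_block_announcement ||
            (p.m_last_block_announcement == worst->m_last_block_announcement &&
             p.m_connected_time > worst->m_connected_time)) {
            worst = &p;
        }
    }
    return worst; // nullptr if there are no outbound peers to consider
}
```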
A rare race condition may trigger while awaiting the body of a message, see
upstream commit 5ff8eb26371c4dc56f384b2de35bea2d87814779 for details.
This may fix some reported rpc hangs/crashes.
This tracks the set of all known invalid-themselves blocks (i.e.
blocks which we attempted to connect but which were found to be
invalid). This is used to cheaply check if new headers build on an
invalid chain.
While we're at it we also resolve an edge-case in invalidateblock
on pruned nodes which results in them needing a reindex if they
fail to reorg.
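A simplified illustration of how such a set keeps the check cheap; the types
below are stand-ins for the real CBlockIndex-based code, and the set would be
guarded by cs_main:
```cpp
#include <set>

// Simplified stand-in for a block index entry (the real type is CBlockIndex).
struct BlockIndexEntry {
    BlockIndexEntry* pprev = nullptr; // previous block in the chain
    int nHeight = 0;
};

// Blocks we attempted to connect and found invalid.
std::set<BlockIndexEntry*> g_failed_blocks;

// Does a new header build on a chain containing a known-invalid block?
// Cheap relative to revalidation: the failed set is expected to be small,
// and each ancestry walk stops at the failed block's height.
bool BuildsOnInvalidChain(const BlockIndexEntry* pindexPrev)
{
    for (const BlockIndexEntry* failed : g_failed_blocks) {
        const BlockIndexEntry* walk = pindexPrev;
        while (walk && walk->nHeight >= failed->nHeight) {
            if (walk == failed) return true;
            walk = walk->pprev;
        }
    }
    return false;
}
```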
This is a simple change that makes our accept requirements the
same as our request requirements, (ever so slightly) further
decoupling our consensus logic from our FindNextBlocksToDownload
logic in net_processing.
There is no reason to wish to store blocks on disk always just
because a peer is whitelisted. This appears to be a historical
quirk to avoid breaking things when the accept limits were added.
Nowhere else in the protocol do we send headers which are for
blocks we have not fully validated except in response to getheaders
messages with a null locator. On my public node I have not seen any
such request (whether for an invalid block or not) in at least two
years of debug.log output, indicating that this should have minimal
impact.
Reading the variable mapBlockIndex requires holding the mutex cs_main.
The new "Disconnect outbound peers relaying invalid headers" code
added in commit 37886d5e2f and merged
as part of #11568 two days ago did not lock cs_main prior to accessing
mapBlockIndex.
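A minimal illustration of the required pattern, using a plain std::lock_guard
where Bitcoin Core uses its LOCK(cs_main) macro and a simplified map in place
of the real mapBlockIndex:
```cpp
#include <map>
#include <mutex>
#include <string>

std::mutex cs_main;                        // guards chain state
std::map<std::string, int> mapBlockIndex;  // simplified: block hash -> height

// Take cs_main for the duration of the mapBlockIndex lookup; reading the map
// without the lock is a data race with the validation thread.
bool HaveHeader(const std::string& hash)
{
    std::lock_guard<std::mutex> lock(cs_main); // LOCK(cs_main) in Bitcoin Core
    return mapBlockIndex.count(hash) > 0;
}
```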
Warnings prior to this commit:
```
addrman.cpp:390:24: warning: comparison of integers of different signs: 'size_type' (aka 'unsigned long') and 'int' [-Wsign-compare]
if (vRandom.size() != nTried + nNew)
~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~
addrman.cpp:411:52: warning: comparison of integers of different signs: 'int' and 'size_type' (aka 'unsigned long') [-Wsign-compare]
if (info.nRandomPos < 0 || info.nRandomPos >= vRandom.size() || vRandom[info.nRandomPos] != n)
~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
addrman.cpp:419:25: warning: comparison of integers of different signs: 'size_type' (aka 'unsigned long') and 'int' [-Wsign-compare]
if (setTried.size() != nTried)
~~~~~~~~~~~~~~~ ^ ~~~~~~
addrman.cpp:421:23: warning: comparison of integers of different signs: 'size_type' (aka 'unsigned long') and 'int' [-Wsign-compare]
if (mapNew.size() != nNew)
~~~~~~~~~~~~~ ^ ~~~~
4 warnings generated.
```
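One conventional way to silence warnings like these, shown as a sketch rather
than the exact change made, is to compare like-signed types by casting the
signed counters once they are known to be non-negative:
```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative mirrors of addrman's signed counters and random-position vector.
int nTried = 0;
int nNew = 0;
std::vector<int> vRandom;

void CheckCounts()
{
    assert(nTried >= 0 && nNew >= 0);
    // Cast the known-non-negative sum so both sides of the comparison are unsigned.
    if (vRandom.size() != static_cast<std::size_t>(nTried + nNew)) {
        // ... report corruption ...
    }
}
```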
Currently we have no rotation of outbound peers. If an outbound peer
stops serving us blocks, or is on a consensus-incompatible chain with
less work than our tip (but otherwise valid headers), then we will never
disconnect that peer, even though that peer is using one of our 8
outbound connection slots. Because we rely on our outbound peers to
find an honest node in order to reach consensus, allowing an
incompatible peer to occupy one of those slots is undesirable,
particularly if it is possible for all such slots to be occupied by such
peers.
Protect against this by always checking to see if a peer's best known
block has less work than our tip, and if so, set a 20 minute timeout --
if the peer is still not known to have caught up to a chain with as much
work as ours after 20 minutes, then send a single getheaders message,
wait 2 more minutes, and if a better header hasn't been received by then,
disconnect that peer.
Note:
- we do not require that our peer sync to the same tip as ours, just an
equal or greater work tip. (Doing otherwise would risk partitioning the
network in the event of a chain split, and is also unnecessary.)
- we pick 4 of our outbound peers and do not subject them to this logic,
to be more conservative. We don't wish to permit temporary network
issues (or an attacker) to excessively disrupt network topology.
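A condensed sketch of the timeout machinery described above; the constants and
field names are illustrative rather than the exact net_processing identifiers:
```cpp
#include <cstdint>

static const int64_t CHAIN_SYNC_TIMEOUT = 20 * 60;    // seconds before we nudge with getheaders
static const int64_t HEADERS_RESPONSE_TIME = 2 * 60;  // extra seconds allowed after the nudge

struct ChainSyncTimeoutState {
    int64_t m_timeout = 0;          // deadline; 0 means the peer currently looks fine
    bool m_sent_getheaders = false; // whether the single nudge getheaders was sent
    bool m_protect = false;         // one of the peers exempted from this logic
};

// Returns true if the peer should be disconnected. `peer_has_enough_work` is
// true when the peer's best known block has at least as much work as our tip.
bool ConsiderEviction(ChainSyncTimeoutState& state, bool peer_has_enough_work,
                      int64_t now, bool& send_getheaders)
{
    send_getheaders = false;
    if (state.m_protect) return false;

    if (peer_has_enough_work) {
        state.m_timeout = 0;              // peer caught up; reset the clock
        state.m_sent_getheaders = false;
        return false;
    }
    if (state.m_timeout == 0) {
        state.m_timeout = now + CHAIN_SYNC_TIMEOUT;        // start the 20 minute clock
    } else if (now > state.m_timeout) {
        if (!state.m_sent_getheaders) {
            send_getheaders = true;                        // send one getheaders
            state.m_sent_getheaders = true;
            state.m_timeout = now + HEADERS_RESPONSE_TIME; // and wait 2 more minutes
        } else {
            return true;                                   // still no better header: disconnect
        }
    }
    return false;
}
```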
Change suggested by Cory Fields <cory-nospam-@coryfields.com> who noticed
listsinceblock would ignore invalid block hashes, causing it to return a
completely unfiltered list of transactions.
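A minimal sketch of the fixed behavior: an unknown blockhash argument becomes
an explicit error instead of silently disabling the filter (types and error
handling are simplified stand-ins for the RPC code):
```cpp
#include <map>
#include <stdexcept>
#include <string>

std::map<std::string, int> mapBlockIndex; // simplified: block hash -> height

// Resolve the listsinceblock "blockhash" argument to a height to filter by.
int ResolveSinceBlock(const std::string& blockhash)
{
    auto it = mapBlockIndex.find(blockhash);
    if (it == mapBlockIndex.end()) {
        // Previously an unknown hash was silently ignored, so every wallet
        // transaction was returned; now it is reported as an error.
        throw std::runtime_error("Block not found");
    }
    return it->second;
}
```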
A peer could try to waste our resources by sending us unrequested blocks with
low work, e.g. to fill up our disk. Since
e2652002b6 we no longer request blocks until we
know we're on a chain with more than nMinimumChainWork (our anti-DoS
threshold), but we would still process unrequested blocks that had more work
than our tip. This commit fixes that behavior.
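The acceptance condition this implies for unrequested blocks, sketched with a
simplified stand-in for chain work (the real code compares arith_uint256
nChainWork values against nMinimumChainWork and the tip):
```cpp
#include <cstdint>

struct ChainWork { uint64_t work = 0; }; // stand-in for arith_uint256 nChainWork

// Process an unrequested block only if its chain both clears the anti-DoS
// minimum-work threshold and has more work than our current tip.
bool ShouldProcessUnrequestedBlock(const ChainWork& block_chain_work,
                                   const ChainWork& tip_chain_work,
                                   const ChainWork& min_chain_work)
{
    if (block_chain_work.work < min_chain_work.work) return false;  // below anti-DoS threshold
    if (block_chain_work.work <= tip_chain_work.work) return false; // not more work than our tip
    return true;
}
```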
Make sure wallet databases have unique fileids. If they don't, throw an error.
BDB caches do not work properly when more than one open database has the same
fileid, because values written to one database may show up in reads to other
databases.
Bitcoin will never create different databases with the same fileid, but users
can create them by manually copying database files.
BDB caching bug was reported by Chris Moore <dooglus@gmail.com>
Fixes #11429 (https://github.com/bitcoin/bitcoin/issues/11429).
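A sketch of such a uniqueness check against the Berkeley DB C++ API; the
registry of already-open fileids is illustrative, while get_mpf()->get_fileid()
is the BDB call that reads a database's fileid:
```cpp
#include <db_cxx.h>   // Berkeley DB C++ API
#include <cstring>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative registry of fileids for wallet databases already open in this environment.
std::map<std::string, std::vector<u_int8_t>> g_open_fileids;

// Throw if `db` (already opened) shares a fileid with another open database.
void CheckUniqueFileid(const std::string& filename, Db& db)
{
    u_int8_t fileid[DB_FILE_ID_LEN];
    if (db.get_mpf()->get_fileid(fileid) != 0) {
        throw std::runtime_error("Can't read fileid for " + filename);
    }
    for (const auto& entry : g_open_fileids) {
        if (entry.first != filename &&
            std::memcmp(entry.second.data(), fileid, DB_FILE_ID_LEN) == 0) {
            throw std::runtime_error("Duplicate BDB fileid for " + filename +
                                     " and " + entry.first);
        }
    }
    g_open_fileids[filename].assign(fileid, fileid + DB_FILE_ID_LEN);
}
```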
Now using a std::unique_ptr, the Db instance is correctly released
when CDB initialization fails.
The internal CDB state and mapFileUseCount are only mutated when
the CDB initialization succeeds.
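A sketch of the ownership pattern being described, with illustrative names
rather than the actual CDB members; the point is that the Db handle stays owned
by a std::unique_ptr until every step of initialization has succeeded:
```cpp
#include <db_cxx.h>
#include <memory>
#include <stdexcept>
#include <string>

void OpenWalletDb(DbEnv& env, const std::string& filename)
{
    // Owned by a unique_ptr, so any throw below releases the Db instance.
    std::unique_ptr<Db> pdb_temp(new Db(&env, 0));

    int ret = pdb_temp->open(nullptr /* txn */, filename.c_str(), "main",
                             DB_BTREE, DB_CREATE, 0);
    if (ret != 0) {
        throw std::runtime_error("Cannot open database file " + filename);
    }

    // Only after a successful open does ownership move into long-lived state
    // (a raw pointer member plus mapFileUseCount bookkeeping in the real code).
    Db* pdb = pdb_temp.release();
    (void)pdb;
}
```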
Note that UpdatedBlockTip is also used in net_processing to
announce new blocks to peers. As this may need additional review,
this change is included in its own commit.