In spite of its name, FindLatestBefore used std::lower_bound to try
to find the earliest block with an nTime greater than or equal to
the requested value. But lower_bound uses bisection and requires
the input to be ordered with respect to the comparison operation,
and block times are not well ordered.
Formally, lower_bound's behavior is undefined when the data is not
sufficiently ordered, and in practice the result is unlikely to be
good (one could construct a conforming implementation which would
infinite loop...).
To resolve the issue this commit introduces a maximum-so-far to the
block indexes and searches that.
For clarity the function is renamed to reflect what it actually does.
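For illustration, a minimal sketch of the approach; the names used
below (BlockIndex, nTimeMax, FindEarliestAtLeast) are illustrative
and need not match the code exactly:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct BlockIndex {
        int64_t nTime;     // header time, not monotonic along the chain
        int64_t nTimeMax;  // max nTime over this block and all ancestors
    };

    // When a block index is connected to its parent:
    //   pindex->nTimeMax = pprev ? std::max(pprev->nTimeMax, pindex->nTime)
    //                            : pindex->nTime;

    struct Chain {
        std::vector<BlockIndex*> vChain;

        // Earliest block whose running-maximum time is >= nTime.  Since
        // nTimeMax is non-decreasing by construction, the range is
        // properly partitioned and lower_bound's bisection is valid.
        BlockIndex* FindEarliestAtLeast(int64_t nTime) const {
            auto it = std::lower_bound(vChain.begin(), vChain.end(), nTime,
                [](const BlockIndex* pindex, int64_t time) {
                    return pindex->nTimeMax < time;
                });
            return it == vChain.end() ? nullptr : *it;
        }
    };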
An issue that remains is that there is no grace period in importmulti:
If an address is created at time T and a send is immediately broadcast
and included by a miner with a slow clock, there may not yet have been
any block with a time of at least T.
The normal rescan has a grace period of 7200 seconds, but importmulti
does not.
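Purely to illustrate what such a grace period would look like, a
hypothetical sketch (TIMESTAMP_WINDOW and RescanStartTime are made-up
names, not part of this change):

    #include <cstdint>

    // Match the 7200 second margin used by the normal rescan: start
    // scanning a little before the key's timestamp so a block stamped
    // by a miner with a slow clock is still covered.
    static const int64_t TIMESTAMP_WINDOW = 7200;

    int64_t RescanStartTime(int64_t key_birth_time) {
        return key_birth_time - TIMESTAMP_WINDOW;
    }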
There is still a call to ActivateBestChain with cs_main if a peer
requests the block before it has been validated, but that call is
more narrowly gated, so it should be less of an issue.
AcceptToMemoryPool has several classes of return false statements:
- return state.Invalid or state.DoS directly itself
- return false and set fMissingInputs (state is valid)
- return false with state set by a failed CheckTransaction
- return false with state set by a failed CheckInputs
This commit patches the last case, where the state variable was
reused for additional calls to CheckInputs in order to identify
witness stripping as the cause of the validation failure. After this
commit it should be the case that, if !fMissingInputs, state is
always Invalid when AcceptToMemoryPool returns false.
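The shape of the fix, with stand-in types; the real CheckInputs,
CValidationState and script-verification flags are simplified here:

    #include <cstdint>

    struct ValidationState { bool invalid = false; };
    struct Tx {};

    enum : uint32_t {
        VERIFY_WITNESS    = 1u << 0,
        VERIFY_CLEANSTACK = 1u << 1,
        STANDARD_FLAGS    = VERIFY_WITNESS | VERIFY_CLEANSTACK,
    };

    // Stand-in: pretend the inputs only verify under relaxed flags.
    bool CheckInputsStub(const Tx&, ValidationState& state, uint32_t flags) {
        if (flags & VERIFY_WITNESS) { state.invalid = true; return false; }
        return true;
    }

    bool AcceptSketch(const Tx& tx, ValidationState& state) {
        if (!CheckInputsStub(tx, state, STANDARD_FLAGS)) {
            // Probe again without witness-related flags, but in a scratch
            // state, to classify the failure as witness stripping without
            // clobbering the Invalid state the caller will inspect.
            ValidationState scratch;
            bool stripped = CheckInputsStub(tx, scratch,
                STANDARD_FLAGS & ~uint32_t(VERIFY_WITNESS | VERIFY_CLEANSTACK));
            (void)stripped; // used only to choose the rejection reason
            return false;   // 'state' is still the original failure here
        }
        return true;
    }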
On repeated calls to SelectCoins we try to meet the fee required by
the previous candidate transaction; the fee now required might be
smaller, so increase our change by the difference when we can.
Once we've picked coins and dummy-signed the transaction to
calculate the fee, if we don't have sufficient fee, try to meet it
by reducing change before resorting to picking new coins.
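A simplified sketch of that adjustment, with made-up names and
amounts in satoshis (MIN_CHANGE is just an illustrative threshold,
not the wallet's actual constant):

    #include <cstdint>

    using CAmount = int64_t;
    static const CAmount MIN_CHANGE = 10000;

    // Returns true if the fee could be met by adjusting change alone.
    bool AdjustChangeForFee(CAmount& change, CAmount& fee_paid,
                            CAmount fee_needed)
    {
        if (fee_paid >= fee_needed) {
            // We previously reserved too much fee: return the excess
            // to the change output.
            change += fee_paid - fee_needed;
            fee_paid = fee_needed;
            return true;
        }
        // Short on fee: move value from change into the fee as long as
        // the remaining change stays above the minimum worth keeping.
        CAmount shortfall = fee_needed - fee_paid;
        if (change >= shortfall + MIN_CHANGE) {
            change -= shortfall;
            fee_paid = fee_needed;
            return true;
        }
        return false; // caller must go back and select more coins
    }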
Previously addnodes were in competition with outbound connections
for access to the eight outbound slots.
One result of this is that frequently a node with several
addnode-configured peers would end up connected to none of them,
because while the addnode loop was in its two-minute sleep the
automatic connection logic would fill any free slots with random
peers.
This is particularly unwelcome to users trying to maintain links
to specific nodes for fast block relay or other purposes.
Another result is that a group of nine or more nodes which all
have addnode configured towards each other can become partitioned
from the public network.
This commit introduces a new limit of eight connections just for
addnode peers which is not subject to any of the other connection
limitations (including maxconnections).
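Conceptually the change gives addnode connections their own slot
pool. A sketch using C++20's std::counting_semaphore in place of the
project's own semaphore class; the constant and function names below
are illustrative:

    #include <semaphore>

    static const int MAX_OUTBOUND_CONNECTIONS = 8;
    static const int MAX_ADDNODE_CONNECTIONS  = 8;

    // Automatic outbound connections draw slots from this pool...
    std::counting_semaphore<8> semOutbound{MAX_OUTBOUND_CONNECTIONS};
    // ...while addnode connections have their own pool, so the two can
    // never starve each other and maxconnections does not apply here.
    std::counting_semaphore<8> semAddnode{MAX_ADDNODE_CONNECTIONS};

    void OpenConnectionSketch(bool from_addnode) {
        auto& sem = from_addnode ? semAddnode : semOutbound;
        sem.acquire();  // wait for a free slot of the right kind
        // ... connect; release the slot when the connection is closed ...
        sem.release();
    }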
The choice of eight is sufficient so that under no condition would
a user find themselves connected to fewer addnoded peers than
previously. It is also low enough that users who are confused
about the significance of more connections and have gotten too
copy-and-paste happy will not consume more than twice the slot
usage of a typical user.
Any additional load on the network resulting from this will likely
be offset by a reduction in users applying even more wasteful
workarounds for the prior behavior.
The retry delays are reduced to avoid nodes sitting around without
their added peers up, but are still sufficient to prevent overly
aggressive repeated connections. The reduced delays also make
the system much more responsive to the addnode RPC.
Ban-disconnects are also exempted for peers added via addnode since
the outbound addnode logic ignores bans. Previously it would ban
an addnode then immediately reconnect to it.
A minor change was also made to CSemaphoreGrant so that it is
possible to re-acquire via an object whose grant was moved.
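A stand-in sketch of that grant behavior; the real CSemaphoreGrant
wraps the project's own semaphore class rather than
std::counting_semaphore:

    #include <semaphore>

    using Sem = std::counting_semaphore<8>;

    class SemaphoreGrant {
        Sem* sem = nullptr;
        bool have_grant = false;
    public:
        void Acquire(Sem& s) {
            if (have_grant) return;  // already holding a slot
            s.acquire();
            sem = &s;
            have_grant = true;
        }
        void Release() {
            if (!have_grant) return;
            sem->release();
            have_grant = false;
        }
        void MoveTo(SemaphoreGrant& other) {
            other.Release();
            other.sem = sem;
            other.have_grant = have_grant;
            // Clearing our own state is what lets this object be used to
            // Acquire() a fresh grant after its old grant was moved away.
            have_grant = false;
            sem = nullptr;
        }
        ~SemaphoreGrant() { Release(); }
    };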
Performing signing in the inner loop has terrible performance
when many passes through the loop are needed to complete the
selection. Signing before the algorithm is complete also gets in
the way of correctly setting the fee (e.g. avoiding over-payment
when the fee required goes down on the final selection).
Use of the dummy might overpay on the signatures by a couple bytes
in uncommon cases where the signatures' DER encoding is smaller
than the dummy: Who cares?
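Roughly, the idea is to size the transaction with fixed-size
placeholder signatures during selection and sign for real only once
at the end. The constants below are approximations for illustration
only:

    #include <cstddef>

    static const size_t DUMMY_SIG_SIZE = 72;  // worst-case DER signature
    static const size_t PUBKEY_SIZE    = 33;  // compressed public key
    static const size_t INPUT_OVERHEAD = 41;  // outpoint + script len + sequence (approx.)
    static const size_t TX_OVERHEAD    = 10;  // version + locktime + counts (approx.)

    // Estimated size of a fully signed transaction; real signatures are
    // occasionally a byte or two shorter than the dummy, which makes the
    // fee estimate very slightly pessimistic.
    size_t EstimateTxSize(size_t num_inputs, size_t outputs_size) {
        return TX_OVERHEAD
             + num_inputs * (INPUT_OVERHEAD + DUMMY_SIG_SIZE + PUBKEY_SIZE)
             + outputs_size;
    }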