Previously, peers advertising a protocol version lower than NO_BLOOM_VERSION
would not be disconnected for sending a filter command, regardless of the
peerbloomfilters option.
Many node operators do not wish to provide expensive bloom filtering for SPV
clients; previously they had to cherry-pick the commit that enabled the
disconnect logic.
The default should remain false until a sufficient percentage of SPV clients
have updated.
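
A minimal sketch of the resulting disconnect decision follows. The Peer
struct, HandleFilterMessage, and the fEnforceDisconnect flag for the new
opt-in behaviour are illustrative stand-ins, not the real net-processing
code:

```cpp
#include <string>

// Protocol version from which NODE_BLOOM awareness is assumed.
static const int NO_BLOOM_VERSION = 70011;

struct Peer {
    int nVersion = 0;
    bool fDisconnect = false;
};

// fPeerBloomFilters: the peerbloomfilters setting.
// fEnforceDisconnect: the new opt-in option described above (default false).
void HandleFilterMessage(Peer& peer, const std::string& cmd,
                         bool fPeerBloomFilters, bool fEnforceDisconnect)
{
    if (fPeerBloomFilters) return; // filtering is offered; nothing to do
    if (cmd != "filterload" && cmd != "filteradd" && cmd != "filterclear") return;

    if (peer.nVersion >= NO_BLOOM_VERSION) {
        // New enough to know about NODE_BLOOM: always disconnect.
        peer.fDisconnect = true;
    } else if (fEnforceDisconnect) {
        // Older SPV clients are only disconnected when the operator opts
        // in, until a sufficient percentage of clients have updated.
        peer.fDisconnect = true;
    }
}
```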
1) Chainparams: Explicit CChainParams arg for main:
-AcceptBlock
-AcceptBlockHeader
-ActivateBestChain
-ConnectTip
-InitBlockIndex
-LoadExternalBlockFile
-VerifyDB parametric constructor
2) Also pick up more Params() calls in main.cpp
3) Pass nPruneAfterHeight explicitly to new FindFilesToPrune() in main.cpp
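
As a rough sketch of the pattern (hypothetical, abbreviated signatures
rather than the real ones), the refactor moves the dependency from an
implicit global lookup into the argument list:

```cpp
// Hypothetical, simplified types: the real CChainParams and signatures
// carry far more state.
struct CChainParams { unsigned long long nPruneAfterHeight = 100000; };
const CChainParams& Params() { static CChainParams p; return p; }

// Before: the chain-parameter dependency is hidden inside the body.
void FindFilesToPrune_old() {
    unsigned long long h = Params().nPruneAfterHeight;
    (void)h; // prune block files above height h ...
}

// After: the dependency is explicit, so callers (and tests) control it.
void FindFilesToPrune(unsigned long long nPruneAfterHeight) {
    (void)nPruneAfterHeight; // prune block files above this height ...
}
```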
The setAskFor duplicate elimination was too eager and removed entries
while we still had no getdata response, allowing the peer to keep
INVing without ever responding.
mapAlreadyAskedFor does not keep track of which peer has a request queued for a
particular tx. As a result, a peer can blind a node to a tx indefinitely by
sending many invs for the same tx, and then never replying to getdatas for it.
Each inv received will be placed 2 minutes farther back in mapAlreadyAskedFor,
so a short message containing 10 invs would render that tx unavailable for 20
minutes.
This is fixed by disallowing a peer from having more than one entry for a
particular inv in mapAlreadyAskedFor at a time.
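
A simplified model of the fix, with stand-in types (the real code uses
uint256, CInv, and a size-limited mapAlreadyAskedFor); the point is the
early return when the peer already has an entry queued for the inv:

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <set>
#include <string>

using uint256 = std::string; // stand-in for the real uint256

struct CInv { uint256 hash; };

struct NodeSketch {
    std::set<uint256> setAskFor;            // invs already queued for this peer
    std::multimap<int64_t, CInv> mapAskFor; // requests ordered by earliest send time
};

std::map<uint256, int64_t> mapAlreadyAskedFor; // global request schedule

void AskFor(NodeSketch& node, const CInv& inv, int64_t nNow)
{
    // The fix: at most one outstanding entry per inv per peer, so a peer
    // cannot push the request time ever further into the future by
    // repeating the same inv.
    if (!node.setAskFor.insert(inv.hash).second) return;

    int64_t& nRequestTime = mapAlreadyAskedFor[inv.hash];
    // Each retry is scheduled 2 minutes after the last, giving another
    // peer a chance if the first never responds to getdata.
    nRequestTime = std::max(nRequestTime + 2 * 60 * 1000000LL, nNow);
    node.mapAskFor.insert(std::make_pair(nRequestTime, inv));
}
```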
Previously, in blocks-only mode, all inv messages with type != MSG_BLOCK were
rejected without regard for whitelisting or whitelistalwaysrelay.
As a result, whitelisted peers would never send the transaction, even though
it would have been processed had it been received.
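
The corrected decision can be sketched as follows, with hypothetical
helper and flag names standing in for the real message-handling code:

```cpp
enum InvType { MSG_TX, MSG_BLOCK };

struct PeerSketch {
    bool fWhitelisted = false;
};

// Returns true if a transaction inv from this peer should be processed.
bool ShouldAcceptTxInv(const PeerSketch& peer, InvType type,
                       bool fBlocksOnly, bool fWhitelistAlwaysRelay)
{
    if (type == MSG_BLOCK) return true;  // block invs are always accepted
    if (!fBlocksOnly) return true;       // normal mode: accept tx invs
    // Previously this returned false unconditionally; now whitelisted
    // peers may still relay transactions when whitelistalwaysrelay is set.
    return peer.fWhitelisted && fWhitelistAlwaysRelay;
}
```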
Compute the value of inputs that are already in the chain at the time of mempool entry and only increase priority due to aging for those inputs. This effectively changes the CTxMemPoolEntry's GetPriority calculation from an upper bound to a lower bound.
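
A sketch of the resulting calculation, using simplified member names on a
stand-in for CTxMemPoolEntry:

```cpp
#include <cstddef>

struct MemPoolEntrySketch {
    double entryPriority;       // priority at the time of mempool entry
    unsigned int entryHeight;   // chain height at entry
    double inChainInputValue;   // value of inputs already in the chain at entry
    size_t nModSize;            // modified tx size used for priority

    // Only inputs that were confirmed at entry time accrue age; inputs
    // that confirm later are ignored, making this a lower bound rather
    // than the previous upper bound.
    double GetPriority(unsigned int currentHeight) const {
        double deltaPriority =
            (double)(currentHeight - entryHeight) * inChainInputValue / nModSize;
        return entryPriority + deltaPriority;
    }
};
```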
It's possible for coins with the same hash to exist when a duplicate coinbase is created, so previously we read from the database to make sure the old coins were cached; that way, if the new ones were spent, the old ones would be spent as well. This pull instead marks the new coins as not fresh if they are from a coinbase, so if they are spent they will be written all the way down to the database anyway, overwriting any duplicates.
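
A simplified sketch of the flag decision, with stand-in types (the real
method lives on CCoinsViewCache and the details differ):

```cpp
#include <map>
#include <string>

using uint256 = std::string; // stand-in for the real uint256

struct CoinsCacheEntrySketch {
    enum Flags { DIRTY = 1, FRESH = 2 };
    unsigned char flags = 0;
};

std::map<uint256, CoinsCacheEntrySketch> cacheCoins;

CoinsCacheEntrySketch& ModifyNewCoins(const uint256& txid, bool coinbase)
{
    CoinsCacheEntrySketch& entry = cacheCoins[txid];
    entry.flags = CoinsCacheEntrySketch::DIRTY;
    if (!coinbase) {
        // Safe to mark FRESH: no older version of these coins can exist
        // below this cache, so a spend can simply drop the entry.
        entry.flags |= CoinsCacheEntrySketch::FRESH;
    }
    // Coinbase coins stay non-FRESH: if spent, the spend is flushed all
    // the way down to the database, overwriting any duplicate coins with
    // the same hash.
    return entry;
}
```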
Previously, flushing a parent cache while a child cache still referred to it would break the child's view. This change allows parent caches to be flushed.
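
As an illustration only, here is a toy two-level write-back cache showing
the layering this makes safe; none of these types correspond to the real
CCoinsViewCache internals:

```cpp
#include <map>
#include <string>

struct ToyView {
    virtual bool Get(const std::string& k, int& v) const = 0;
    virtual void Write(const std::map<std::string, int>& batch) = 0;
    virtual ~ToyView() = default;
};

struct ToyDB : ToyView {
    std::map<std::string, int> data;
    bool Get(const std::string& k, int& v) const override {
        auto it = data.find(k);
        if (it == data.end()) return false;
        v = it->second;
        return true;
    }
    void Write(const std::map<std::string, int>& batch) override {
        for (const auto& kv : batch) data[kv.first] = kv.second;
    }
};

struct ToyCache : ToyView {
    ToyView* base;
    std::map<std::string, int> dirty;
    explicit ToyCache(ToyView* b) : base(b) {}
    bool Get(const std::string& k, int& v) const override {
        auto it = dirty.find(k);
        if (it != dirty.end()) { v = it->second; return true; }
        return base->Get(k, v);
    }
    void Write(const std::map<std::string, int>& batch) override {
        for (const auto& kv : batch) dirty[kv.first] = kv.second;
    }
    // Push dirty entries down to the backing view; with this change the
    // analogous flush is safe even while a child cache is layered on top.
    void Flush() { base->Write(dirty); dirty.clear(); }
};

int main() {
    ToyDB db;
    ToyCache parent(&db);
    ToyCache child(&parent);
    child.dirty["coin"] = 1;
    parent.Flush(); // flushing the parent no longer invalidates the child
    child.Flush();  // child's change lands in parent, and later in db
    return 0;
}
```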
This provides more conservative estimates and reacts more quickly to a backlog.
Unfortunately the unit test for fee estimation depends on the success threshold (and the decay) chosen, so the unit test is also modified for the new default success thresholds.
These are more useful fee and priority estimation functions. If there is no fee/priority high enough for the target you are aiming for, they will give you the estimate for the lowest target that you can reliably obtain. This is better than defaulting to the minimum. They will also pass back the target for which they returned an answer.
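
The fallback behaviour can be sketched like this (a hypothetical,
simplified signature; the real entry points live on CTxMemPool and the
policy estimator):

```cpp
#include <vector>

struct FeeRateSketch {
    double perKB = -1.0;                       // -1 means "no reliable estimate"
    bool valid() const { return perKB >= 0; }
};

// estimateAt[t]: the estimator's answer for confirmation target t, if any.
FeeRateSketch EstimateSmart(int nConfTarget,
                            const std::vector<FeeRateSketch>& estimateAt,
                            int* answerFoundAtTarget)
{
    const int maxTarget = static_cast<int>(estimateAt.size()) - 1;
    // Walk toward easier (higher) targets until an estimate succeeds,
    // rather than falling back to the minimum fee/priority.
    for (int t = nConfTarget; t <= maxTarget; ++t) {
        if (estimateAt[t].valid()) {
            if (answerFoundAtTarget) *answerFoundAtTarget = t;
            return estimateAt[t];
        }
    }
    if (answerFoundAtTarget) *answerFoundAtTarget = maxTarget;
    return FeeRateSketch{}; // nothing reliable at any target
}
```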