Previously, in blocks-only mode, all inv messages with type != MSG_BLOCK
were rejected without regard for whitelisting or whitelistalwaysrelay.
As a result, whitelisted peers would never send the transaction (which
would have been processed).
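
A minimal sketch of the exemption described above, using simplified
stand-in types; ShouldRequestInv, Peer and the flag parameters are
illustrative assumptions, not the actual Bitcoin Core code:

    #include <cstdio>

    enum InvType { MSG_TX = 1, MSG_BLOCK = 2 };

    struct Peer {
        int id;
        bool fWhitelisted;
    };

    // Decide whether an announced object should be requested from a peer.
    bool ShouldRequestInv(InvType type, const Peer& peer,
                          bool fBlocksOnly, bool fWhitelistAlwaysRelay)
    {
        if (type != MSG_TX)
            return true; // block announcements are always accepted

        // Previously any MSG_TX inv was dropped in blocks-only mode, even
        // from whitelisted peers; the whitelist exemption is applied first.
        if (fBlocksOnly && !(peer.fWhitelisted && fWhitelistAlwaysRelay)) {
            std::printf("transaction inv in violation of protocol peer=%d\n",
                        peer.id);
            return false;
        }
        return true;
    }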
Compute the value of inputs that are already in the chain at the time of
mempool entry and only increase priority due to aging for those inputs.
This effectively changes CTxMemPoolEntry's GetPriority calculation from
an upper bound to a lower bound.
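
A rough sketch of that lower-bound aging, with assumed member names
(entryPriority, inChainInputValue and txSize are illustrative):

    #include <cstddef>
    #include <cstdint>

    using CAmount = int64_t;

    struct MemPoolEntrySketch {
        double entryPriority;      // priority computed at mempool entry
        unsigned int entryHeight;  // chain height at mempool entry
        CAmount inChainInputValue; // input value already confirmed at entry
        size_t txSize;             // transaction size in bytes

        // Assumes currentHeight >= entryHeight. Inputs that were still
        // unconfirmed at entry contribute no aging here, so the result can
        // only underestimate the true priority (a lower bound).
        double GetPriority(unsigned int currentHeight) const
        {
            double deltaPriority =
                double(currentHeight - entryHeight) * inChainInputValue / txSize;
            return entryPriority + deltaPriority;
        }
    };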
Previously this would break if you flushed a parent cache while there was
a child cache referring to it. This change allows parent caches to be
flushed.
This provides more conservative estimates and reacts more quickly to a backlog.
Unfortunately the unit test for fee estimation depends on the chosen
success threshold (and decay), so the unit test is also modified for the
new default success thresholds.
These are more useful fee and priority estimation functions. If there is
no fee/priority high enough for the target you are aiming for, they
return the estimate for the lowest target that can reliably be met, which
is better than defaulting to the minimum. They also pass back the target
for which an answer was actually returned.
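
A simplified sketch of that fallback behaviour, under assumed names and
data layout (EstimateSmartFee and feePerTarget are illustrative, not the
real signatures):

    #include <algorithm>
    #include <vector>

    struct FeeEstimateSketch {
        // feePerTarget[t] is the estimated feerate for confirmation within
        // t blocks, or -1 when there is not enough reliable data for t.
        std::vector<double> feePerTarget;

        double EstimateSmartFee(int requestedTarget,
                                int* answerFoundAtTarget) const
        {
            int maxTarget = int(feePerTarget.size()) - 1;
            // Walk from the requested target to larger (easier) targets
            // until a reliable estimate is found.
            for (int target = std::max(requestedTarget, 1);
                 target <= maxTarget; ++target) {
                if (feePerTarget[target] >= 0) {
                    if (answerFoundAtTarget)
                        *answerFoundAtTarget = target; // target answered
                    return feePerTarget[target];
                }
            }
            if (answerFoundAtTarget)
                *answerFoundAtTarget = maxTarget;
            return -1; // no reliable estimate at any target
        }
    };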
Useful now that BIP113 is enforced for transactions entering the
mempool. Was previously only (indirectly) available by calling
getblocktemplate, a relatively expensive RPC call.
This continues/fixes #6719.
`event_base_loopexit` was not doing what I expected it to, at least in
libevent 2.0.21.
What I expected was that it sets a timeout: given no other pending
events, the loop would exit within N seconds. However, what it actually
does is delay the event loop exit by 10 seconds, even if nothing is
pending.
Solve it in a different way: give the event loop thread time to exit on
its own, and if it doesn't, send loopbreak.
This speeds up the RPC tests a lot: each shutdown used to incur a
10-second overhead. With this change there should be no shutdown overhead
in the common case, and at most two seconds if the event loop is
blocking.
As a bonus, this removes the dependency on boost::thread_group, as the
HTTP server now minds its own offspring.
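
A minimal, self-contained sketch of that shutdown pattern with libevent
and std::thread; RunEventLoop and the two-second grace period are
illustrative assumptions, not the actual code:

    #include <chrono>
    #include <future>
    #include <thread>

    #include <event2/event.h>

    // Run the dispatch loop and signal completion when it returns (it
    // returns once no pending events remain, or on loopbreak/loopexit).
    static void RunEventLoop(struct event_base* base, std::promise<void> done)
    {
        event_base_dispatch(base);
        done.set_value();
    }

    int main()
    {
        // Note: a real multithreaded server must enable libevent's thread
        // support (e.g. evthread_use_pthreads()) before creating the base
        // so that loopbreak can be signalled from another thread.
        struct event_base* base = event_base_new();

        std::promise<void> donePromise;
        std::future<void> doneFuture = donePromise.get_future();
        std::thread loopThread(RunEventLoop, base, std::move(donePromise));

        // ... at shutdown, stop accepting new work first ...

        // Give the loop up to two seconds to drain and exit on its own;
        // only if it is still blocking after that, break it forcibly.
        if (doneFuture.wait_for(std::chrono::seconds(2)) !=
            std::future_status::ready)
            event_base_loopbreak(base);

        loopThread.join();
        event_base_free(base);
        return 0;
    }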