Previously it was possible for HaveCoinInCache to return true for a spent
coin. It is clearer to keep the semantics the same, so that a cache hit
means the cache holds an unspent coin. HaveCoinInCache is used for two
reasons:
- tracking coins we may want to uncache, in which case it is unlikely there
would be spent coins we could uncache anyway (only non-dirty entries can be
uncached, and a spent cache entry is dirty)
- optimistically checking whether we have already included a tx in the
blockchain, in which case a spent coin is not a reliable indicator that we have.
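
A minimal sketch of the tightened check, assuming the per-txout cacheCoins
map and the Coin::IsSpent() helper:

    // Sketch: report a cache hit only for an unspent coin, so a spent
    // entry in the cache no longer counts as "having" the coin.
    bool CCoinsViewCache::HaveCoinInCache(const COutPoint& outpoint) const
    {
        CCoinsMap::const_iterator it = cacheCoins.find(outpoint);
        return it != cacheCoins.end() && !it->second.coin.IsSpent();
    }
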
We can call gettxoutproof() with a list of transactions. Currently, if
the first transaction is unspent (and all other transactions are in the
same block), then the call will succeed. If the first transaction has
been spent, then the call will fail. This means that the following two
calls will return different results:
gettxoutproof(unspent_tx1, spent_tx1)
gettxoutproof(spent_tx1, unspent_tx1)
This commit makes behaviour independent of transaction ordering by looping
through all transactions provided and trying to find which block they're in.
This commit also increases the test coverage and tests more failure
cases for gettxoutproof().
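
A condensed sketch of the loop, assuming the per-txout AccessByTxid helper
and the chainActive/pcoinsTip globals of that era (illustrative of the
approach, not a definitive implementation):

    // Sketch: consult every requested txid, not just the first, and use
    // the first unspent output found to locate the containing block.
    const CBlockIndex* pblockindex = nullptr;
    for (const uint256& txid : setTxids) {
        const Coin& coin = AccessByTxid(*pcoinsTip, txid);
        if (!coin.IsSpent()) {
            pblockindex = chainActive[coin.nHeight];
            break;
        }
    }
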
Added an option to configure to allow gathering branch coverage statistics.
Disabled the LogPrint macro when coverage testing is on so that unnecessary branches are not analyzed.
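
The macro change amounts to roughly the following guard (the exact symbol
name is an assumption here):

    // Sketch: under a coverage build, LogPrint expands to nothing so its
    // internal category check does not appear as an unexercised branch.
    // USE_COVERAGE is an assumed guard name.
    #ifdef USE_COVERAGE
    #define LogPrint(category, ...) do {} while (0)
    #endif
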
The implementation of base_uint::operator++(int) and base_uint::operator--(int) is now safer.
The array pn is accessed via index i only after bounds checking has been performed on the index, rather than before.
The logic of the while loops has also been made clearer.
A compile-time assertion has been added in the class constructors to ensure that BITS is a positive multiple of 32.
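
A sketch of the reworked increment pair and the compile-time check,
following the class's existing pn/WIDTH members:

    // Sketch: bounds-check i before pn[i] is touched; the postfix form
    // delegates to the prefix form. operator--() mirrors this shape.
    base_uint& operator++()
    {
        // prefix operator
        int i = 0;
        while (i < WIDTH && ++pn[i] == 0)
            i++;
        return *this;
    }

    const base_uint operator++(int)
    {
        // postfix operator
        const base_uint ret = *this;
        ++(*this);
        return ret;
    }

    // In the constructors:
    static_assert(BITS / 32 > 0 && BITS % 32 == 0,
                  "Template parameter BITS must be a positive multiple of 32.");
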
CCoinsViewCache doesn't actually support cursor iteration returning the
current contents of the cache, so raise an error when the cursor method is
called instead of returning a cursor that iterates over stale data.
Also update the gettxoutsetinfo RPC, which was relying on the old behavior,
to be explicit about which view it is returning data about.
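
A sketch of the new behavior, assuming std::logic_error is an acceptable
way to flag the programming error:

    // Sketch: fail loudly rather than returning a cursor over the base
    // view, which would silently skip entries that exist only in the
    // cache. Requires <stdexcept>.
    CCoinsViewCursor* CCoinsViewCache::Cursor() const
    {
        throw std::logic_error(
            "CCoinsViewCache cursor iteration not supported.");
    }
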
This adds a new CuckooCache in validation, caching whether all of a
transaction's scripts were valid with a given set of script flags.
Unlike previous attempts at caching an entire transaction's
validity, which have nearly universally introduced consensus
failures, this only caches the validity of a transaction's
scriptSigs. As these are pure functions of the transaction and
data it commits to, this should be much safer.
This is somewhat duplicative with the sigcache, as entries in the
new cache will also have several entries in the sigcache. However,
the sigcache is kept both as ATMP relies on it and because it
prevents malleability-based DoS attacks on the new higher-level
cache. Instead, the -sigcachesize option is re-used - cutting the
sigcache size in half and using the newly freed memory for the
script execution cache.
Transactions which match the script execution cache never even have
entries in the script check thread's workqueue created.
Note that the cache is indexed only on the script execution flags
and the transaction's witness hash. While this is sufficient to
make the CScriptCheck() calls pure functions, this introduces
dependencies on the mempool calculating things such as the
PrecomputedTransactionData object, filling the CCoinsViewCache, etc.
in the exact same way as ConnectBlock. I believe this is a reasonable
assumption, but it should be noted carefully.
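
A sketch of the probe in CheckInputs, assuming a SHA256 hasher salted at
startup and a CuckooCache keyed as described:

    // Sketch: the key commits to exactly the two inputs that make script
    // verification a pure function: the witness hash and the flags.
    uint256 hashCacheEntry;
    CSHA256 hasher = scriptExecutionCacheHasher; // salted once at startup
    hasher.Write(tx.GetWitnessHash().begin(), 32)
          .Write(reinterpret_cast<const unsigned char*>(&flags), sizeof(flags))
          .Finalize(hashCacheEntry.begin());
    if (scriptExecutionCache.contains(hashCacheEntry, /* erase= */ true)) {
        // Already validated under these flags; no script-check jobs are
        // queued for this transaction at all.
        return true;
    }
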
In a rather naive benchmark (reindex-chainstate up to block 284k
with cuckoocache always returning true for contains(),
-assumevalid=0 and a very large dbcache), this connected blocks
~1.7x faster.
Prior to per-utxo CCoins, we checked that no other in-mempool tx
spent any of the given transaction's outputs, as we don't want to
uncache that entire tx in such a case. However, we now check only
that no other in-mempool spend of the same output exists, which
should clearly be impossible after we removed the transaction that
was spending said output (barring massive mempool
inconsistency).
Thanks to @sdaftuar for the suggestion.
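
A hedged sketch of the per-output check (the surrounding names and
ordering are illustrative; mapNextTx is keyed by spent outpoint):

    // Sketch: when removing a mempool entry, an input's prevout is safe
    // to flush from the coins cache if no remaining mempool transaction
    // spends that same outpoint.
    for (const CTxIn& txin : it->GetTx().vin) {
        if (mapNextTx.count(txin.prevout) == 0) {
            vNoSpendsRemaining.push_back(txin.prevout);
        }
    }
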
At startup, we choose one peer to serve us the headers chain, until
our best header is close to caught up. Disconnect this peer if more
than 15 minutes + 1ms/expected_header passes and our best header
is still more than 1 day away from current time.
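
The timeout arithmetic might look like the following sketch (constant and
field names are assumptions; times in microseconds):

    // Sketch: when headers sync starts, arm a timeout of
    //   base + per_header * expected_number_of_headers,
    // where the expected count is the age of our best header divided by
    // the 10-minute target spacing.
    static constexpr int64_t HEADERS_DOWNLOAD_TIMEOUT_BASE = 15 * 60 * 1000000; // 15 minutes
    static constexpr int64_t HEADERS_DOWNLOAD_TIMEOUT_PER_HEADER = 1000;        // 1 ms

    state.nHeadersSyncTimeout = GetTimeMicros() +
        HEADERS_DOWNLOAD_TIMEOUT_BASE +
        HEADERS_DOWNLOAD_TIMEOUT_PER_HEADER *
            (GetAdjustedTime() - pindexBestHeader->GetBlockTime()) /
            consensusParams.nPowTargetSpacing;
    // If the timeout fires while our best header is still more than a
    // day behind current time, mark the peer for disconnection.
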