always grow to its maximum size.
This does not go so far as to attempt to connect orphans made
connectable by a new block.
Keeping the orphan map less full helps improve the reliability
of relaying chains of transactions.
We send a newly-accepted peer a 1000-entry addr message, and then only use
vAddrToSend for small messages. Deallocate vAddrToSend after it's been used for
the big message to save about 40 kB per connected inbound peer.
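A minimal sketch of the deallocation, assuming a plain std::vector<CAddress>; the CAddress stub and the helper name FlushBigAddrMessage are hypothetical, not Bitcoin Core's actual code:

    #include <vector>

    struct CAddress { /* address fields elided */ };

    std::vector<CAddress> vAddrToSend;

    // Hypothetical helper: flush the one large addr message, then release
    // the vector's spare capacity instead of merely clearing it.
    void FlushBigAddrMessage()
    {
        // ... serialize and send up to 1000 entries from vAddrToSend ...
        vAddrToSend.clear();          // keeps the old ~1000-entry capacity
        vAddrToSend.shrink_to_fit();  // returns it to the allocator (~40 kB/peer)
    }

Later small addr messages simply let the vector regrow to a handful of entries.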
- BIP9DeploymentInfo struct for static deployment info (see the sketch after
  this list)
- VersionBitsDeploymentInfo: Avoid C++11ism by commenting parameter names
- getblocktemplate: Make sure to set deployments in the version if it is LOCKED_IN
- In this commit, all rules are considered required for clients to support
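A sketch of the shape of the static deployment table; the real definitions live in versionbits.h/.cpp, and the field set and entries here are abridged and illustrative:

    struct BIP9DeploymentInfo {
        const char* name; // deployment name as exposed via RPC/getblocktemplate
        bool gbt_force;   // whether getblocktemplate clients may safely ignore the rule
    };

    // Designated initializers (".name = ...") were not portable at the
    // time, so the field names are kept as comments instead:
    const BIP9DeploymentInfo VersionBitsDeploymentInfo[] = {
        {
            /*.name =*/ "testdummy",
            /*.gbt_force =*/ true,
        },
        {
            /*.name =*/ "csv",
            /*.gbt_force =*/ true,
        },
    };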
* Switch mapRelay to use shared_ptr<CTransaction> (see the sketch after this
  list)
* Switch the relay code to copy mempool shared_ptrs, rather than copying
  the transaction itself.
* Change vRelayExpiration to store mapRelay iterators rather than hashes
  (smaller and faster).
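A condensed sketch of the resulting data structures, with stand-in uint256/CTransaction types; the 15-minute expiry constant is an assumption here:

    #include <cstdint>
    #include <deque>
    #include <map>
    #include <memory>
    #include <utility>

    struct uint256 { uint64_t v{0}; bool operator<(const uint256& o) const { return v < o.v; } };
    struct CTransaction { /* tx data elided */ };

    using MapRelay = std::map<uint256, std::shared_ptr<const CTransaction>>;
    MapRelay mapRelay;
    // Expiration entries hold map iterators, not hashes: smaller, and
    // erasing needs no second lookup by hash.
    std::deque<std::pair<int64_t, MapRelay::iterator>> vRelayExpiration;

    void RelayTransaction(const uint256& txid, std::shared_ptr<const CTransaction> tx, int64_t now)
    {
        // Sharing the mempool's shared_ptr means no copy of the transaction.
        auto ret = mapRelay.emplace(txid, std::move(tx));
        if (ret.second) {
            vRelayExpiration.emplace_back(now + 15 * 60, ret.first);
        }
        // Expire old entries; std::map iterators stay valid until erased.
        while (!vRelayExpiration.empty() && vRelayExpiration.front().first < now) {
            mapRelay.erase(vRelayExpiration.front().second);
            vRelayExpiration.pop_front();
        }
    }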
Optimistically test the latch bool before taking the lock.
Once a call has returned false, every subsequent IsInitialBlockDownload
call avoids the need to lock cs_main.
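A minimal sketch of the pattern, assuming a plain std::mutex in place of cs_main; the latch name and the elided chain-state checks are illustrative:

    #include <atomic>
    #include <mutex>

    std::mutex cs_main;
    std::atomic<bool> latchToFalse{false};

    bool IsInitialBlockDownload()
    {
        // Optimistic fast path: once the node leaves IBD it never
        // re-enters, so the latch can be tested without the lock.
        if (latchToFalse.load(std::memory_order_relaxed)) return false;

        std::lock_guard<std::mutex> lock(cs_main);
        // Re-check under the lock in case another thread just set it.
        if (latchToFalse.load(std::memory_order_relaxed)) return false;

        bool fIBD = false; // ... examine tip age / verification progress ...
        if (!fIBD) latchToFalse.store(true, std::memory_order_relaxed);
        return fIBD;
    }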
Saves about 10% of application memory usage once the mempool warms up. Since the
mempool is DynamicUsage-regulated, this will translate to a larger mempool in
the same amount of space.
Map value type: eliminate the vin index; no users of the map need to know which
input of the transaction is spending the prevout.
Map key type: replace the COutPoint with a pointer to a COutPoint. A COutPoint
is 36 bytes, but each COutPoint is accessible from the same map entry's value.
A trivial DereferencingComparator functor allows indirect map keys, but the
resulting syntax is misleading: `map.find(&outpoint)`. Implement an indirectmap
that acts as a wrapper to a map that uses a DereferencingComparator, supporting
a syntax that accurately reflects the container's semantics: inserts and
iterators use pointers since they store pointers and need them to remain
constant and dereferenceable, but lookup functions take const references.
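A condensed sketch of the comparator and wrapper; Bitcoin Core's indirectmap.h forwards many more members, and only the shape is shown here:

    #include <cstddef>
    #include <map>
    #include <utility>

    template <class T>
    struct DereferencingComparator {
        bool operator()(const T a, const T b) const { return *a < *b; }
    };

    // Wraps std::map so that keys are stored as pointers but looked up by
    // const reference, matching the container's real semantics.
    template <class K, class T>
    class indirectmap {
        using base = std::map<const K*, T, DereferencingComparator<const K*>>;
        base m;

    public:
        using iterator = typename base::iterator;
        using const_iterator = typename base::const_iterator;

        // Inserts take a pointer: the caller guarantees the pointed-to key
        // stays constant and dereferenceable for the entry's lifetime.
        std::pair<iterator, bool> insert(const std::pair<const K*, T>& e) { return m.insert(e); }

        // Lookups take const references, so call sites read find(outpoint)
        // rather than the misleading find(&outpoint).
        iterator find(const K& key) { return m.find(&key); }
        const_iterator find(const K& key) const { return m.find(&key); }
        std::size_t erase(const K& key) { return m.erase(&key); }
        std::size_t size() const { return m.size(); }
    };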
This reduces the rate of notfounds by better matching the far
end's expectations; it also improves privacy by removing the
ability to use getdata to probe for a node having a txn before
it has been relayed.
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
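A rough sketch of the resulting lookup order, under the assumption that the node records when it last answered the peer's mempool request (timeLastMempoolReq); the types and the reduced mempool view are stand-ins:

    #include <cstdint>
    #include <map>
    #include <memory>

    struct uint256 { uint64_t v{0}; bool operator<(const uint256& o) const { return v < o.v; } };
    struct CTransaction { /* tx data elided */ };
    struct TxMempoolInfo { std::shared_ptr<const CTransaction> tx; int64_t nTime{0}; };

    std::map<uint256, std::shared_ptr<const CTransaction>> mapRelay; // announced txns
    std::map<uint256, TxMempoolInfo> mempoolInfo;                    // stand-in mempool view

    std::shared_ptr<const CTransaction> FindTxForGetData(const uint256& txid, int64_t timeLastMempoolReq)
    {
        // The relay map only holds transactions we actually announced, so
        // serving from it first cannot leak unannounced transactions.
        auto mi = mapRelay.find(txid);
        if (mi != mapRelay.end()) return mi->second;

        // Fall back to the mempool only for transactions that were already
        // in it when the peer's last "mempool" request was answered; those
        // were disclosed by that reply, so serving them leaks nothing new.
        auto it = mempoolInfo.find(txid);
        if (it != mempoolInfo.end() && it->second.nTime <= timeLastMempoolReq) {
            return it->second.tx;
        }
        return nullptr; // otherwise: notfound
    }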
The current logic for syncing headers may lead to lots of duplicate
getheaders requests being sent: If a new block arrives while the node
is in headers sync, it will send getheaders in response to the block
announcement. When the headers arrive, the message will be of maximum
size, so a follow-up request will be sent, all of that in addition
to the existing headers sync. This will create a second "chain" of
getheaders requests. If more blocks arrive, this may even lead to
arbitrarily many parallel chains of redundant requests.
This patch changes the behaviour to only request more headers after a
maximum-sized message when it contained at least one unknown header.
This avoids sustaining parallel chains of redundant requests.
Note that this patch avoids the issues raised in the discussion of
https://github.com/bitcoin/bitcoin/pull/6821: There is no risk of the
node being permanently blocked. At the latest, the next new block to
arrive will trigger a fresh getheaders request and restart syncing.
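A compressed sketch of the receive-side rule; MAX_HEADERS_RESULTS mirrors Bitcoin Core's per-message cap, while the handler around it is elided:

    #include <cstddef>

    const std::size_t MAX_HEADERS_RESULTS = 2000; // max headers per message

    // Decide whether to send a follow-up getheaders after a batch. The old
    // rule was (nCount == MAX_HEADERS_RESULTS) alone; also requiring a
    // previously-unknown header keeps block announcements during sync from
    // spawning parallel chains of redundant requests.
    bool ShouldContinueHeadersSync(std::size_t nCount, bool batchHadUnknownHeader)
    {
        return nCount == MAX_HEADERS_RESULTS && batchHadUnknownHeader;
    }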
ActivateBestChain uses chainActive after releasing the lock; reorder operations
to move all access to the synchronized object into the existing LOCK(cs_main) block.
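A sketch of the reordering, with simplified stand-ins for cs_main and chainActive; the point is that the shared object is only read while the lock is held, and only the local snapshot is used afterwards:

    #include <mutex>

    std::mutex cs_main;
    struct CBlockIndex { /* elided */ };
    struct CChain { CBlockIndex* Tip() const { return nullptr; } };
    CChain chainActive;

    void ActivateBestChainStep()
    {
        CBlockIndex* pindexNewTip = nullptr;
        {
            std::lock_guard<std::mutex> lock(cs_main);
            // ... connect/disconnect blocks, update chainActive ...
            pindexNewTip = chainActive.Tip(); // read under the lock
        }
        // Outside the lock, touch only the local snapshot:
        // NotifyBlockTip(pindexNewTip);
        (void)pindexNewTip;
    }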
This avoids sending more pointless INVs around updates, and
prevents using filter updates to timetag transactions.
Also adds locking for fRelayTxes.
By eliminating queued entries from the mempool response and responding only at
trickle time, this makes the mempool no longer leak transaction arrival order
information (as the mempool itself is also sorted), at least no more than
relay itself leaks it.
Previously we would assert that if every block in vBlockHashesToAnnounce is in
chainActive, then the blocks to be announced must connect. However, there are
edge cases where this assumption could be violated (e.g. using invalidateblock /
reconsiderblock), so just check for this case and revert to inv-announcement
instead.
Previously Bitcoin would send 1/4 of transactions out to all peers
instantly. This caused high overhead because it made >80% of
INVs size 1. It also harmed privacy, because it limited the
amount of source obscurity a transaction could receive.
These randomized broadcasts also disobeyed transaction dependencies
and required use of the orphan pool. Because the orphan pool is
so small, this led to poor propagation for dependent transactions.
When the bypass wasn't in effect, transactions were sent in the
order they were received. This avoided creating orphans but
undermined privacy fairly significantly.
This commit:
Eliminates the bypass. The bypass is replaced by halving the
average delay for outbound peers.
Sorts candidate transactions for INV by their topological
depth, then by their feerate (then hash); removing the
information leakage and providing priority service to
higher-fee transactions (see the sketch after this list).
Limits the number of transactions sent in a single INV to
7 tx/sec (and twice that for outbound); this limits the
harm of low-fee transaction floods and gives faster relay
service to higher-fee transactions. The 7 is less restrictive
than it sounds because received advertisements need not be
sent, and because the aggregate rate is multiplied by the
number of peers.
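A sketch of the announcement ordering only; the fields (nDepth, feerate, hash) are illustrative stand-ins, and Bitcoin Core expresses the same comparison through mempool iterators rather than a flat struct:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct TxAnnouncement {
        uint32_t nDepth;  // topological depth: 0 when all parents are confirmed
        double feerate;   // stand-in for fee per byte
        uint64_t hash;    // deterministic tie breaker
    };

    void SortForInv(std::vector<TxAnnouncement>& txs)
    {
        std::sort(txs.begin(), txs.end(),
                  [](const TxAnnouncement& a, const TxAnnouncement& b) {
                      // Parents before children: preserves dependency order
                      // and avoids creating orphans at the receiving peer.
                      if (a.nDepth != b.nDepth) return a.nDepth < b.nDepth;
                      // Then higher feerate first: priority service for fees.
                      if (a.feerate != b.feerate) return a.feerate > b.feerate;
                      return a.hash < b.hash;
                  });
    }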
Break the circular dependency between main and txdb by:
- Moving `CBlockFileInfo` from `main.h` to `chain.h`. I think this makes
sense, as the other block-file stuff is there too.
- Moving `CDiskTxPos` from `main.h` to `txdb.h`. This type seems
specific to txdb.
- Pass a functor `insertBlockIndex` to `LoadBlockIndexGuts`. This leaves
  it up to the caller how to insert block indices (see the sketch after
  this list).
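A sketch of the functor injection, with stand-in types; the real function is a database method and also reads from a cursor, elided here:

    #include <cstdint>
    #include <functional>
    #include <map>

    struct uint256 { uint64_t v{0}; bool operator<(const uint256& o) const { return v < o.v; } };
    struct CBlockIndex { /* elided */ };

    // txdb side: no knowledge of main's block index container; it just
    // calls back with each hash it reads and fills in the returned entry.
    bool LoadBlockIndexGuts(std::function<CBlockIndex*(const uint256&)> insertBlockIndex)
    {
        // for each (hash, diskindex) record read from the database:
        uint256 hash{};
        CBlockIndex* pindex = insertBlockIndex(hash);
        (void)pindex; // ... populate *pindex from the disk record ...
        return true;
    }

    // main side: the caller decides how insertion into its own map works.
    std::map<uint256, CBlockIndex> mapBlockIndex;

    bool LoadBlockIndex()
    {
        return LoadBlockIndexGuts([](const uint256& hash) {
            return &mapBlockIndex[hash]; // insert-or-find in main's map
        });
    }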
Previously we used the CInv that would be sent to the peer announcing the
transaction as the key, but using the txid instead allows us to decouple the
p2p layer from the application logic (which relies on this map to avoid
duplicate tx requests).
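A minimal sketch of the re-keying; in Bitcoin Core the map is mapAlreadyAskedFor, a size-bounded limitedmap, for which a plain std::map stands in here:

    #include <cstdint>
    #include <map>

    struct uint256 { uint64_t v{0}; bool operator<(const uint256& o) const { return v < o.v; } };

    // Before: keyed by the p2p-layer CInv announcing the transaction.
    // After:  keyed by txid, so the dedupe logic never touches CInv.
    std::map<uint256, int64_t> mapAlreadyAskedFor;

    bool AlreadyAskedFor(const uint256& txid) { return mapAlreadyAskedFor.count(txid) > 0; }

    void MarkAskedFor(const uint256& txid, int64_t nRequestTime)
    {
        mapAlreadyAskedFor[txid] = nRequestTime; // one outstanding request per txid
    }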