Tests whether addresses are online or offline by briefly connecting to them. These short-lived connections are referred to as feeler connections. Feeler connections are designed to increase the number of fresh online addresses in the tried table by selecting and connecting to addresses in the new table. One feeler connection is attempted on average once every two minutes.
This change was suggested as Countermeasure 4 in
Eclipse Attacks on Bitcoin’s Peer-to-Peer Network, Ethan Heilman,
Alison Kendler, Aviv Zohar, Sharon Goldberg. ePrint Archive Report
2015/263. March 2015.
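A minimal sketch of the idea, assuming simplified stand-ins (SelectFromNewTable, TryShortLivedConnect, MarkTried) rather than the real addrman and net interfaces:

```cpp
#include <chrono>
#include <random>
#include <string>
#include <thread>

struct AddrInfo { std::string host; };

// Stubs standing in for addrman/net functionality.
AddrInfo SelectFromNewTable() { return {"192.0.2.1"}; }      // pick a candidate from "new"
bool TryShortLivedConnect(const AddrInfo&) { return true; }  // handshake, then disconnect
void MarkTried(const AddrInfo&) {}                           // promote the address to "tried"

void FeelerLoop()
{
    std::mt19937_64 rng{std::random_device{}()};
    // One attempt every two minutes on average; an exponential delay (an
    // assumption of this sketch) keeps individual attempts unpredictable.
    std::exponential_distribution<double> delay{1.0 / 120.0};
    for (;;) {
        std::this_thread::sleep_for(std::chrono::duration<double>{delay(rng)});
        const AddrInfo addr = SelectFromNewTable();
        if (TryShortLivedConnect(addr)) {
            MarkTried(addr);  // the address is online: it can move from "new" to "tried"
        }
        // The feeler connection is closed either way; it never relays data.
    }
}
```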
* Use CNode::addrName to track whether a connection to a name is already open
* A new connection to a previously-connected by-name addednode is only opened when
the previous one closes (even if the name starts resolving to something else)
* At most one connection is opened per addednode (even if the name resolves to multiple)
* Unify the code between ThreadOpenAddedNodeConnections and getaddednodeinfo
* Information about open connections is always returned, and the dns argument becomes a dummy
* An IP address and inbound/outbound direction are only reported for the (at most one) open connection (see the sketch below)
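A rough sketch of the per-name bookkeeping described in these bullets; the names (g_added_nodes, OpenConnectionByName) are illustrative, not the actual net code:

```cpp
#include <map>
#include <optional>
#include <string>

struct Connection { std::string resolved_ip; bool inbound = false; };

// Stub: resolve the configured name and open a connection (or fail).
std::optional<Connection> OpenConnectionByName(const std::string&)
{
    return Connection{"203.0.113.7", false};
}

// One slot per configured addednode name; at most one open connection each.
std::map<std::string, std::optional<Connection>> g_added_nodes;

void MaintainAddedNodes()
{
    for (auto& [name, conn] : g_added_nodes) {
        if (conn) continue;                 // still connected: nothing to do, even if the
                                            // name now resolves to a different address
        conn = OpenConnectionByName(name);  // (re)open only after the previous one closed
    }
}
```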
This reduces the rate of notfound responses by better matching the far
end's expectations; it also improves privacy by removing the
ability to use getdata to probe for whether a node has a txn before
it has been relayed.
If a node is offline, failed outbound connection attempts will crank up
the addrman counter and effectively blow away our state.
This change reduces the problem by only counting attempts made while
the node believes it has outbound connections to at least two
netgroups.
Connect and addnode connections are also not counted, as there is no
reason to unequally penalize them for their more frequent
connections -- though there should be no real effect from this
unless their addnode configuration is later removed.
Wasteful repeated connection attempts while only a few connections are
up are avoided via nLastTry.
This is still somewhat incomplete protection because our outbound
peers could be down but not timed out or might all be on 'local'
networks (although the requirement for multiple netgroups helps).
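A hedged sketch of the counting rule described above, with CountOutboundNetgroups and IsManualPeer as hypothetical stand-ins for the real connection-manager queries:

```cpp
#include <cstdint>
#include <string>

struct AddrManStub {
    void Attempt(const std::string& /*addr*/, int64_t /*now*/) { /* bump the failure counter */ }
};

AddrManStub g_addrman;

// Stubs standing in for real connection-manager queries.
int CountOutboundNetgroups() { return 0; }               // distinct netgroups among outbound peers
bool IsManualPeer(const std::string&) { return false; }  // opened via -connect or -addnode?

void RecordConnectionFailure(const std::string& addr, int64_t now)
{
    // If we ourselves are offline, every outbound attempt fails; counting those
    // failures would effectively wipe addrman's view of the network.
    if (CountOutboundNetgroups() < 2) return;
    // Manual (-connect/-addnode) peers reconnect frequently; don't penalize them extra.
    if (IsManualPeer(addr)) return;
    g_addrman.Attempt(addr, now);
}
```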
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
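One way to picture the resulting getdata handling, as a sketch rather than the actual ProcessGetData code; the relay map, mempool snapshot, and the peer's last-mempool-request timestamp are all simplified stand-ins:

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>

using Txid = std::string;                   // stand-in for uint256
struct Tx { int64_t time_in_mempool = 0; };

std::map<Txid, Tx> g_relay_map;             // transactions we have recently announced
std::map<Txid, Tx> g_mempool;               // full mempool contents

std::optional<Tx> ServeTxGetData(const Txid& txid, int64_t peer_last_mempool_req)
{
    if (auto it = g_relay_map.find(txid); it != g_relay_map.end()) {
        return it->second;                  // already relayed: fine to serve
    }
    if (auto it = g_mempool.find(txid); it != g_mempool.end() &&
            it->second.time_in_mempool <= peer_last_mempool_req) {
        return it->second;                  // the peer could only have learned this txid
                                            // from its own earlier "mempool" request
    }
    return std::nullopt;                    // otherwise answer with notfound
}
```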
* CAddrDB modified so that when de-serialization code throws an exception Addrman is reset to a clean state
* CAddrDB modified to make unit tests possible
* Regression test created to ensure bug is fixed
* StartNode modified to clear addrman if CAddrDB::Read returns an error code (see the sketch below).
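A simplified sketch of the recovery path, assuming a stripped-down AddrMan and ReadPeersFile in place of CAddrDB::Read:

```cpp
#include <exception>
#include <fstream>
#include <istream>
#include <string>

struct AddrMan {
    void Clear() { /* reset all tables to a clean, empty state */ }
    void Unserialize(std::istream&) { /* may throw on corrupt data */ }
};

bool ReadPeersFile(const std::string& path, AddrMan& addrman)
{
    std::ifstream file(path, std::ios::binary);
    if (!file) return false;
    try {
        addrman.Unserialize(file);  // deserialization may throw part-way through
    } catch (const std::exception&) {
        addrman.Clear();            // throw away the half-deserialized state
        return false;
    }
    return true;
}

// Caller side (cf. StartNode): on any read error, make sure addrman is empty.
// if (!ReadPeersFile("peers.dat", addrman)) addrman.Clear();
```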
This avoids sending more pointless INVs around filter updates, and
prevents using filter updates to timetag transactions.
Also adds locking for fRelayTxes.
By eliminating queued entries from the mempool response and responding only at
trickle time, this makes the mempool no longer leak transaction arrival order
information (as the mempool itself is also sorted) -- at least no more than
relay itself leaks it.
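A rough sketch of deferring the mempool reply to trickle time; fSendMempool, setInventoryTxToSend and MempoolTxids are simplified stand-ins for the real per-peer state:

```cpp
#include <set>
#include <string>
#include <vector>

using Txid = std::string;

std::vector<Txid> MempoolTxids() { return {}; }  // stub: snapshot of current mempool contents

struct Peer {
    bool fSendMempool = false;             // set when a "mempool" message arrives
    std::set<Txid> setInventoryTxToSend;   // invs queued for the next trickle
};

void OnMempoolMessage(Peer& peer) { peer.fSendMempool = true; }  // defer the reply

void AtTrickleTime(Peer& peer)
{
    if (peer.fSendMempool) {
        // The reply joins the ordinary inv queue, so it reveals no more about
        // arrival order than normal relay does.
        for (const Txid& txid : MempoolTxids()) peer.setInventoryTxToSend.insert(txid);
        peer.fSendMempool = false;
    }
    // ... flush setInventoryTxToSend as inv messages here ...
}
```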
Previously we used the CInv that would be sent to the peer announcing the
transaction as the key, but using the txid instead allows us to decouple the
p2p layer from the application logic (which relies on this map to avoid
duplicate tx requests).
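A minimal sketch of the re-keyed map, with Txid standing in for uint256 and the two-minute spacing shown only as an illustrative value:

```cpp
#include <cstdint>
#include <map>
#include <string>

using Txid = std::string;   // stand-in for uint256

// Before (roughly): the map was keyed by the full CInv sent to the peer.
// After: keyed purely by the transaction id, independent of p2p message types.
std::map<Txid, int64_t> mapAlreadyAskedFor;

void AskFor(const Txid& txid, int64_t now)
{
    auto [it, inserted] = mapAlreadyAskedFor.emplace(txid, now);
    if (!inserted && now - it->second < 2 * 60) return;  // requested recently: skip the duplicate
    it->second = now;
    // ... schedule a getdata for txid ...
}
```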
The "feefilter" p2p message is used to inform other nodes of your mempool min fee which is the feerate that any new transaction must meet to be accepted to your mempool. This will allow them to filter invs to you according to this feerate.
We used to have a trickle node: a node chosen in each iteration of
the send loop that was privileged and allowed to send out queued-up,
non-time-critical messages. Since the removal of the fixed sleeps in the
network code, this resulted in fast and attackable treatment of such broadcasts.
This pull request replaces the 3 remaining trickle use cases with random delays:
* Local address broadcast (while also removing the wiping of the seen filter)
* Address relay
* Inv relay (for transactions; blocks are always relayed immediately)
The code is based on older commits by Patrick Strateman.
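A rough sketch of the random-delay replacement, assuming an exponential draw per peer; PoissonNextSend and the 5-second average are illustrative rather than the exact implementation:

```cpp
#include <cmath>
#include <cstdint>
#include <random>

std::mt19937_64 g_rng{std::random_device{}()};

// Next event time of a Poisson process with the given average interval.
int64_t PoissonNextSend(int64_t now_us, int average_interval_seconds)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return now_us + static_cast<int64_t>(-std::log1p(-u(g_rng)) * average_interval_seconds * 1e6);
}

struct Peer { int64_t nNextInvSend = 0; };

bool TimeToSendInv(Peer& peer, int64_t now_us)
{
    if (now_us < peer.nNextInvSend) return false;    // keep queueing invs for this peer
    peer.nNextInvSend = PoissonNextSend(now_us, 5);  // e.g. a ~5 second average interval
    return true;                                     // flush the queued invs now
}
```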
Mruset setInventoryKnown was reduced to a remarkably small 1000
entries as a side effect of sendbuffer size reductions in 2012.
This removes setInventoryKnown filtering from merkleBlock responses
because false positives there are especially unattractive and
also because I'm not sure if there aren't race conditions around
the relay pool that would cause some transactions there to
be suppressed. (Also, ProcessGetData was accessing
setInventoryKnown without taking the required lock.)
This replaces the use of inv messages to announce new blocks when a peer requests
(via the new "sendheaders" message) that blocks be announced with headers
instead of invs.
Since headers-first was introduced, peers send getheaders messages in response
to an inv, which requires generating a block locator that is large compared to
the size of the header being requested, and requires an extra round-trip before
a reorg can be relayed. Save time by tracking headers that a peer is likely to
know about, and send a headers chain that would connect to a peer's known
headers, unless the chain would be too big, in which case we revert to sending
an inv instead.
Based off of @sipa's commit to announce all blocks in a reorg via inv,
which has been squashed into this commit.
Rebased-by: Pieter Wuille
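A simplified sketch of the announcement decision described above; MAX_BLOCKS_TO_ANNOUNCE, fPreferHeaders and pindexBestKnown are assumed names here, and the real logic also handles connectivity checks this sketch glosses over:

```cpp
#include <algorithm>
#include <vector>

struct CBlockIndex { const CBlockIndex* pprev = nullptr; };

constexpr int MAX_BLOCKS_TO_ANNOUNCE = 8;          // assumed cap on a headers announcement

struct Peer {
    bool fPreferHeaders = false;                   // set on receipt of "sendheaders"
    const CBlockIndex* pindexBestKnown = nullptr;  // best header we believe the peer has
};

// Returns the headers to send; an empty result means "announce with an inv instead".
std::vector<const CBlockIndex*> BlocksToAnnounce(const Peer& peer, const CBlockIndex* tip)
{
    std::vector<const CBlockIndex*> headers;
    if (!peer.fPreferHeaders) return headers;      // peer never sent "sendheaders"
    for (const CBlockIndex* p = tip; p != nullptr && p != peer.pindexBestKnown; p = p->pprev) {
        headers.push_back(p);
        if (static_cast<int>(headers.size()) > MAX_BLOCKS_TO_ANNOUNCE) {
            headers.clear();                       // chain too long or doesn't connect: use inv
            return headers;
        }
    }
    std::reverse(headers.begin(), headers.end());  // oldest first, connecting to what the peer knows
    return headers;
}
```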
The setAskFor duplicate elimination was too eager and removed entries
when we still had no getdata response, allowing the peer to keep
INVing and not responding.
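A speculative sketch of the implied fix, assuming entries stay in setAskFor until the request is answered or expires elsewhere; the exact semantics in the real code differ in detail:

```cpp
#include <set>
#include <string>

using Txid = std::string;   // stand-in for uint256

struct Peer { std::set<Txid> setAskFor; };  // txids with an outstanding request

bool ShouldScheduleGetdata(Peer& peer, const Txid& txid)
{
    // insert() fails while a request for this txid is still outstanding, so
    // repeated invs from the peer cannot queue the request again and again.
    return peer.setAskFor.insert(txid).second;
}

void OnGetdataResponded(Peer& peer, const Txid& txid)
{
    // Only once the peer has actually responded (or the request has expired
    // elsewhere) is the entry dropped and a re-request allowed.
    peer.setAskFor.erase(txid);
}
```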