Tests whether addresses are online or offline by briefly connecting to them. These short-lived connections are referred to as feeler connections. Feeler connections are designed to increase the number of fresh online addresses in the "tried" table by selecting and connecting to addresses in the "new" table. On average, one feeler connection is attempted every two minutes.
This change was suggested as Countermeasure 4 in
Eclipse Attacks on Bitcoin’s Peer-to-Peer Network. Ethan Heilman,
Alison Kendler, Aviv Zohar, Sharon Goldberg. Cryptology ePrint
Archive, Report 2015/263, March 2015.
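A minimal sketch of such a feeler loop, under stated assumptions: SelectFromNewTable, TryHandshake and MarkGood are hypothetical placeholders for the real addrman and networking calls, not Bitcoin Core's actual implementation.

    #include <chrono>
    #include <random>
    #include <string>
    #include <thread>

    static constexpr auto FEELER_INTERVAL = std::chrono::seconds(120); // one feeler per ~2 minutes on average

    // Hypothetical stand-ins for the addrman / networking calls (placeholders only).
    std::string SelectFromNewTable() { return "203.0.113.7:8333"; } // pick a candidate from "new"
    bool TryHandshake(const std::string&) { return true; }          // short-lived connect + version handshake
    void MarkGood(const std::string&) {}                            // promote the address to "tried"

    void FeelerLoop()
    {
        std::mt19937_64 rng{std::random_device{}()};
        // Exponentially distributed gaps approximate a Poisson process with the desired mean rate.
        std::exponential_distribution<double> gap(1.0 / std::chrono::duration<double>(FEELER_INTERVAL).count());

        for (;;) {
            std::this_thread::sleep_for(std::chrono::duration<double>(gap(rng)));
            const std::string addr = SelectFromNewTable();
            if (addr.empty()) continue;
            // The connection is dropped right after the handshake; its only purpose is
            // to learn whether the address is currently online.
            if (TryHandshake(addr)) MarkGood(addr);
        }
    }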
* Use CNode::addrName to track whether a connection to a name is already open
* A new connection to a previously-connected by-name addednode is only opened when
the previous one closes (even if the name starts resolving to something else)
* At most one connection is opened per addednode (even if the name resolves to multiple)
* Unify the code between ThreadOpenAddedNodeConnections and getaddednodeinfo
* Information about open connections is always returned, and the dns argument becomes a dummy
* An IP address and inbound/outbound status are only reported for the (at most one) open connection
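A rough sketch of the at-most-one-connection-per-name matching; the stripped-down Node struct and AddedNodesToConnect helper are illustrative stand-ins, not the actual CNode / ThreadOpenAddedNodeConnections code.

    #include <set>
    #include <string>
    #include <vector>

    struct Node {
        std::string addrName; // the -addnode string this connection was opened for, if any
    };

    // Given the configured addednode entries and the currently open connections, return
    // the names that still need a connection: at most one per name, and a name only
    // becomes eligible again once its previous connection has closed.
    std::vector<std::string> AddedNodesToConnect(const std::vector<std::string>& addedNodes,
                                                 const std::vector<Node>& openNodes)
    {
        std::set<std::string> connectedNames;
        for (const Node& node : openNodes) {
            if (!node.addrName.empty()) connectedNames.insert(node.addrName);
        }
        std::vector<std::string> pending;
        for (const std::string& name : addedNodes) {
            // Matching is done on the name itself, so it stays correct even if the name
            // later resolves to a different (or to more than one) IP address.
            if (!connectedNames.count(name)) pending.push_back(name);
        }
        return pending;
    }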
This reduces the rate of notfound responses by better matching the far
end's expectations. It also improves privacy by removing the ability
to use getdata to probe whether a node has a transaction before it
has been relayed.
If a node is offline, failed outbound connection attempts will crank up
the addrman counter and effectively blow away our state.
This change reduces the problem by only counting attempts made while
the node believes it has outbound connections to at least two
netgroups.
Connect and addnode connections are also not counted, as there is no
reason to unequally penalize them for their more frequent
connections -- though there should be no real effect from this
unless their addnode configuration is later removed.
Wasteful repeated connection attempts while only a few connections are
up are avoided via nLastTry.
This is still somewhat incomplete protection because our outbound
peers could be down but not timed out or might all be on 'local'
networks (although the requirement for multiple netgroups helps).
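A hedged sketch of the gating check; Peer and ShouldCountFailure are illustrative stand-ins, not the actual connection-manager types.

    #include <set>
    #include <string>
    #include <vector>

    struct Peer {
        std::string netgroup;  // e.g. the /16 prefix of the peer's address
        bool fInbound;
    };

    // Decide whether a failed connection attempt should bump the addrman failure counter.
    // Attempts to connect/addnode targets are never counted, and other attempts are only
    // counted while outbound peers already span at least two distinct netgroups.
    bool ShouldCountFailure(bool fAddnodeAttempt, const std::vector<Peer>& peers)
    {
        if (fAddnodeAttempt) return false;
        std::set<std::string> outboundGroups;
        for (const Peer& peer : peers) {
            if (peer.fInbound) continue;
            outboundGroups.insert(peer.netgroup);
        }
        return outboundGroups.size() >= 2;
    }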
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
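A rough sketch of the resulting GETDATA lookup; the TxRecord type, the string keys and lastMempoolReq are simplified placeholders for the real relay map, mempool and per-peer state.

    #include <cstdint>
    #include <map>
    #include <string>

    struct TxRecord {
        std::string tx;       // serialized transaction (placeholder)
        int64_t nTimeInPool;  // when it entered the mempool
    };

    // Answer a GETDATA for a transaction without leaking arrival-order information.
    // mapRelay holds transactions we have actually announced; the mempool is only
    // consulted for entries that were already in it at the peer's last "mempool" request.
    bool FindTxForGetData(const std::map<std::string, std::string>& mapRelay,
                          const std::map<std::string, TxRecord>& mempool,
                          const std::string& txid,
                          int64_t lastMempoolReq,
                          std::string& txOut)
    {
        auto relayIt = mapRelay.find(txid);
        if (relayIt != mapRelay.end()) {
            txOut = relayIt->second;
            return true;
        }
        auto poolIt = mempool.find(txid);
        if (poolIt != mempool.end() && poolIt->second.nTimeInPool <= lastMempoolReq) {
            txOut = poolIt->second.tx;
            return true;
        }
        return false; // reply with notfound instead of revealing an unannounced transaction
    }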
- Ban/Unban/ClearBan call uiInterface.BannedListChanged() as necessary
- Ban/Unban/ClearBan sync to disk if the operation is user-invoked
- Mark node for disconnection automatically when banning
- Lock cs_vNodes while setting disconnected
- Don't spin in a tight loop while setting disconnected
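A minimal sketch of the ban flow summarized in the bullets above; the types and the NotifyBannedListChanged / DumpBanlist placeholders are illustrative, not the actual CNode / uiInterface code.

    #include <atomic>
    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <string>
    #include <vector>

    struct Node {
        std::string addr;
        std::atomic<bool> fDisconnect{false};
    };

    std::mutex cs_vNodes;
    std::vector<Node*> vNodes;
    std::map<std::string, int64_t> banMap; // address -> ban expiry

    void NotifyBannedListChanged() {} // placeholder for uiInterface.BannedListChanged()
    void DumpBanlist() {}             // placeholder for writing the ban list to disk

    void Ban(const std::string& addr, int64_t banUntil, bool userInvoked)
    {
        banMap[addr] = banUntil;
        NotifyBannedListChanged();      // keep the GUI's ban table in sync
        if (userInvoked) DumpBanlist(); // persist immediately only for manual bans

        // Mark any matching connection for disconnection; the network thread tears it
        // down on its next pass, so there is no need to spin waiting for it here.
        std::lock_guard<std::mutex> lock(cs_vNodes);
        for (Node* node : vNodes) {
            if (node->addr == addr) node->fDisconnect = true;
        }
    }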
Locking for each operation here is unnecessary and solves the wrong
problem. Additionally, it introduces a problem when cs_vNodes is held
in an owning class, to which individual CNodeRefs won't have access.
These should be weak pointers anyway, once vNodes contains shared pointers.
Rather than using a refcounting class, use a 3-step process instead.
1. Lock vNodes long enough to snapshot the fields necessary for comparing
2. Unlock and do the comparison
3. Re-lock and mark the resulting node for disconnection if it still exists
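A sketch of the three-step pattern with a stripped-down Node type; the single nLastRecv field stands in for whatever fields the real comparison uses, and the names here are illustrative only.

    #include <algorithm>
    #include <cstdint>
    #include <mutex>
    #include <vector>

    struct Node {
        int64_t id;
        int64_t nLastRecv;     // example field used by the comparison
        bool fDisconnect = false;
    };

    std::mutex cs_vNodes;
    std::vector<Node> vNodes;

    // Snapshot of just the fields the comparison needs, taken while cs_vNodes is held.
    struct NodeEvictionCandidate {
        int64_t id;
        int64_t nLastRecv;
    };

    void DisconnectStalestNode()
    {
        // 1. Lock vNodes only long enough to copy the fields needed for comparison.
        std::vector<NodeEvictionCandidate> candidates;
        {
            std::lock_guard<std::mutex> lock(cs_vNodes);
            for (const Node& node : vNodes) candidates.push_back({node.id, node.nLastRecv});
        }
        if (candidates.empty()) return;

        // 2. Do the (potentially slow) comparison without holding the lock.
        auto worst = std::min_element(candidates.begin(), candidates.end(),
            [](const NodeEvictionCandidate& a, const NodeEvictionCandidate& b) {
                return a.nLastRecv < b.nLastRecv;
            });

        // 3. Re-lock and mark the chosen node for disconnection, if it still exists.
        std::lock_guard<std::mutex> lock(cs_vNodes);
        for (Node& node : vNodes) {
            if (node.id == worst->id) { node.fDisconnect = true; break; }
        }
    }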
* CAddrDB modified so that when the de-serialization code throws an exception, addrman is reset to a clean state
* CAddrDB modified to make unit tests possible
* Regression test created to ensure bug is fixed
* StartNode modified to clear addrman if CAddrDB::Read returns an error code.
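A hedged sketch of the read-failure handling; AddrMan and AddrDB here are stripped-down stand-ins for CAddrMan and CAddrDB, with the actual deserialization elided.

    #include <exception>
    #include <iostream>

    struct AddrMan {
        int size = 0;
        void Clear() { size = 0; } // reset to a clean, empty state
    };

    struct AddrDB {
        // Deserialize peers.dat into addrman; returns false on corruption.
        bool Read(AddrMan& addrman)
        {
            try {
                // ... deserialize file contents into addrman ...
                return true;
            } catch (const std::exception&) {
                addrman.Clear(); // a half-deserialized addrman must not be used
                return false;
            }
        }
    };

    void LoadAddresses(AddrMan& addrman)
    {
        AddrDB adb;
        if (!adb.Read(addrman)) {
            // Mirror the StartNode behaviour: on any read error, start from scratch.
            addrman.Clear();
            std::cout << "Invalid or missing peers.dat; recreating\n";
        }
    }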
Some developers clearly don't get this and have been posting
"improvements" that create clear vulnerabilities. It should
have been better explained in the code, since the design
is somewhat subtle and getting it right is important.
DumpBanList currently does this:
- with lock: take a copy of the banmap
- perform I/O (write out the banmap)
- with lock: mark the banmap non-dirty
If a new ban is added during the I/O operation, it may never be persisted to
disk.
Reorder operations so that the data to be persisted cannot be older than the
time at which the banmap was marked non-dirty.
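A sketch of the reordered operations; WriteBanmapToDisk and the flat map are placeholders for the real banlist.dat serialization and banmap types.

    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <string>

    std::mutex cs_setBanned;
    std::map<std::string, int64_t> setBanned; // address -> ban expiry
    bool setBannedIsDirty = false;

    bool WriteBanmapToDisk(const std::map<std::string, int64_t>&) { return true; } // placeholder I/O

    void DumpBanList()
    {
        std::map<std::string, int64_t> banmapCopy;
        {
            // Clear the dirty flag *before* the copy is written out: any ban added after
            // this point re-marks the map dirty and will be picked up by a later dump.
            std::lock_guard<std::mutex> lock(cs_setBanned);
            if (!setBannedIsDirty) return;
            setBannedIsDirty = false;
            banmapCopy = setBanned;
        }
        if (!WriteBanmapToDisk(banmapCopy)) {
            // The write failed, so the on-disk state is still stale.
            std::lock_guard<std::mutex> lock(cs_setBanned);
            setBannedIsDirty = true;
        }
    }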
This avoids sending more pointless INVs around filter updates and
prevents using filter updates to time-tag transactions.
It also adds locking for fRelayTxes.
By eliminating queued entries from the mempool response and responding
only at trickle time, the mempool message no longer leaks transaction
arrival-order information (as the mempool itself is also sorted), at
least no more than relay itself leaks it.
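A rough sketch of folding the mempool reply into the trickle path; PeerState, BuildTrickleInv and the string txids are simplified stand-ins for the real per-peer state and inventory types.

    #include <set>
    #include <string>
    #include <vector>

    struct PeerState {
        bool fSendMempool = false;                  // peer sent a "mempool" request
        std::set<std::string> setInventoryTxToSend; // txids queued for the next trickle
    };

    // Called when a "mempool" message arrives: just record the request.
    void ReceivedMempoolRequest(PeerState& peer) { peer.fSendMempool = true; }

    // Called at trickle time: fold the mempool response into the normal, sorted
    // announcement path instead of answering immediately.
    std::vector<std::string> BuildTrickleInv(PeerState& peer, const std::set<std::string>& mempoolTxids)
    {
        if (peer.fSendMempool) {
            peer.fSendMempool = false;
            // Entries already queued for announcement are not duplicated, so the reply
            // reveals nothing about arrival order beyond what relay itself reveals.
            peer.setInventoryTxToSend.insert(mempoolTxids.begin(), mempoolTxids.end());
        }
        std::vector<std::string> inv(peer.setInventoryTxToSend.begin(), peer.setInventoryTxToSend.end());
        peer.setInventoryTxToSend.clear();
        return inv; // std::set keeps this sorted, matching the sorted-mempool behaviour
    }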
Note: Some seeds aren't actually returning an IP for their name entries, so
they're being added to addrman with a source of [::].
This commit shouldn't change that behavior, for better or worse.
Previously we used the CInv that would be sent to the peer announcing the
transaction as the key, but using the txid instead allows us to decouple the
p2p layer from the application logic (which relies on this map to avoid
duplicate tx requests).
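A small sketch of the txid-keyed map; the TxId alias and ShouldRequest helper are illustrative, not the actual mapAlreadyAskedFor code.

    #include <chrono>
    #include <map>
    #include <string>

    using TxId = std::string; // stand-in for uint256
    using Microseconds = std::chrono::microseconds;

    // Before: the key was the full announcement (message type + hash), tying the map to
    // the p2p layer. Keying by txid keeps the "don't re-request this transaction yet"
    // logic independent of how the transaction happened to be announced.
    std::map<TxId, Microseconds> mapAlreadyAskedFor;

    bool ShouldRequest(const TxId& txid, Microseconds now, Microseconds minInterval)
    {
        auto it = mapAlreadyAskedFor.find(txid);
        if (it != mapAlreadyAskedFor.end() && now - it->second < minInterval) return false;
        mapAlreadyAskedFor[txid] = now;
        return true;
    }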