This is a simple change that makes our accept requirements the
same as our request requirements, (ever so slightly) further
decoupling our consensus logic from our FindNextBlocksToDownload
logic in net_processing.
There is no reason to always store blocks on disk just because a
peer is whitelisted. This appears to be a historical quirk to
avoid breaking things when the accept limits were added.
Removes the whitelisted-peer check (the difference in behavior
here makes little sense and will be removed anyway) and no longer
requires that unrequested blocks with the same work as our tip be
dropped (in part because we *do* request those blocks).
2530bf2 net: Add missing lock in ProcessHeadersMessage(...) (practicalswift)
Pull request description:
Add missing lock in `ProcessHeadersMessage(...)`.
Reading the variable `mapBlockIndex` requires holding the mutex `cs_main`.
The new "Disconnect outbound peers relaying invalid headers" code added in commit 37886d5e2f and merged as part of #11568 two days ago did not lock `cs_main` prior to accessing `mapBlockIndex`.
Tree-SHA512: b799c234be8043d036183a00bc7867bbf3bd7ffe3baa94c88529da3b3cd0571c31ed11dadfaf29c5b8498341d6d0a3c928029a43b69f3267ef263682c91563a3
Nowhere else in the protocol do we send headers which are for
blocks we have not fully validated except in response to getheaders
messages with a null locator. On my public node I have not seen any
such request (whether for an invalid block or not) in at least two
years of debug.log output, indicating that this should have minimal
impact.
Reading the variable mapBlockIndex requires holding the mutex cs_main.
The new "Disconnect outbound peers relaying invalid headers" code
added in commit 37886d5e2f and merged
as part of #11568 two days ago did not lock cs_main prior to accessing
mapBlockIndex.
cc5c39d [Build] Add AM_OBJCXXFLAGS and QT_PIE_FLAGS to OBJCXXFLAGS to future-proof darwin targets (fanquake)
f8c6697 Fix automake warnings when running autogen.sh (Evan Klitzke)
Pull request description:
Adjusted @eklitzke's commit to completely remove GZIP_ENV.
Added a commit to address OBJCXXFLAGS.
Rebased on master.
Relevant info from @theuni & #11013 below.
--------
GZIP_ENV was indeed added for determinism, but gitian exports this as needed, so it's not really necessary. I'd rather just remove it.
The mm.o rule was added to support Xcode 4.2's ancient version of automake. That's irrelevant now, so it makes sense to remove that too.
All darwin targets are PIE by default, so we don't technically need the flags, but I'd be more comfortable if we hooked up the OBJCXXFLAGS in case future ones are added.
--------
The second commit addresses the last point, but could probably use a better commit message.
These warnings are removed from autogen output:
```
Makefile.am:12: warning: user variable 'GZIP_ENV' defined here ...
/usr/local/Cellar/automake/1.15.1/share/automake-1.15/am/distdir.am: ... overrides Automake variable 'GZIP_ENV' defined here
src/Makefile.am: installing 'build-aux/depcomp'
src/Makefile.am:503: warning: user target '.mm.o' defined here ...
/usr/local/Cellar/automake/1.15.1/share/automake-1.15/am/depend2.am: ... overrides Automake target '.mm.o' defined here
```
Tree-SHA512: bd59df5f6d3aafe35d5e36925bfe61cc71e774583a0438d7dd946c9e7ecf6e59d42f90a58b8cfef0faa404c81050338ad4cefe721b4a949af881e73b6ab254d4
37886d5e2 Disconnect outbound peers relaying invalid headers (Suhas Daftuar)
4637f1852 moveonly: factor out headers processing into separate function (Suhas Daftuar)
Pull request description:
Alternate to #11446.
Disconnect outbound (non-manual) peers that serve us block headers that are already known to be invalid, but exempt compact block announcements from such disconnects.
We restrict disconnection to outbound peers that are using up an outbound connection slot, because we rely on those peers to give us connectivity to the honest network (our inbound peers are not chosen by us and hence could all be from an attacker/sybil). Maintaining connectivity to peers that serve us invalid headers is sometimes desirable, eg after a soft-fork, to protect unupgraded software from being partitioned off the honest network, so we prefer to only disconnect when necessary.
Compact block announcements are exempted from this logic to comply with BIP 152, which explicitly permits nodes to relay compact blocks before fully validating them.
Tree-SHA512: 3ea88e4ccc1184f292a85b17f800d401d2c3806fefc7ad5429d05d6872c53acfa5751e3df83ce6b9c0060ab289511ed70ae1323d140ccc5b12e3c8da6de49936
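A hedged sketch of the decision described above, using stand-in types and flag names rather than the actual net_processing code: punishment is limited to automatic outbound connections, and headers that arrived via compact block relay are exempt.

```cpp
// Sketch only: simplified stand-in types, not Bitcoin Core's actual code.
struct PeerInfo {
    bool is_outbound;              // we initiated the connection
    bool is_manual;                // added via addnode; never auto-disconnected
    bool should_disconnect = false;
};

// Called when a received header maps to a block already known to be invalid.
// `via_compact_block` is true if the header arrived in a compact block
// announcement, which BIP 152 permits to be relayed before full validation.
void MaybePunishPeerForInvalidHeader(PeerInfo& peer, bool via_compact_block)
{
    if (via_compact_block) return;                    // exempt compact block announcements
    if (!peer.is_outbound || peer.is_manual) return;  // only act on automatic outbound slots
    peer.should_disconnect = true;                    // free the slot for a better peer
}
```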
1. "If a pull request is not to be considered for merging (yet), please
prefix the ..."
2. If a particular commit references another issue, please add the reference. For
example: `refs #1234` or `fixes #4321`.
fd3a2f3 [tests] Add fuzz testing for BlockTransactions and BlockTransactionsRequest (practicalswift)
Pull request description:
The `BlockTransactions` deserialization code is reachable with tainted data via `ProcessMessage(…, "BLOCKTXN", vRecv [tainted], …)`.
The same thing applies to `BlockTransactionsRequest` which is reachable via `"GETBLOCKTXN"`.
Tree-SHA512: 64560ea344bc6145b940472f99866b808725745b060dedfb315be400bd94e55399f50b982149645bd7af7ed9935fd28751d7daf0d3f94a8e2ed3bc52e3325ffb
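A minimal sketch of what such a deserialization fuzz target can look like, assuming Bitcoin Core's serialization facilities (CDataStream, PROTOCOL_VERSION, and the BlockTransactions type from blockencodings.h); the in-tree harness may be organized differently.

```cpp
// Sketch of a deserialization fuzz target; assumes Core's serialization headers.
#include "blockencodings.h"   // BlockTransactions, BlockTransactionsRequest
#include "streams.h"          // CDataStream
#include "version.h"          // PROTOCOL_VERSION

#include <ios>
#include <vector>

static void FuzzBlockTxn(const std::vector<unsigned char>& buffer)
{
    CDataStream ds(buffer, SER_NETWORK, PROTOCOL_VERSION);
    try {
        BlockTransactions btxn;
        ds >> btxn;   // exercises the same path reached via a "blocktxn" message
    } catch (const std::ios_base::failure&) {
        // Malformed input is expected to throw; the harness only cares about
        // crashes, hangs, or memory errors detected by the fuzzer/sanitizers.
    }
}
```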
e065249 Add unit test for outbound peer eviction (Suhas Daftuar)
5a6d00c Permit disconnection of outbound peers on bad/slow chains (Suhas Daftuar)
c60fd71 Disconnecting from bad outbound peers in IBD (Suhas Daftuar)
Pull request description:
The first commit will disconnect an outbound peer that serves us a headers chain with insufficient work while we're in IBD.
The second commit introduces a way to disconnect outbound peers whose chains fall out of sync with ours:
For a given outbound peer, we check whether their best known block (which is known from the blocks they announce to us) has at least as much work as our tip. If it doesn't, we set a 20 minute timeout, and if we still haven't heard about a block with as much work as our tip had when we set the timeout, then we send a single getheaders message, and wait 2 more minutes. If after two minutes their best known block has insufficient work, we disconnect that peer.
We protect 4 of our outbound peers (who provide some "good" headers chains, ie a chain with at least as much work as our tip at some point) from being subject to this logic, to prevent excessive network topology changes as a result of this algorithm, while still ensuring that we have a reasonable number of nodes not known to be on bogus chains.
We also don't require our peers to be on the same chain as us, to prevent accidental partitioning of the network in the event of a chain split. Note that if our peers are ever on a more work chain than our tip, then we will download and validate it, and then either reorg to it, or learn of a consensus incompatibility with that peer and disconnect. This PR is designed to protect against peers that are on a less work chain which we may never try to download and validate.
Tree-SHA512: 2e0169a1dd8a7fb95980573ac4a201924bffdd724c19afcab5efcef076fdbe1f2cec7dc5f5d7e0a6327216f56d3828884f73642e00c8534b56ec2bb4c854a656
Currently we have no rotation of outbound peers. If an outbound peer
stops serving us blocks, or is on a consensus-incompatible chain with
less work than our tip (but otherwise valid headers), then we will never
disconnect that peer, even though that peer is using one of our 8
outbound connection slots. Because we rely on our outbound peers to
find an honest node in order to reach consensus, allowing an
incompatible peer to occupy one of those slots is undesirable,
particularly if it is possible for all such slots to be occupied by such
peers.
Protect against this by always checking to see if a peer's best known
block has less work than our tip, and if so, set a 20 minute timeout --
if the peer is still not known to have caught up to a chain with as much
work as ours after 20 minutes, then send a single getheaders message,
wait 2 more minutes, and if a better header hasn't been received by then,
disconnect that peer.
Note:
- we do not require that our peer sync to the same tip as ours, just an
equal or greater work tip. (Doing otherwise would risk partitioning the
network in the event of a chain split, and is also unnecessary.)
- we pick 4 of our outbound peers and do not subject them to this logic,
to be more conservative. We don't wish to permit temporary network
issues (or an attacker) to excessively disrupt network topology.
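Below is a hedged sketch of the timer logic described above, with simplified per-peer state and hypothetical names; the actual implementation lives in net_processing.cpp and differs in structure.

```cpp
// Sketch only: illustrates the 20-minute / 2-minute eviction timer, not the
// actual per-peer state kept in net_processing.cpp.
#include <cstdint>

static constexpr int64_t CHAIN_SYNC_TIMEOUT = 20 * 60;    // seconds
static constexpr int64_t HEADERS_RESPONSE_TIME = 2 * 60;  // seconds

struct ChainSyncTimeoutState {
    int64_t timeout = 0;          // when to act next; 0 means no timer running
    bool sent_getheaders = false;
    bool protect = false;         // one of the few protected outbound peers
};

// peer_work_ok: the peer's best known block has at least as much work as our
// tip had when the timer was (or would be) started.
void ConsiderEviction(ChainSyncTimeoutState& st, bool peer_work_ok, int64_t now,
                      bool& should_send_getheaders, bool& should_disconnect)
{
    should_send_getheaders = should_disconnect = false;
    if (st.protect) return;

    if (peer_work_ok) {
        // Peer caught up (or was never behind): clear any pending timeout.
        st.timeout = 0;
        st.sent_getheaders = false;
    } else if (st.timeout == 0) {
        // Peer's chain has too little work: start the 20-minute clock.
        st.timeout = now + CHAIN_SYNC_TIMEOUT;
    } else if (now > st.timeout) {
        if (!st.sent_getheaders) {
            // Give the peer one last chance: send getheaders, wait 2 more minutes.
            st.sent_getheaders = true;
            st.timeout = now + HEADERS_RESPONSE_TIME;
            should_send_getheaders = true;
        } else {
            // Still no chain with as much work as our tip: evict.
            should_disconnect = true;
        }
    }
}
```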
fa81534 Add share/rpcuser to dist. source code archive (MarcoFalke)
Pull request description:
As the legacy rpcuser and rpcpassword options have been deprecated since 0.12.0, we should actually include the script that generates the new auth pair in the distributed source code archive.
Ref: #6753
(Tagging for backport, since it is a trivial bugfix)
Tree-SHA512: f2737957a92396444573f41071a785be5fb318df9efeb3ade7e56b3b56d512e5f9ca36723365fe5be8aaee69c5e8d8ed1178510bf02186c848b3910ee001ecb9
Change suggested by Cory Fields <cory-nospam-@coryfields.com>, who noticed
that listsinceblock would ignore invalid block hashes, causing it to return a
completely unfiltered list of transactions.
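A hedged sketch of the shape of this fix (assuming Core identifiers such as mapBlockIndex, CBlockIndex, uint256, JSONRPCError, and RPC_INVALID_ADDRESS_OR_KEY; the actual wallet RPC code differs in detail): the block-hash argument is resolved against the block index, and an unknown hash now raises an RPC error instead of being silently ignored.

```cpp
// Sketch only: not the actual listsinceblock implementation.
const CBlockIndex* ParseStartBlock(const uint256& blockhash)
{
    auto it = mapBlockIndex.find(blockhash);
    if (it == mapBlockIndex.end()) {
        // Previously an unknown hash was ignored and listsinceblock returned
        // every wallet transaction; now the caller gets an explicit error.
        throw JSONRPCError(RPC_INVALID_ADDRESS_OR_KEY, "Block not found");
    }
    return it->second;
}
```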
Add comments and:
- Verify that sending to an account causes getaccountaddress to generate new addresses.
- Verify that sending to an account causes getreceivedbyaccount to return the amount received.
- Verify the ways setaccount updates the accounts of existing addresses.
6d51eaefe qa: Fix race condition in sendheaders.py (Suhas Daftuar)
c96b2e4f0 qa: Fix replace-by-fee race condition failures (Suhas Daftuar)
Pull request description:
I think #11407 broke replace-by-fee by introducing a race condition. I was observing frequent failures of replace-by-fee locally, always with a mempool sync failure (the sync call was added in #11407).
It appeared to me like there were two causes: sometimes the node would be in IBD and not request the transaction that was relayed; other times the blocks generated in make_utxo wouldn't have relayed quickly enough for the spend of the transaction to be accepted. I believe I've fixed both potential errors.
ping @instagibbs
Edit: I found a race condition in the sendheaders.py test, where if the verack from the python node wasn't processed before the first block in the test was generated, then no block announcement would go out to that peer, breaking the test. Fixed by adding a sync_with_ping after waiting for verack.
Tree-SHA512: 6ad160966e432c151c1ce6e88ae67e60e47123523bda3755cf7697a00e1a5ba38de8561751826e3d7cf0e492f8c2aec298e1b4de8424ebbaf497f099a1ef1d07
6b1891e2c Add Sent and Received information to the debug menu peer list (Aaron Golliver)
8e4aa35ff move human-readable byte formatting to guiutil (Aaron Golliver)
Pull request description:
Makes the peer list display how much you've uploaded/downloaded from each peer.
Here's a screenshot (~~[outdated](https://i.imgur.com/MhPbItp.png)~~, [current](https://i.imgur.com/K1htrVv.png)) of how it looks. You can now sort to see which peers you've uploaded the most to.
I also moved `RPCConsole::FormatBytes` to `guiutil::formatBytes` so I could use it in the peer list.
Tree-SHA512: 8845ef406e4cbe7f981879a78c063542ce90f50f45c8fa3514ba3e6e1164b4c70bb2093c4e1cac268aef0328b7b63545bc1dfa435c227f28fdb4cb0a596800f5
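As a rough illustration of the byte-formatting helper being moved around here, a hedged plain-C++ sketch (the real guiutil::formatBytes works with Qt strings and translations):

```cpp
// Sketch only: plain C++ stand-in for a formatBytes-style helper.
#include <cstdint>
#include <sstream>
#include <string>

std::string FormatBytes(uint64_t bytes)
{
    const char* units[] = {"B", "KB", "MB", "GB"};
    double value = static_cast<double>(bytes);
    int unit = 0;
    while (value >= 1024.0 && unit < 3) {
        value /= 1024.0;
        ++unit;
    }
    std::ostringstream out;
    out.precision(unit == 0 ? 0 : 2);
    out << std::fixed << value << ' ' << units[unit];
    return out.str();
}
```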
01b52ce Add comment explaining forced processing of compact blocks (Suhas Daftuar)
08fd822 qa: add test for minchainwork use in acceptblock (Suhas Daftuar)
ce8cd7a Don't process unrequested, low-work blocks (Suhas Daftuar)
Pull request description:
A peer could try to waste our resources by sending us unrequested blocks with
low work (eg to fill up our disk). Since e265200 we no longer request blocks until we
know we're on a chain with more than nMinimumChainWork (our anti-DoS
threshold), but we would still process unrequested blocks that had more work
than our tip (which generally has low-work during IBD), even though we may not
yet have found a headers chain with sufficient work.
Fix this and add a test.
Tree-SHA512: 1a4fb0bbd78054b84683f995c8c3194dd44fa914dc351ae4379c7c1a6f83224f609f8b9c2d9dde28741426c6af008ffffea836d21aa31a5ebaa00f8e0f81229e
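A hedged sketch of the acceptance rule described above, with simplified stand-in types (the real check operates on full chain-work values inside the block acceptance code in validation.cpp): an unrequested block is ignored unless its header chain has more work than our tip and at least nMinimumChainWork total work.

```cpp
// Sketch only: illustrates the unrequested-block work checks, not the actual
// AcceptBlock logic.
#include <cstdint>

struct BlockWorkInfo {
    uint64_t chain_work;   // cumulative work of this block's header chain (simplified)
    bool requested;        // did we explicitly request this block?
};

// tip_work: work of our current tip; min_chain_work: anti-DoS threshold.
bool ShouldProcessBlock(const BlockWorkInfo& block, uint64_t tip_work, uint64_t min_chain_work)
{
    if (block.requested) return true;                     // we asked for it, so process it
    if (block.chain_work <= tip_work) return false;       // no more work than our tip
    if (block.chain_work < min_chain_work) return false;  // below anti-DoS threshold: the new check
    return true;
}
```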
d23be30 [verify-commits] Allow revoked keys to expire (Matt Corallo)
Pull request description:
This should fix verify-commits on master.
Tree-SHA512: 9bfca41fdfcdb11f6d07fcbc80a7b2de37706051e963292e0fbb4c608f146c87b65ab1e8395792197b4a7099e89fa045f278a60276672f6540b68d5e15b5a4a7
A peer could try to waste our resources by sending us unrequested blocks with
low work, eg to fill up our disk. Since
e2652002b6 we no longer request blocks until we
know we're on a chain with more than nMinimumChainWork (our anti-DoS
threshold), but we would still process unrequested blocks that had more work
than our tip. This commit fixes that behavior.
7a5f930 Avoid slow transaction search with txindex enabled (João Barbosa)
Pull request description:
This is an alternative to #11507 where a slow search is not attempted (in any case) if `txindex` is enabled.
Tree-SHA512: e680621781a9241c0513ddd79d23b0b42f3ccec8a63ed1c926b35c43321c81c39a1028770397dd5070501dcf644d897026a2bd68a161a4b435f19227c1bbca48
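A hedged sketch of the intended control flow, using hypothetical helper names (ReadTransactionFromTxIndex, SlowTransactionSearch) and assuming Core types uint256 and CTransactionRef; the actual change is to the existing transaction-lookup path.

```cpp
// Sketch only, with hypothetical helpers; not the actual lookup code.
#include "primitives/transaction.h"   // CTransactionRef
#include "uint256.h"

extern bool g_txindex_enabled;                                                   // stand-in for -txindex
bool ReadTransactionFromTxIndex(const uint256& txid, CTransactionRef& tx_out);   // hypothetical
bool SlowTransactionSearch(const uint256& txid, CTransactionRef& tx_out);        // hypothetical

bool LookupTransaction(const uint256& txid, CTransactionRef& tx_out)
{
    if (g_txindex_enabled) {
        // With -txindex the index is authoritative: either it has the
        // transaction or the transaction is unknown, so the slow search is
        // never attempted.
        return ReadTransactionFromTxIndex(txid, tx_out);
    }
    // Without the index, fall back to the (potentially slow) search paths,
    // e.g. the UTXO set or a caller-specified block.
    return SlowTransactionSearch(txid, tx_out);
}
```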
478a89c Avoid opening copied wallet databases simultaneously (Russell Yanofsky)
Pull request description:
Make sure wallet databases have unique fileids. If they don't, throw an error. BDB caches do not work properly when more than one open database has the same fileid, because values written to one database may show up in reads to other databases.
Bitcoin will never create different databases with the same fileid, but users can create them by manually copying database files.
BDB caching bug was reported by @dooglus in https://github.com/bitcoin/bitcoin/issues/11429
Tree-SHA512: e7635dc81a181801f42324b72fe9e0a2a7dd00b1dcf5abcbf27fa50938eb9a1fc3065c2321326c3456c48c29ae6504353b02f3d46e6eb2f7b09e46d8fe24388d
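A hedged sketch of the uniqueness check described above; GetWalletFileId is a hypothetical helper standing in for asking the open BDB handle for its 20-byte unique file id (DB_FILE_ID_LEN), and the in-tree fix works against the real database handles.

```cpp
// Sketch only: refuse to open a second database whose underlying BDB file id
// matches one that is already open, since the shared cache keys pages on that id.
#include <array>
#include <map>
#include <stdexcept>
#include <string>

using BdbFileId = std::array<unsigned char, 20>;   // DB_FILE_ID_LEN is 20 bytes

// Hypothetical helper: ask the opened database handle for its unique file id.
BdbFileId GetWalletFileId(const std::string& wallet_file);

void CheckUniqueFileid(std::map<std::string, BdbFileId>& open_fileids,
                       const std::string& wallet_file)
{
    const BdbFileId id = GetWalletFileId(wallet_file);
    for (const auto& entry : open_fileids) {
        if (entry.first != wallet_file && entry.second == id) {
            // Typically the result of manually copying wallet.dat; opening both
            // copies would let writes to one show up in reads from the other.
            throw std::runtime_error("Cannot open " + wallet_file +
                                     ": it shares a BDB file id with " + entry.first);
        }
    }
    open_fileids[wallet_file] = id;
}
```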
132d322 Remove my testnet DNS seed as I currently don't have the capacity to keep it up to date. (Andreas Schildbach)
Pull request description:
I suggest considering this for backporting.
Tree-SHA512: 2aadb60e9ecab1756f835e62ab784124c61a1fa59380d299ce482f826169da9ed8b7f8615ea9d8d3484eac0b32a9e974685ddc51723c7782a472bc0386243898
Make sure wallet databases have unique fileids. If they don't, throw an error.
BDB caches do not work properly when more than one open database has the same
fileid, because values written to one database may show up in reads to other
databases.
Bitcoin will never create different databases with the same fileid, but users
can create them by manually copying database files.
BDB caching bug was reported by Chris Moore <dooglus@gmail.com>
https://github.com/bitcoin/bitcoin/issues/11429

Fixes #11429
3d1c311 Revert "travis: filter out pyenv" (Cory Fields)
a86e81b travis: move back to the minimal image (Cory Fields)
Pull request description:
The most recent update replaced the minimal image with a large one for the
'generic' image. Switching back to 'minimal' should reduce dependencies and
maybe speed us up some.
It should also eliminate the need for aa2e0f09e.
Tree-SHA512: 0e5f3e97e8d97add07ea228bc5ce1e51e8e069950dbb2871a7eece297995f20b671afdf1c68211ce404cba3ba393d61dfef30ed54d46d6805fde9388f6b4455e
The most recent update replaced the minimal image with a large one for the
'generic' image. Switching back to 'minimal' should reduce dependencies and
maybe speed us up some.
It should also eliminate the need for aa2e0f09e.