- Avoids string typos (by making the compiler check)
- Makes it easier to grep for handling/generation of a certain message type
- Makes it possible to refer directly to documentation by following the symbol in an IDE
- Move list of valid message types to protocol.cpp: protocol.cpp is a more
  appropriate place for this, and having the array there makes it easier to
  keep things consistent.
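A rough single-file sketch of the pattern (the real constants and validity array are split across protocol.h and protocol.cpp and cover every message type; names here are a small illustrative subset):

    // Sketch: message commands as compiler-checked symbols instead of string
    // literals (collapsed into one file here).
    namespace NetMsgType {
    const char* VERSION = "version";
    const char* GETHEADERS = "getheaders";
    const char* MEMPOOL = "mempool";
    } // namespace NetMsgType

    // Keeping the array of valid commands next to the definitions makes it
    // easy to keep the two in sync.
    static const char* allNetMessageTypes[] = {
        NetMsgType::VERSION,
        NetMsgType::GETHEADERS,
        NetMsgType::MEMPOOL,
    };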
Mempool requests use a fair amount of bandwidth when the mempool is large,
so disconnecting peers that use them follows the same logic as disconnecting
peers fetching historical blocks.
One test in AcceptToMemoryPool was to compare a transaction's fee
against the value returned by GetMinRelayFee. This value was zero for
all small transactions. For larger transactions (between
DEFAULT_BLOCK_PRIORITY_SIZE and MAX_STANDARD_TX_SIZE), this function
was preventing low-fee transactions from ever being accepted.
With this function removed, we will now allow transactions in that range
with fees (including modifications via PrioritiseTransaction) below
the minRelayTxFee, provided that they have sufficient priority.
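A rough sketch of the resulting acceptance rule, with illustrative stand-ins for minRelayTxFee and the priority threshold (not the actual mempool code):

    #include <cstddef>
    #include <cstdint>

    using CAmount = int64_t;

    // Illustrative stand-in for minRelayTxFee.GetFee(nBytes).
    CAmount MinRelayFee(std::size_t nBytes) {
        const CAmount feePerKB = 1000;
        return feePerKB * static_cast<CAmount>(nBytes) / 1000;
    }

    // Before the change, a transaction in the affected size range with
    // nFees < GetMinRelayFee(...) was rejected unconditionally. Now a low fee
    // (including fees modified via PrioritiseTransaction) can be offset by
    // sufficient priority.
    bool MeetsRelayRequirements(CAmount nFees, std::size_t nBytes,
                                double dPriority, double allowFreeThreshold) {
        if (nFees >= MinRelayFee(nBytes))
            return true;                        // fee alone is sufficient
        return dPriority > allowFreeThreshold;  // otherwise require priority
    }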
Store the sum of legacy and P2SH sig op counts. This is calculated in AcceptToMemoryPool, and storing it saves redoing the expensive calculation in block template creation.
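A minimal sketch of the caching idea; the struct and accessor names are illustrative stand-ins for the real CTxMemPoolEntry field:

    // Sketch: compute the combined legacy + P2SH sigop count once at mempool
    // acceptance and carry it with the entry.
    struct MemPoolEntrySketch {
        unsigned int nSigOpCount;

        explicit MemPoolEntrySketch(unsigned int sigOps) : nSigOpCount(sigOps) {}
        unsigned int GetSigOpCount() const { return nSigOpCount; }
    };

    // Block template creation then sums cached values instead of re-running
    // the expensive script analysis per transaction:
    //     nBlockSigOps += entry.GetSigOpCount();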
But keep translating them in the GUI.
This - necessarily - requires duplication of a few messages.
Alternative take on #7134, that keeps the translations from being wiped.
Also document GetWarnings() input argument.
Fixes #5895.
The mruset setInventoryKnown was reduced to a remarkably small 1000
entries as a side effect of sendbuffer size reductions in 2012.
This removes setInventoryKnown filtering from merkleblock responses
because false positives there are especially unattractive, and
also because I'm not sure there aren't race conditions around
the relay pool that would cause some transactions to
be suppressed. (Also, ProcessGetData was accessing
setInventoryKnown without taking the required lock.)
This replaces the use of inv messages to announce new blocks when a peer
requests (via the new "sendheaders" message) that blocks be announced with
headers instead of invs.
Since headers-first was introduced, peers send getheaders messages in response
to an inv, which requires generating a block locator that is large compared to
the size of the header being requested, and requires an extra round-trip before
a reorg can be relayed. Save time by tracking headers that a peer is likely to
know about, and send a headers chain that would connect to a peer's known
headers, unless the chain would be too big, in which case we revert to sending
an inv instead.
Based on @sipa's commit to announce all blocks in a reorg via inv,
which has been squashed into this commit.
Rebased-by: Pieter Wuille
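A sketch of the negotiation and the announce-time decision; the field and constant names are illustrative, though Bitcoin Core keeps a similar per-peer preference flag and caps the number of headers it will announce:

    #include <cstddef>
    #include <string>
    #include <vector>

    struct PeerSketch {
        bool fPreferHeaders = false;  // set once the peer sends "sendheaders"
    };

    struct BlockHeaderSketch {};
    const std::size_t MAX_BLOCKS_TO_ANNOUNCE = 8;  // assumed cap before reverting to inv

    void HandleMessage(PeerSketch& peer, const std::string& command) {
        if (command == "sendheaders")
            peer.fPreferHeaders = true;
    }

    // connectingHeaders is the chain from the peer's best known header to our
    // tip; if none was found or it is too long, fall back to an inv.
    bool AnnounceTipWithHeaders(const PeerSketch& peer,
                                const std::vector<BlockHeaderSketch>& connectingHeaders) {
        return peer.fPreferHeaders &&
               !connectingHeaders.empty() &&
               connectingHeaders.size() <= MAX_BLOCKS_TO_ANNOUNCE;
    }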
For each 'bit' in the filter we really maintain 2 bits, which store one of:
  0: not set
  1-3: set in generation N
After (nElements / 2) insertions, we switch to a new generation, and wipe
entries which already had the new generation number, effectively switching
from the last 1.5 * nElements set to the last 1.0 * nElements set.
This is 25% more space efficient than the previous implementation, and can
(at peak) store 1.5 times the requested amount of history (though only
1.0 times the requested history is guaranteed).
The existing unit tests should be sufficient.
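A simplified sketch of the generation scheme: a full byte per bucket instead of the packed 2 bits, std::hash in place of the real hash functions, and fixed sizing. Only the generation/wipe logic is the point here:

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <vector>

    class RollingFilterSketch {
        std::vector<uint8_t> gen;  // 0 = unset, 1..3 = generation that set it
        uint8_t nGeneration = 1;
        size_t nInserted = 0;
        size_t nElements;

        size_t Bucket(const std::string& key, char n) const {
            return std::hash<std::string>{}(key + n) % gen.size();
        }

    public:
        explicit RollingFilterSketch(size_t elements)
            : gen(elements * 4, 0), nElements(elements) {}

        void insert(const std::string& key) {
            if (nInserted == nElements / 2) {
                nInserted = 0;
                nGeneration = (nGeneration % 3) + 1;  // cycle 1 -> 2 -> 3 -> 1
                for (uint8_t& g : gen)                // wipe entries still tagged
                    if (g == nGeneration) g = 0;      // with the reused generation
            }
            ++nInserted;
            for (char n = 0; n < 3; ++n)              // three "hash functions"
                gen[Bucket(key, n)] = nGeneration;
        }

        bool contains(const std::string& key) const {
            for (char n = 0; n < 3; ++n)
                if (gen[Bucket(key, n)] == 0)
                    return false;
            return true;
        }
    };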
This switches the Merkle tree logic for blocks to one that runs in constant (small) space.
The old code is moved to tests, and a new test is added that for various combinations of
block sizes, transaction positions to compute a branch for, and mutations:
* Verifies that the old code and new code agree for the Merkle root.
* Verifies that the old code and new code agree for the Merkle branch.
* Verifies that the computed Merkle branch is valid.
* Verifies that mutations don't change the Merkle root.
* Verifies that mutations are correctly detected.
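A sketch of the constant-space algorithm, with uint64_t standing in for uint256 and a placeholder combiner standing in for double-SHA256. At most one pending subtree root is kept per level, so space is O(log n):

    #include <cstdint>
    #include <functional>
    #include <vector>

    using Hash = uint64_t;

    Hash HashPair(Hash a, Hash b) {
        return std::hash<Hash>{}(a * 0x9e3779b97f4a7c15ULL + b);  // placeholder
    }

    Hash ComputeMerkleRoot(const std::vector<Hash>& leaves) {
        if (leaves.empty()) return 0;
        Hash inner[64];    // inner[i]: root of a pending complete 2^i-leaf subtree
        uint64_t count = 0;
        for (Hash h : leaves) {
            ++count;
            // Each trailing zero bit of count marks a level whose pending
            // subtree is now complete and can be merged.
            int level = 0;
            for (; !(count & (uint64_t{1} << level)); ++level)
                h = HashPair(inner[level], h);
            inner[level] = h;
        }
        // Finalize: walk up from the lowest pending level, duplicating the
        // running hash where a level has no sibling (Bitcoin's odd-node rule).
        int level = 0;
        while (!(count & (uint64_t{1} << level))) ++level;
        Hash h = inner[level];
        while (count != (uint64_t{1} << level)) {
            h = HashPair(h, h);             // no sibling: pair with itself
            count += uint64_t{1} << level;  // account for the phantom sibling
            ++level;
            for (; !(count & (uint64_t{1} << level)); ++level)
                h = HashPair(inner[level], h);
        }
        return h;
    }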
This makes sure that retransmits by a whitelisted peer also actually
result in a retransmit.
Further, this changes the logic to never relay when we would assign
a DoS score, as we expect to get DoS-banned ourselves as a result.
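A compact sketch of the relay decision under these rules; the flag names are illustrative, and nDoS stands for the misbehavior score validation would assign:

    bool ShouldRelay(bool fAccepted, bool fAlreadyHave, bool fWhitelisted,
                     bool fWhitelistAlwaysRelay, int nDoS) {
        if (nDoS > 0)
            return false;  // relaying this would likely get us DoS-banned too
        if (fAccepted && !fAlreadyHave)
            return true;   // normal relay path
        // Retransmit from a whitelisted peer: force the relay even though we
        // have seen (or previously rejected) the transaction.
        return fWhitelisted && fWhitelistAlwaysRelay;
    }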
Previously peers which implement a protocol version less than NO_BLOOM_VERSION
would not be disconnected for sending a filter command, regardless of the
peerbloomfilter option.
Many node operators do not wish to provide expensive bloom filtering for SPV
clients; previously they had to cherry-pick the commit which enabled the
disconnect logic.
The default should remain false until a sufficient percentage of SPV clients
have updated.
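A sketch of the enforcement rule, assuming a default-false opt-in flag as the commit describes; the names and the plain disconnect are illustrative simplifications:

    const int NO_BLOOM_VERSION = 70011;

    struct PeerSketch {
        int nVersion = 0;
        bool fDisconnect = false;
    };

    void HandleFilterCommand(PeerSketch& peer, bool serveBloomFilters,
                             bool enforceNodeBloom) {
        if (serveBloomFilters)
            return;  // bloom service offered: filter commands are acceptable
        // Previously only peers at >= NO_BLOOM_VERSION were cut off; with the
        // option enabled, older-version peers are disconnected as well.
        if (peer.nVersion >= NO_BLOOM_VERSION || enforceNodeBloom)
            peer.fDisconnect = true;
    }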
1) Chainparams: Explicit CChainParams arg for main:
   - AcceptBlock
   - AcceptBlockHeader
   - ActivateBestChain
   - ConnectTip
   - InitBlockIndex
   - LoadExternalBlockFile
   - VerifyDB parametric constructor
2) Also pick up more Params()\. calls in main.cpp
3) Pass nPruneAfterHeight explicitly to the new FindFilesToPrune() in main.cpp
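A minimal before/after sketch of the refactor, with stub types standing in for the real classes:

    struct CValidationState {};
    struct CBlock {};
    struct CChainParams {};

    // Before: bool ActivateBestChain(CValidationState& state, const CBlock* pblock);
    //         (the body reached for the global Params() internally)

    // After: the dependency is explicit at every call site.
    bool ActivateBestChain(CValidationState& state, const CChainParams& chainparams,
                           const CBlock* pblock) {
        (void)state; (void)chainparams; (void)pblock;  // validation logic elided
        return true;
    }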
The setAskFor duplicate elimination was too eager and removed entries
when we still had no getdata response, allowing the peer to keep
INVing and not responding.
mapAlreadyAskedFor does not keep track of which peer has a request queued for a
particular tx. As a result, a peer can blind a node to a tx indefinitely by
sending many invs for the same tx, and then never replying to getdatas for it.
Each inv received will be placed 2 minutes farther back in mapAlreadyAskedFor,
so a short message containing 10 invs would render that tx unavailable for 20
minutes.
This is fixed by disallowing a peer from having more than one entry for a
particular inv in mapAlreadyAskedFor at a time.
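A sketch of the fix in CNode::AskFor terms, with a 64-bit id standing in for the inv and an illustrative cap:

    #include <cstddef>
    #include <cstdint>
    #include <set>

    const std::size_t SETASKFOR_MAX_SZ = 50000;  // illustrative cap

    struct PeerSketch {
        std::set<uint64_t> setAskFor;  // invs this peer has a request queued for

        // Returns true if the inv may be scheduled in mapAlreadyAskedFor.
        bool AskFor(uint64_t inv) {
            // Refusing a second entry for the same inv means repeated invs can
            // no longer push the request time ever further into the future.
            if (setAskFor.size() >= SETASKFOR_MAX_SZ || setAskFor.count(inv))
                return false;
            setAskFor.insert(inv);
            return true;
        }
    };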
Previously, in blocks-only mode, all inv messages with type!=MSG_BLOCK were
rejected without regard for whitelisting or whitelistalwaysrelay.
As such, whitelisted peers would never send the transaction (which would have
been processed).
Compute the value of inputs that are already in the chain at the time of mempool entry, and only increase priority due to aging for those inputs. This effectively changes the CTxMemPoolEntry's GetPriority calculation from an upper bound to a lower bound.
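A sketch of the classic value-weighted-age priority with the lower-bound rule applied; field and function names are illustrative:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    using CAmount = int64_t;

    struct InputSketch {
        CAmount nValue;
        int nHeight;  // confirmation height, or -1 if unconfirmed at entry
    };

    // Only inputs already in the chain at mempool entry contribute, and only
    // they age as new blocks arrive, so the result is a lower bound.
    double GetPrioritySketch(const std::vector<InputSketch>& inputs,
                             int currentHeight, std::size_t txSize) {
        double dPriority = 0;
        for (const InputSketch& in : inputs) {
            if (in.nHeight < 0)
                continue;  // unconfirmed input: no aging credit
            int conf = currentHeight - in.nHeight + 1;
            if (conf > 0)
                dPriority += static_cast<double>(in.nValue) * conf;
        }
        return dPriority / static_cast<double>(txSize);
    }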
It's possible for coins with the same hash to exist when a duplicate coinbase is created, so previously we read from the database to make sure we had the old coins cached, so that if the new ones were spent, the old ones would also be spent. This pull instead just marks the new coins as not fresh if they are from a coinbase, so if they are spent they will be written all the way down to the database anyway, overwriting any duplicates.
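A sketch of the cache rule with simplified flags; the names mirror the concept rather than the exact code:

    #include <cstdint>
    #include <map>

    // FRESH means "the parent view does not have this coin", so a spent FRESH
    // entry can simply be dropped from the cache instead of being flushed.
    enum CoinFlags : uint8_t { DIRTY = 1, FRESH = 2 };

    struct CacheEntrySketch {
        uint8_t flags = 0;
        bool spent = false;
    };

    void AddCoinsSketch(std::map<uint64_t, CacheEntrySketch>& cache,
                        uint64_t txid, bool fCoinbase) {
        CacheEntrySketch& e = cache[txid];
        e.spent = false;
        e.flags = DIRTY;
        // A duplicate coinbase breaks the FRESH assumption: an older coin with
        // the same hash may exist below. Leaving coinbase coins non-FRESH
        // forces their spends to be written down, overwriting any duplicate.
        if (!fCoinbase)
            e.flags |= FRESH;
    }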