This reduces the rate of NOTFOUND responses by better matching the
far end's expectations. It also improves privacy by removing the
ability to use GETDATA to probe for a node having a transaction
before it has been relayed.
Previously the benchmark code used an integer modulus (%) with a
non-constant divisor in the inner loop. This is quite slow on many
processors, especially ones like ARM that lack a hardware divide
instruction. Even on fairly recent x86_64 parts like Haswell, an
integer division can take something like 100 cycles, making it
comparable to the runtime of SipHash itself.
This change avoids the division by using bitmasking instead. This
was especially easy since the count was only ever increased by
doubling, so it is always a power of two.
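
A minimal sketch of the trick (standalone, names illustrative, not
the actual benchmark code):

    #include <cassert>
    #include <cstdint>

    // When count is a power of two, i % count == i & (count - 1),
    // which avoids the divide instruction entirely.
    uint64_t fast_mod_pow2(uint64_t i, uint64_t count) {
        assert(count != 0 && (count & (count - 1)) == 0); // count == 2^k
        return i & (count - 1);
    }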
This change also restarts the timing when the measured execution
time is very low. This avoids min-times of zero in cases where a
single execution ends up below the timer resolution, and it also
reduces the impact of the timing overhead on the final result.
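
A sketch of that restart logic, with hypothetical stand-ins for the
timer and workload (not the actual bench harness):

    #include <chrono>
    #include <cstdint>

    static uint64_t gettime_ns() {
        using namespace std::chrono;
        return duration_cast<nanoseconds>(
            steady_clock::now().time_since_epoch()).count();
    }

    static void run_workload() { /* stand-in for the benchmarked code */ }

    // If one pass finishes below the timer resolution, double the
    // iteration count and restart, so min-times of zero cannot occur.
    static uint64_t measure_avg_ns(uint64_t min_resolution_ns) {
        uint64_t count = 1;
        for (;;) {
            const uint64_t start = gettime_ns();
            for (uint64_t i = 0; i < count; ++i) run_workload();
            const uint64_t elapsed = gettime_ns() - start;
            if (elapsed >= min_resolution_ns) return elapsed / count;
            count <<= 1; // too fast to measure: double and restart
        }
    }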
The formatting of the prints is changed to not use scientific
notation, making the output more machine readable (in particular,
gnuplot croaks on the non-fixed-point form, and it doesn't sort
correctly).
This also hoists all the floating point divisions out of the
semi-hot path, since that was easy to do.
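
For illustration only (not the actual print code), fixed-point output
with the division hoisted into one reciprocal might look like:

    #include <cstdint>
    #include <cstdio>

    // Print fixed-point (never %e) so gnuplot and textual sorting can
    // handle the output; the per-iteration division becomes a single
    // reciprocal multiply hoisted outside the loop.
    static void print_results(const uint64_t* cycles, int n,
                              double cycles_per_sec) {
        const double to_sec = 1.0 / cycles_per_sec; // hoisted division
        for (int i = 0; i < n; ++i) {
            std::printf("%d,%.8f\n", i, cycles[i] * to_sec);
        }
    }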
It might be prudent to break the critical test out into a macro
just to guarantee that it gets inlined. It might also make sense
to just save out the intermediate counts and times and get the
floating point completely out of the timing loop (because, e.g.,
on hardware without a fast FPU, like some ARM parts, it will
still be slow enough to distort the results). I haven't done
either of these in this commit.
Fixing formatting
Adding test case to the automatically generated test case set
Clean up commits
Removing extra whitespace from EOL
Removing extra whitespace on macro line
If a node is offline, failed outbound connection attempts will crank
up the addrman counter and effectively blow away our state.
This change reduces the problem by only counting attempts made while
the node believes it has outbound connections to at least two
netgroups.
Connect and addnode connections are also not counted, as there is no
reason to unequally penalize them for their more frequent
connections -- though there should be no real effect from this
unless their addnode configuration is later removed.
Wasteful repeated connection attempts while only a few connections are
up are avoided via nLastTry.
This is still somewhat incomplete protection because our outbound
peers could be down but not timed out or might all be on 'local'
networks (although the requirement for multiple netgroups helps).
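
A sketch of the gating, with illustrative names rather than the
actual connection-manager code:

    #include <set>
    #include <vector>

    // Only let a failed outbound attempt count against the address
    // when we appear to be online, i.e. we already have outbound
    // peers in at least two distinct netgroups. Manually requested
    // peers (connect/addnode) never count.
    static bool ShouldCountFailure(
        const std::set<std::vector<unsigned char>>& outbound_netgroups,
        bool is_manual_connection) {
        if (is_manual_connection) return false;
        return outbound_netgroups.size() >= 2;
    }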
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
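
A sketch of the intended check, using illustrative names rather than
the actual fields:

    #include <cstdint>

    // Serve a GETDATA for a mempool transaction only if it was
    // already in the mempool at the time of the peer's last MEMPOOL
    // request; otherwise respond NOTFOUND.
    static bool MaySendFromMempool(int64_t tx_entry_time,
                                   int64_t last_mempool_req) {
        return last_mempool_req > 0 && tx_entry_time <= last_mempool_req;
    }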
The current logic for syncing headers may lead to lots of duplicate
getheaders requests being sent: If a new block arrives while the node
is in headers sync, it will send getheaders in response to the block
announcement. When the headers arrive, the message will be of maximum
size, so a follow-up request will be sent; all of that in addition
to the existing headers syncing. This will create a second "chain" of
getheaders requests. If more blocks arrive, this may even lead to
arbitrarily many parallel chains of redundant requests.
This patch changes the behaviour to only request more headers after a
maximum-sized message when it contained at least one unknown header.
This avoids sustaining parallel chains of redundant requests.
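
A sketch of the new condition (names illustrative): a follow-up
getheaders is sent only when a full-size headers message taught us
something new.

    #include <cstddef>

    // Continue syncing only when the batch was full-size AND it
    // contained at least one header we had not seen before.
    static bool ShouldRequestMoreHeaders(std::size_t n_headers,
                                         std::size_t max_headers,
                                         bool received_new_header) {
        return n_headers == max_headers && received_new_header;
    }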
Note that this patch avoids the issues raised in the discussion of
https://github.com/bitcoin/bitcoin/pull/6821: there is no risk of the
node being permanently blocked, since at the latest when a new block
arrives, a new getheaders request will be triggered and syncing will
restart.
Verify that the results are correct (match known values), consistent
(an encrypt followed by a decrypt recovers the original), and
compatible with the previous OpenSSL implementation. Also check that
failed encrypts/decrypts fail in exactly the same way as OpenSSL.
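
A minimal sketch of such a known-answer plus round-trip check, under
the assumption that the in-tree AES256Encrypt/AES256Decrypt classes
take a 32-byte key and process one 16-byte block (vector contents are
placeholders; the real tests use published AES-256 vectors):

    #include <cassert>
    #include <cstring>

    #include "crypto/aes.h" // assumed in-tree header

    static void TestAes256Block(const unsigned char key[32],
                                const unsigned char plain[16],
                                const unsigned char expect_cipher[16]) {
        unsigned char cipher[16], roundtrip[16];
        AES256Encrypt(key).Encrypt(cipher, plain);
        assert(std::memcmp(cipher, expect_cipher, 16) == 0); // known value
        AES256Decrypt(key).Decrypt(roundtrip, cipher);
        assert(std::memcmp(roundtrip, plain, 16) == 0); // round-trip
    }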