Dbwrapper used GetSerializeSize() to compute the size of the buffer
to preallocate. For some cases (specifically: CCoins) this requires
a costly compression call. Avoid this by just using fixed size
preallocations instead.
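Roughly, the batch-write path now looks like this (a sketch; the constant
names and values, and the surrounding function shape, are illustrative
assumptions rather than the exact patch):

    // Reserve fixed-size buffers instead of calling GetSerializeSize(),
    // which for CCoins would trigger an expensive compression pass.
    static const size_t DBWRAPPER_PREALLOC_KEY_SIZE = 64;
    static const size_t DBWRAPPER_PREALLOC_VALUE_SIZE = 1024;

    template <typename K, typename V>
    void Write(const K& key, const V& value)
    {
        CDataStream ssKey(SER_DISK, CLIENT_VERSION);
        ssKey.reserve(DBWRAPPER_PREALLOC_KEY_SIZE);     // fixed preallocation
        ssKey << key;

        CDataStream ssValue(SER_DISK, CLIENT_VERSION);
        ssValue.reserve(DBWRAPPER_PREALLOC_VALUE_SIZE); // fixed preallocation
        ssValue << value;
        // ... hand ssKey/ssValue to the LevelDB write batch ...
    }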
To get back the advantages of the faster GetSerializeSize()
implementations that were removed in "Make GetSerializeSize a wrapper on
top of CSizeComputer", reintroduce them in the few places that need them,
in the form of specialized Serialize() implementations. This actually
leaves us in a better state than before, as these specializations are now
also used when they are invoked indirectly in the serialization of
another object.
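The shape of such a specialization, using the compact-size writer as an
example (a sketch; assumes CSizeComputer exposes seek() and that
GetSizeOfCompactSize() is available, as in serialize.h):

    // Generic path: actually write the value to the stream.
    template <typename Stream>
    void WriteCompactSize(Stream& os, uint64_t nSize);

    // Size-computation path: skip the work and just advance the counter.
    // Because this is an ordinary overload, it is also selected when a
    // compact size is written as part of serializing another object.
    inline void WriteCompactSize(CSizeComputer& os, uint64_t nSize)
    {
        os.seek(GetSizeOfCompactSize(nSize));
    }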
The CSerAction classes' ForRead() method does not depend on any runtime
data, so guarantee that calls to it can be optimized out by
making it constexpr.
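In code, the action tags then reduce to (sketch, mirroring serialize.h):

    struct CSerActionSerialize {
        constexpr bool ForRead() const { return false; }
    };
    struct CSerActionUnserialize {
        constexpr bool ForRead() const { return true; }
    };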
Suggested by Cory Fields.
Remove nType and nVersion as parameters to all serialization methods
and functions. There is only one place where they are read and have an
impact (in CAddress), and even there they do not affect any of the
recursively invoked serializers.
Instead, the few places that need nType or nVersion are changed to read
it directly from the stream object, through GetType() and GetVersion()
methods which are added to all stream classes.
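A sketch of the resulting pattern (member names are illustrative; the
accessors simply expose the values the stream was constructed with):

    // Added to every stream class (CDataStream, CAutoFile, CSizeComputer, ...):
    int GetType() const    { return nType; }
    int GetVersion() const { return nVersion; }

    // A serializer that genuinely needs them asks the stream instead of
    // taking extra parameters, e.g. inside CAddress's serialization code:
    bool fDisk   = (s.GetType() & SER_DISK);
    int nVersion = s.GetVersion();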
Given that in default GetSerializeSize implementations created by
ADD_SERIALIZE_METHODS we're already using CSizeComputer(), get rid
of the specialized GetSerializeSize methods everywhere, and just use
CSizeComputer. This removes a lot of code which isn't actually used
anywhere.
For CCompactSize and CVarInt this actually removes a more efficient
size computing algorithm, which is brought back in a later commit.
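The remaining generic wrapper is roughly (a sketch of the shape, not the
literal patch):

    template <typename T>
    size_t GetSerializeSize(const T& t, int nType, int nVersion = 0)
    {
        // Serialize into a CSizeComputer, which only counts bytes,
        // instead of keeping a hand-written GetSerializeSize() per class.
        return (CSizeComputer(nType, nVersion) << t).size();
    }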
The stream implementations had two cascading layers (the upper one
with operator<< and operator>>, and a lower one with read and write).
The lower layer's functions are never cascaded (nor should they be, as
they should only be used from the higher layer), so make them return
void instead.
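Sketch of the two layers after the change (previously read/write returned
a stream reference purely by convention):

    class CDataStream {
    public:
        // Lower layer: never chained, so no need to return *this.
        void read(char* pch, size_t nSize);
        void write(const char* pch, size_t nSize);

        // Upper layer: operator<< / operator>> still cascade as before.
        template <typename T> CDataStream& operator<<(const T& obj);
        template <typename T> CDataStream& operator>>(T& obj);
    };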
Three categories of modifications:
1)
1 instance of 'The Bitcoin Core developers \n',
1 instance of 'the Bitcoin Core developers\n',
3 instances of 'Bitcoin Core Developers\n', and
12 instances of 'The Bitcoin developers\n'
are made uniform with the 443 instances of 'The Bitcoin Core developers\n'
2)
3 instances of 'BitPay, Inc\.\n' are made uniform with the other 6
instances of 'BitPay Inc\.\n'
3)
4 instances where there was no '(c)' between 'Copyright' and the year,
deviating from the style of the local directory, are brought in line with it.
- `--help`, `--version`, etc. should exit with `0`, i.e. no error (the "not enough args" case should still trigger an error)
- error reading config file should exit with `1`
Slightly refactor AppInitRPC/AppInitRawTx to return standard exit codes (EXIT_FAILURE/EXIT_SUCCESS) or CONTINUE_EXECUTION (-1)
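The resulting control flow in the CLI entry point looks roughly like this
(sketch; only CONTINUE_EXECUTION's value is taken from the text above,
the rest is illustrative):

    static const int CONTINUE_EXECUTION = -1;

    // Returns EXIT_SUCCESS/EXIT_FAILURE when the process should stop
    // (e.g. --help, --version, or a config file error), or
    // CONTINUE_EXECUTION when normal processing should proceed.
    static int AppInitRPC(int argc, char* argv[]);

    int main(int argc, char* argv[])
    {
        int ret = AppInitRPC(argc, argv);
        if (ret != CONTINUE_EXECUTION)
            return ret;    // 0 for --help/--version, 1 for errors
        // ... continue with the actual RPC request ...
    }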
The changes here are dense and subtle, but hopefully all is more explicit
than before.
- CConnman is now in charge of sending data rather than the nodes themselves.
This is necessary because many decisions need to be made with all nodes in
mind, and a model that requires the nodes to call up to their manager quickly
turns into spaghetti.
- The per-node-serializer (ssSend) has been replaced with a (quasi-)const
send-version. Since the send version for serialization can only change once
per connection, we now explicitly tag messages with INIT_PROTO_VERSION if
they are sent before the handshake. With this done, there's no need to lock
for access to nSendVersion.
Also, a new stream is used for each message, so there's no need to lock
during the serialization process (see the sketch after this list).
- This takes care of accounting for optimistic sends, so the
nOptimisticBytesWritten hack can be removed.
- -dropmessagestest and -fuzzmessagestest have not been preserved, as I suspect
they haven't been used in years.
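A sketch of what sending a message looks like under this scheme (the
helper names CNetMsgMaker, GetSendVersion() and PushMessage() reflect the
surrounding codebase and are meant as illustration, not the literal diff):

    // Each message is serialized into its own fresh stream, tagged with
    // the peer's send version (INIT_PROTO_VERSION before the handshake
    // completes), so no lock is needed while serializing.
    const CNetMsgMaker msgMaker(pnode->GetSendVersion());
    connman.PushMessage(pnode, msgMaker.Make(NetMsgType::PING, nonce));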
This allows future software that would relay compact blocks before
full validation to announce only to peers that will not ban if the
block turns out to be invalid.
Check for unreasonable alloc size in LockedPool up front, rather than creating
new Arenas until we improbably find one worthy of the quixotic request or the
system can support no more Arenas.
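The check itself is a short-circuit at the top of the allocator (a sketch;
ARENA_SIZE is the existing per-arena capacity constant):

    void* LockedPool::alloc(size_t size)
    {
        std::lock_guard<std::mutex> lock(mutex);

        // Refuse requests that no single arena could ever satisfy.
        if (size == 0 || size > ARENA_SIZE)
            return nullptr;

        // Otherwise: try the existing arenas, and only then create a new one.
        for (auto& arena : arenas) {
            if (void* addr = arena.alloc(size))
                return addr;
        }
        if (new_arena(ARENA_SIZE, ARENA_ALIGN))
            return arenas.back().alloc(size);
        return nullptr;
    }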
- Use the Python standard logging library
- Run all tests and report all failing test cases (rather than stopping after the first failure)
- If output is different from expected output, log a contextual diff.
Refer to the right file in the top-level README.md.
Having only one file with test documentation saves some confusion about
where things are documented.
GetTotalBlocksEstimate is no longer used and it was the only thing
the checkpoint tests were testing.
Since checkpoints are on their way out, it makes more sense to remove
the test file than to cook up a new pointless test.
This introduces a 'minimum chain work' chainparam which is intended
to be the known amount of work in the chain for the network at the
time of software release. If you don't have this much work, you're
not yet caught up.
This is used instead of the checkpoint-based block-count test.
This criterion is trivial to keep updated, as there is no element of
subjectivity, trust, or position dependence to it. It is also a more
reliable metric of sync status than a block count.
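Roughly, the parameter and its use look like this (a sketch; the hex value
is a placeholder and the exact check in IsInitialBlockDownload() is
illustrative):

    // chainparams.cpp: known cumulative work in the chain at release time.
    consensus.nMinimumChainWork = uint256S("0x00");

    // validation: if our best chain has less work than this, we are not
    // yet caught up (replaces the checkpoint-based block-count test).
    if (chainActive.Tip()->nChainWork <
        UintToArith256(chainParams.GetConsensus().nMinimumChainWork)) {
        return true; // still in initial block download
    }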
Fixes newly initialized bloom filters being constructed with isEmpty
set to false, which still works but forgoes the possible speedup when
checking for key membership in an empty filter.
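With the flag initialized correctly, lookups on a fresh filter can return
immediately (a sketch of the fast path in CBloomFilter::contains(); member
and helper names as in bloom.cpp, used here for illustration):

    bool CBloomFilter::contains(const std::vector<unsigned char>& vKey) const
    {
        if (isFull)
            return true;
        if (isEmpty)
            return false;          // the speedup this fix restores
        for (unsigned int i = 0; i < nHashFuncs; i++) {
            // Every probed bit must be set for a possible match.
            unsigned int nIndex = Hash(i, vKey);
            if (!(vData[nIndex >> 3] & (1 << (7 & nIndex))))
                return false;
        }
        return true;
    }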
This will result in many more calls to CheckBlockIndex when
connecting a list of headers (e.g. in ::HEADERS message processing),
but it's only enabled in debug mode, and that should mostly just be
during IBD, so it should be OK.