Output instances of "BloomFilter" changed to "Bloom filter", in accordance with Wikipedia standard notation:
https://en.wikipedia.org/wiki/Bloom_filter
and to match the majority of existing instances in the same file.
CVectorWriter is useful for overwriting or appending an existing byte vector.
CNetMsgMaker is a shortcut for creating messages on-the-fly which are suitable
for pushing to CConnman.
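For illustration, a rough sketch of how the two classes might be used (the surrounding names such as connman, pnode and nonce are placeholders, not part of this change):

    // Append serialized data onto an existing byte vector, starting at offset 0.
    std::vector<unsigned char> bytes;
    CVectorWriter writer(SER_NETWORK, PROTOCOL_VERSION, bytes, 0);
    writer << uint64_t{42};  // bytes now holds the serialization of 42

    // Build a complete network message on-the-fly and push it to CConnman.
    const CNetMsgMaker msg_maker(PROTOCOL_VERSION);
    connman.PushMessage(pnode, msg_maker.Make(NetMsgType::PING, nonce));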
Remove nType and nVersion as parameters to all serialization methods
and functions. There is only one place where they are read and have an
impact (in CAddress), and even there they do not affect any of the
recursively invoked serializers.
Instead, the few places that need nType or nVersion are changed to read
it directly from the stream object, through GetType() and GetVersion()
methods which are added to all stream classes.
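As a sketch, a serializer that needs the type or version now looks roughly like this (the enclosing class, its fields and the version constant are made up for illustration):

    template <typename Stream>
    void Serialize(Stream& s) const
    {
        // No nType/nVersion parameters; ask the stream itself when needed.
        if (s.GetType() & SER_DISK) {
            ::Serialize(s, disk_only_field);
        }
        if (s.GetVersion() >= SOME_PROTOCOL_VERSION) {
            ::Serialize(s, newer_field);
        }
        ::Serialize(s, field);
    }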
Given that in default GetSerializeSize implementations created by
ADD_SERIALIZE_METHODS we're already using CSizeComputer(), get rid
of the specialized GetSerializeSize methods everywhere, and just use
CSizeComputer. This removes a lot of code which isn't actually used
anywhere.
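Roughly, the generic size computation then boils down to the following (simplified; the actual template plumbing differs):

    template <typename T>
    size_t GetSerializeSize(const T& obj, int nType, int nVersion)
    {
        // CSizeComputer is a stream that only counts bytes instead of writing
        // them, so the normal Serialize() code path doubles as the size
        // calculation.
        CSizeComputer counter(nType, nVersion);
        counter << obj;
        return counter.size();
    }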
For CCompactSize and CVarInt this actually removes a more efficient
size computing algorithm, which is brought back in a later commit.
Three categories of modifications:
1)
1 instance of 'The Bitcoin Core developers \n',
1 instance of 'the Bitcoin Core developers\n',
3 instances of 'Bitcoin Core Developers\n', and
12 instances of 'The Bitcoin developers\n'
are made uniform with the 443 instances of 'The Bitcoin Core developers\n'
2)
3 instances of 'BitPay, Inc\.\n' are made uniform with the other 6
instances of 'BitPay Inc\.\n'
3)
4 instances where there was no '(c)' between 'Copyright' and the year,
deviating from the style of the surrounding directory, are brought in line.
The changes here are dense and subtle, but hopefully all is more explicit
than before.
- CConnman is now in charge of sending data rather than the nodes themselves.
This is necessary because many decisions need to be made with all nodes in
mind, and a model that requires the nodes calling up to their manager quickly
turns to spaghetti.
- The per-node-serializer (ssSend) has been replaced with a (quasi-)const
send-version. Since the send version for serialization can only change once
per connection, we now explicitly tag messages with INIT_PROTO_VERSION if
they are sent before the handshake. With this done, there's no need to lock
for access to nSendVersion.
Also, a new stream is used for each message, so there's no need to lock
during the serialization process (see the sketch after this list).
- This takes care of accounting for optimistic sends, so the
nOptimisticBytesWritten hack can be removed.
- -dropmessagestest and -fuzzmessagestest have not been preserved, as I suspect
they haven't been used in years.
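A sketch of the send-version handling described above (the helper functions are hypothetical, for illustration only):

    // Before the verack, the negotiated version is not known yet, so messages
    // are explicitly serialized with INIT_PROTO_VERSION.
    void PushHandshakeMessage(CConnman& connman, CNode* pnode)
    {
        const CNetMsgMaker init_maker(INIT_PROTO_VERSION);
        connman.PushMessage(pnode, init_maker.Make(NetMsgType::VERSION /*, ... */));
    }

    // After the handshake, the (now effectively constant) send version is used;
    // each message gets its own stream, so no serialization lock is needed.
    void PushPing(CConnman& connman, CNode* pnode, uint64_t nonce)
    {
        const CNetMsgMaker msg_maker(pnode->GetSendVersion());
        connman.PushMessage(pnode, msg_maker.Make(NetMsgType::PING, nonce));
    }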
- Use the Python standard logging library
- Run all tests and report all failing test-cases (rather than stop after one test case fails)
- If output is different from expected output, log a contextual diff.
Refer to the right file in the top-level README.md.
Having only one file with test documentation avoids confusion about
where things are documented.
GetTotalBlocksEstimate is no longer used and it was the only thing
the checkpoint tests were testing.
Since checkpoints are on their way out, it makes more sense to remove
the test file than to cook up a new pointless test.
This splits the output comparison for `bitcoin-tx` into two steps:
- First, check for a data mismatch, parsing the data as JSON or hex
depending on the extension of the output file
- Then, check if the literal string matches
A different error is reported for each of these cases.
This prevents wild goose chases when e.g. a trailing space doesn't match
exactly, and makes sure that both test output and examples are valid
data of the purported format.
Add a pool for locked memory chunks, replacing LockedPageManager.
This is something I've been wanting to do for a long time. The current
approach of locking objects where they happen to be on the stack or heap
in-place causes a lot of mlock/munlock system call overhead, slowing
down any handling of keys.
Also, locked memory is a limited resource on many operating systems (and
using a lot of it bogs down the system), so the previous approach of
locking every page that may contain any key information (but also other
information) is wasteful.
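Conceptually, key material is now carved out of a shared pool of pre-locked arenas instead of mlock'ing each object in place; a sketch (the exact pool interface may differ from what is shown):

    // Ask the shared pool for a chunk of already-locked memory instead of
    // issuing mlock()/munlock() around every individual key object.
    void* key_mem = LockedPoolManager::Instance().alloc(32);
    if (key_mem) {
        // ... use key_mem for secret key material ...
        memory_cleanse(key_mem, 32);  // wipe before giving the chunk back
        LockedPoolManager::Instance().free(key_mem);
    }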
Makes it an error to use flags that are not defined in the libconsensus API.
There has been some confusion as to what to pass to libconsensus, and
(combined with a mention in the release notes) this should clear it up.
Using undocumented flags is a risk because their meaning, and which
combinations are allowed, change from release to release.
For example, after the segwit changes it is no longer possible to pass
(CLEANSTACK | P2SH) without running into an assertion.
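The check itself amounts to masking the caller's flags against the set of flags the API defines; sketched below (the constant and helper names are illustrative, not necessarily the ones added):

    // All script verification flags exposed through the libconsensus header.
    static const unsigned int ALL_DEFINED_FLAGS =
        bitcoinconsensus_SCRIPT_FLAGS_VERIFY_P2SH |
        bitcoinconsensus_SCRIPT_FLAGS_VERIFY_DERSIG |
        bitcoinconsensus_SCRIPT_FLAGS_VERIFY_NULLDUMMY |
        bitcoinconsensus_SCRIPT_FLAGS_VERIFY_CHECKLOCKTIMEVERIFY |
        bitcoinconsensus_SCRIPT_FLAGS_VERIFY_CHECKSEQUENCEVERIFY |
        bitcoinconsensus_SCRIPT_FLAGS_VERIFY_WITNESS;

    // Reject any flag word that contains bits the API does not define.
    static bool verify_flags(unsigned int flags)
    {
        return (flags & ~ALL_DEFINED_FLAGS) == 0;
    }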
There are only a few uses of `insecure_random` outside the tests.
This PR replaces uses of insecure_random (and its accompanying global
state) in the core code with a FastRandomContext that is automatically
seeded on creation.
This is meant to be used for inner loops. The FastRandomContext
can be in the outer scope, or the class itself, then rand32() is used
inside the loop. Useful e.g. for pushing addresses in CNode or the fee
rounding, or randomization for coin selection.
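The intended pattern, roughly (the loop body is made up for illustration):

    // One automatically-seeded context in the outer scope...
    FastRandomContext rng;
    for (int i = 0; i < 1000; ++i) {
        // ...then cheap draws inside the inner loop, with no shared globals.
        uint32_t r = rng.rand32();
        // ... use r ...
    }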
As a context is created per purpose, this gets rid of unprotected
cross-thread shared usage of a single set of globals and should also
eliminate the potential race conditions.
- I'd say CTxMemPool::check is not called often enough to warrant using a
special fast random context, so it is switched to GetRand() (open for
discussion...)
- The use of `insecure_rand` in ConnectThroughProxy has been replaced by
an atomic integer counter. The only goal here is to use a different
credentials pair for each connection so that each one goes over a different
Tor circuit; this does not need to be random or unpredictable.
- To avoid having a FastRandomContext on every CNode, the context is
passed into PushAddress as appropriate.
There remains an insecure_random for test usage in `test_random.h`.