The changes here are dense and subtle, but hopefully everything is more
explicit than before.
- CConnman is now in charge of sending data rather than the nodes themselves.
This is necessary because many decisions need to be made with all nodes in
mind, and a model that requires the nodes to call up to their manager quickly
turns to spaghetti.
- The per-node serializer (ssSend) has been replaced with a (quasi-)const
send version. Since the send version used for serialization can only change
once per connection, messages sent before the handshake are now explicitly
tagged with INIT_PROTO_VERSION; with that done, there's no need to lock for
access to nSendVersion.
Also, a new stream is used for each message, so there's no need to lock
during the serialization process (see the sketch after this list).
- This takes care of accounting for optimistic sends, so the
nOptimisticBytesWritten hack can be removed.
- -dropmessagestest and -fuzzmessagestest have not been preserved, as I suspect
they haven't been used in years.
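A minimal sketch of the flow these bullets describe, with deliberately simplified stand-in types; the struct shapes, member names, and the INIT_PROTO_VERSION value below are illustrative assumptions, not the exact interfaces:

```
// Simplified, illustrative stand-ins only; not the real interfaces.
#include <string>
#include <utility>
#include <vector>

static const int INIT_PROTO_VERSION = 209; // assumed value of the real constant

struct CSerializedNetMsg {            // assumed, reduced shape
    std::string command;
    std::vector<unsigned char> data;
};

struct CNode {
    // Changes exactly once, when the version handshake completes; before that,
    // messages are tagged with INIT_PROTO_VERSION, so reads need no lock.
    int nSendVersion = INIT_PROTO_VERSION;
    int GetSendVersion() const { return nSendVersion; }
};

struct CConnman {
    // The manager, not the node, decides when the bytes actually go out,
    // which also lets it account for optimistic sends in one place.
    void PushMessage(CNode* /*node*/, CSerializedNetMsg&& /*msg*/) { /* queue + send */ }
};

int main()
{
    CConnman connman;
    CNode node;

    // A fresh buffer per message: nothing is shared across messages, so
    // serialization itself needs no locking either.
    std::vector<unsigned char> payload{1, 2, 3, 4, 5, 6, 7, 8}; // e.g. a ping nonce
    connman.PushMessage(&node, CSerializedNetMsg{"ping", std::move(payload)});
}
```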
This allows future software that would relay compact blocks before
full validation to announce only to peers that will not ban them if the
block turns out to be invalid.
Check for an unreasonable allocation size in LockedPool up front, rather than
creating new arenas one after another until we improbably find one large enough
to satisfy the oversized request, or until the system can support no more arenas.
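A sketch of what such an up-front check can look like; the constant value and the free-function shape are illustrative assumptions, not the pool's actual interface:

```
// Sketch only: constant value and function shape are illustrative assumptions.
#include <cstddef>

static const std::size_t ARENA_SIZE = 256 * 1024; // each arena is one fixed-size locked region

void* LockedPoolAllocSketch(std::size_t size)
{
    // Reject zero-sized and oversized requests immediately, instead of
    // creating arena after arena that could never satisfy them anyway.
    if (size == 0 || size > ARENA_SIZE)
        return nullptr;

    // ... try the existing arenas, then create a new one if still allowed ...
    return nullptr; // placeholder for the real allocation path
}
```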
- Use the python standard logging library
- Run all tests and report all failing test cases (rather than stopping after the first failure)
- If output is different from expected output, log a contextual diff.
Refer to the right file in the top-level README.md.
Having only one file with test documentation avoids some confusion about
where things are documented.
GetTotalBlocksEstimate is no longer used, and it was the only thing
the checkpoint tests were testing.
Since checkpoints are on their way out, it makes more sense to remove
the test file than to cook up a new pointless test.
This introduces a 'minimum chain work' chainparam which is intended
to be the known amount of work in the chain for the network at the
time of software release. If you don't have this much work, you're
not yet caught up.
This is used instead of the count of blocks test from checkpoints.
This criterion is trivial to keep updated, as there is no element of
subjectivity, trust, or position dependence to it. It is also a more
reliable metric of sync status than a block count.
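A simplified sketch of the idea; the real value is a 256-bit work total in the chain parameters, and the type and function names below are hypothetical:

```
// Simplified sketch: plain integers and names are for illustration only.
#include <cstdint>

struct ConsensusParamsSketch { uint64_t nMinimumChainWork; }; // known work at release time
struct BlockIndexSketch      { uint64_t nChainWork; };        // cumulative work at this block

bool LikelyStillSyncing(const BlockIndexSketch* tip, const ConsensusParamsSketch& params)
{
    // No subjectivity, trust, or position dependence: if the tip's cumulative
    // work is below the work known to exist at release time, we're not caught up.
    return tip == nullptr || tip->nChainWork < params.nMinimumChainWork;
}
```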
Fixes newly initialized bloom filters being
constructed with isEmpty(false), which still
works but loses the possible speedup when
checking for key membership in an empty filter.
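A reduced sketch of why the flag matters; the class below is hypothetical and only keeps the empty flag, whereas the real filter also tracks fullness and hashes keys with several hash functions:

```
// Hypothetical, reduced filter: only the empty flag and placeholders are shown.
#include <cstddef>
#include <vector>

class BloomFilterSketch
{
    std::vector<unsigned char> vData;
    bool isEmpty;

public:
    // The fix in a nutshell: a freshly constructed filter is marked empty, so
    // lookups can short-circuit instead of hashing against an all-zero array.
    explicit BloomFilterSketch(std::size_t nBytes) : vData(nBytes, 0), isEmpty(true) {}

    void insert(const std::vector<unsigned char>& /*key*/)
    {
        // ... set the key's bits in vData ...
        isEmpty = false;
    }

    bool contains(const std::vector<unsigned char>& /*key*/) const
    {
        if (isEmpty) return false; // fast path restored by the fix
        // ... test the key's bits in vData ...
        return true;               // placeholder for the real bit test
    }
};
```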
This will result in many more calls to CheckBlockIndex when
connecting a list of headers (e.g. in ::HEADERS message processing),
but it's only enabled in debug mode, and that should mostly just be
during IBD, so it should be OK.
UnloadBlockIndex is only used during init if we end up reindexing
to clear our block state so that we can start over. However, at
that time no connections have been brought up as CConnman hasn't
been started yet, so all of the network processing state logic is
empty when it's called.
Additionally, the initialization of the recentRejects set is moved
to InitPeerLogic.
This splits the output comparison for `bitcoin-tx` into two steps:
- First, check for data mismatch, parsing the data as json or hex
depending on the extension of the output file
- Then, check if the literal string matches
Each of these cases produces a different error.
This prevents wild goose chases when e.g. a trailing space doesn't match
exactly, and makes sure that both test output and examples are valid
data of the purported format.
Recent discussion (in IRC meetings, and e.g. #8989) has shown a
preference for the default confirm target for smartfees to be 6 instead
of 2, to avoid overpaying fees for questionable gain.
6 is also a compromise between the GUI's pre-#8989 value of 25 and the
bitcoind `-txconfirmtarget` default of 2. These were unified in #8989,
but that made the (overly expensive) default of 2 the GUI default as well.
```
getmemoryinfo

Returns an object containing information about memory usage.

Result:
{
  "locked": {           (json object) Information about locked memory manager
    "used": xxxxx,      (numeric) Number of bytes used
    "free": xxxxx,      (numeric) Number of bytes available in current arenas
    "total": xxxxxxx,   (numeric) Total number of bytes managed
    "locked": xxxxxx,   (numeric) Amount of bytes that succeeded locking. If this number is smaller than total, locking pages failed at some point and key data could be swapped to disk.
  }
}

Examples:
> bitcoin-cli getmemoryinfo
> curl --user myusername --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getmemoryinfo", "params": [] }' -H 'content-type: text/plain;' http://127.0.0.1:8332/
```
Add a pool for locked memory chunks, replacing LockedPageManager.
This is something I've been wanting to do for a long time. The current
approach of locking objects in place, wherever they happen to be on the stack
or heap, causes a lot of mlock/munlock system call overhead, slowing down any
handling of keys.
Also, locked memory is a limited resource on many operating systems (and
using a lot of it bogs down the system), so the previous approach of locking
every page that may contain any key information (but also other information)
is wasteful.
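A conceptual sketch contrasting the two approaches, POSIX-only and without error handling; TinyLockedArena is a made-up bump allocator for illustration, not the real LockedPool interface:

```
// Conceptual contrast only (POSIX, no error handling).
#include <cstddef>
#include <cstring>
#include <sys/mman.h>

// Previous approach: every secret object mlock()s whatever page it happens to
// sit on, paying a pair of system calls per object and locking whole pages
// that mostly hold unrelated data.
void HandleKeyOldStyle(unsigned char* key, std::size_t len)
{
    mlock(key, len);
    // ... use the key ...
    std::memset(key, 0, len);
    munlock(key, len);
}

// Pool approach: lock one region up front and hand out small chunks from it,
// so per-key handling makes no further mlock/munlock calls and no unrelated
// data shares the locked pages.
struct TinyLockedArena {
    unsigned char* base;
    std::size_t size;
    std::size_t used = 0;

    explicit TinyLockedArena(std::size_t n) : size(n)
    {
        base = static_cast<unsigned char*>(
            mmap(nullptr, n, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
        mlock(base, n); // one lock for the whole pool
    }

    unsigned char* alloc(std::size_t n) // bump allocation, purely illustrative
    {
        if (used + n > size) return nullptr;
        unsigned char* p = base + used;
        used += n;
        return p;
    }
};
```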