Benchmarking framework, loosely based on Google's micro-benchmarking
library (https://github.com/google/benchmark).
Why not use the Google Benchmark framework? Because adding Even More Dependencies
isn't worth it. If we get a dozen or three benchmarks and need nanosecond-accurate
timings of threaded code, then switching to the full-blown Google Benchmark library
should be considered.
The benchmark framework is hard-coded to run each benchmark for one wall-clock second,
and then spits out .csv-format timing information to stdout. It is left as an
exercise for later (or maybe never) to add command-line arguments to specify which
benchmark(s) to run, how long to run them for, how to format results, etc etc etc.
Again, see the Google Benchmark framework for where that might end up.
See src/bench/MilliSleep.cpp for a sanity-test benchmark that just benchmarks
'sleep 100 milliseconds.'
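For illustration, here is a minimal, self-contained sketch of the timing-loop idea; the real framework's API differs, this just mimics the one-second run and the CSV columns shown in the sample output below:
```
// Illustrative only: run a body repeatedly for ~1 wall-clock second,
// then print CSV in the same shape as the framework's output.
#include <algorithm>
#include <chrono>
#include <iostream>
#include <limits>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::seconds(1);
    double minTime = std::numeric_limits<double>::max();
    double maxTime = 0.0;
    double total = 0.0;
    int count = 0;

    while (clock::now() < deadline) {
        const auto begin = clock::now();
        std::this_thread::sleep_for(std::chrono::milliseconds(100)); // the "work" being measured
        const double elapsed = std::chrono::duration<double>(clock::now() - begin).count();
        minTime = std::min(minTime, elapsed);
        maxTime = std::max(maxTime, elapsed);
        total += elapsed;
        ++count;
    }

    std::cout << "Benchmark,count,min,max,average\n";
    std::cout << "Sleep100ms," << count << ',' << minTime << ','
              << maxTime << ',' << (total / count) << '\n';
}
```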
To compile and run benchmarks:
```
cd src; make bench
```
Sample output:
```
Benchmark,count,min,max,average
Sleep100ms,10,0.101854,0.105059,0.103881
```
This makes sure that the event loop eventually terminates, even if an
event (like an open timeout, or a hanging connection) happens to be
holding it up.
Add a WaitExit() call to http's WorkQueue so that the work queue is only
deleted once all worker threads have stopped.
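Roughly, the idea is the following (a simplified sketch, not the actual WorkQueue code):
```
// Workers register on entry/exit, and WaitExit() blocks the shutdown path
// until every worker has left, so deleting the queue afterwards is safe.
#include <condition_variable>
#include <mutex>

class WorkQueueSketch
{
    std::mutex cs;
    std::condition_variable cond;
    int numThreads = 0;

public:
    void ThreadEnter() { std::lock_guard<std::mutex> lock(cs); ++numThreads; }
    void ThreadExit()
    {
        std::lock_guard<std::mutex> lock(cs);
        --numThreads;
        cond.notify_all();
    }
    // Wait until no worker threads are running; only then delete the queue.
    void WaitExit()
    {
        std::unique_lock<std::mutex> lock(cs);
        cond.wait(lock, [this] { return numThreads == 0; });
    }
};
```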
This fixes a problem that was reproducible by pressing Ctrl-C during
AppInit2:
```
/usr/include/boost/thread/pthread/condition_variable_fwd.hpp:81: boost::condition_variable::~condition_variable(): Assertion `!ret' failed.
/usr/include/boost/thread/pthread/mutex.hpp:108: boost::mutex::~mutex(): Assertion `!posix::pthread_mutex_destroy(&m)' failed.
```
I was assuming that `threadGroup->join_all();` would always have been
called by the time Shutdown() is entered. However, this is not the case in
bitcoind's AppInit2-non-zero-exit path, where it "was left out intentionally
here".
Shutting down the HTTP server currently breaks off all requests in progress.
This can create a race condition with the RPC `stop` command, where the calling
process never receives confirmation.
This change removes the listening sockets on shutdown so that no new
requests can come in, but no longer breaks off requests in progress.
Meant to fix #6717.
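A sketch of the shutdown ordering, assuming the server keeps the evhttp_bound_socket handles returned by libevent's evhttp_bind_socket_with_handle (illustrative, not the actual patch):
```
// Stop accepting new requests, but let requests already in flight complete
// before the event loop is torn down.
#include <event2/http.h>
#include <vector>

static std::vector<evhttp_bound_socket*> boundSockets;

static void StopAccepting(struct evhttp* http)
{
    for (evhttp_bound_socket* sock : boundSockets) {
        evhttp_del_accept_socket(http, sock); // remove listening socket; no new connections
    }
    boundSockets.clear();
    // In-progress requests keep running; the event loop exits once they finish.
}
```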
CalculateMemPoolAncestors was always looping over a transaction's inputs
to find in-mempool parents. When adding a new transaction this is the
correct behavior, but when removing a transaction we want to use the
ancestor set calculated by walking mapLinks (in general the same set,
except during a reorg, when the mempool is in an inconsistent state and
the mapLinks-based calculation is the correct one).
* Raise the debug window when hidden behind other windows
* Switch to the debug window when on another virtual desktop
* Show the debug window when minimized
This change is a conceptual copy of 5ffaaba and 382e9e2
Assume that when a wallet transaction has a valid block hash and transaction position
in it, the transaction is actually there. We're already trusting wallet data in a
much more fundamental way anyway.
To prevent backward compatibility issues, a new record is used for storing the
block locator in the wallet. Old wallets will see a wallet file synchronized up
to the genesis block, and rescan automatically.
The two timeouts, for the server and for the client, are essentially different:
- In the case of the server it should be a low value, to avoid clients
clogging up connection slots
- In the case of the client it should be a high value, to accommodate slow
responses from the server, for example for slow queries or when the
lock is contended
Split the options into `-rpcservertimeout` and `-rpcclienttimeout` with
respective defaults of 30 and 900.
Associate with each CTxMemPoolEntry the total size and fees of its descendant
mempool transactions. Sort the mempool by max(feerate of entry, feerate
of entry plus its descendants). Update these statistics on the fly as
transactions enter or leave the mempool.
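A simplified stand-in for that ordering (not the real CTxMemPoolEntry comparator):
```
// Sort key is max(feerate of the entry itself,
//                 feerate of the entry plus all its in-mempool descendants).
#include <algorithm>
#include <cstdint>

struct EntrySketch {
    uint64_t nFee = 0;                  // fee of this transaction
    uint64_t nSize = 0;                 // size of this transaction (assumed > 0)
    uint64_t nFeesWithDescendants = 0;  // fee of this tx + all in-mempool descendants
    uint64_t nSizeWithDescendants = 0;  // size of this tx + all in-mempool descendants
};

static double DescendantScore(const EntrySketch& e)
{
    const double own = double(e.nFee) / e.nSize;
    const double withDescendants = double(e.nFeesWithDescendants) / e.nSizeWithDescendants;
    return std::max(own, withDescendants);
}

// Higher score sorts first when ordering the mempool.
static bool CompareByDescendantScore(const EntrySketch& a, const EntrySketch& b)
{
    return DescendantScore(a) > DescendantScore(b);
}
```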
Also add ancestor and descendant limiting, so that a transaction can
be rejected if the number or size of its unconfirmed ancestors exceeds
a target, or if adding it would give some other mempool entry a set of
unconfirmed in-mempool descendants that is too large in count or size.
Ignore SIGPIPE on all non-Win32 OSes; otherwise an unexpectedly disconnecting
RPC client will terminate the application. This problem was introduced
with the libevent-based RPC server.
Fixes #6660.
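The fix amounts to something like this sketch (non-Windows only):
```
// Ignore SIGPIPE so that writing to a socket whose peer has disconnected
// returns an error (EPIPE) instead of killing the process.
#ifndef WIN32
#include <signal.h>

static void IgnoreSigpipe()
{
    signal(SIGPIPE, SIG_IGN);
}
#endif
```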
The thin-space Qt HTML hack results in cut-off characters/numbers after a line break.
Avoid word-wrap line breaks by using a smaller font and a line break before each alternative value.
GetTransaction needs to keep cs_main locked until ReadBlockFromDisk completes, because since the introduction of pruning the data inside a CBlockIndex can change. This lock was already held by all calls to GetTransaction except rest_tx.
- add missing NULL pointer checks
- add better comments and reorder some code in rpcconsole.cpp
- remove unneeded leftovers in bantable.cpp
- update bantable column sizes to prevent the "banned until" column from being cut off
- remove the banListChanged signal from the client model
- directly call clientModel->getBanTableModel()->refresh() instead of taking the detour
via clientModel->updateBanlist()
- also fix clearing the peer detail window when selecting (clicking) peers
in the ban list
Continues Johnathan Corgan's work.
Publishing multipart messages
Bugfix: Add missing zmq header includes
Bugfix: Adjust build system to link ZeroMQ code for Qt binaries
We log the cleanSubVer as part of connect. It is not uniquely more informative
than any of the other data we have about a peer, and is often less so. It is also
often quite long now. There is no need to output it as part of mempoolrej,
AcceptToMemoryPool, or pong entries. Leaving it out makes our log entries
more uniform and consistent.
This patch changes the way the suffix (giving the requested data format) is
parsed for REST requests. Before, the string was split at '.'
characters and it was assumed that the second part was the suffix.
Now, we look for the last dot and use that to determine the suffix.
This allows for strings that contain dots (not used now, though), and
seems, in general, to be clearer and more intuitive.
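A minimal sketch of the new parsing; the function name here is hypothetical:
```
// Take everything after the LAST '.' as the format suffix, so request
// strings that themselves contain dots still parse correctly.
#include <string>

static void SplitFormatSuffix(const std::string& strReq, std::string& param, std::string& suffix)
{
    const std::string::size_type pos = strReq.rfind('.');
    if (pos == std::string::npos) {
        param = strReq;   // no suffix present
        suffix.clear();
        return;
    }
    param = strReq.substr(0, pos);
    suffix = strReq.substr(pos + 1);
}
```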
Complete rescan is incompatible with pruning, but rescan is optional on
our wallet key import RPCs. Import on use is very useful in some common
situations in conjunction with pruning, e.g. merchant payment tracking.
This reenables importprivkey/importaddress/importpubkey when rescan
is not used.
In the future we should consider changing the rescan argument to accept a depth
or date, allowing limited rescanning when compatible with the retained
block depth.
Lets nodes advertise that they offer bloom filter support explicitly.
The protocol version bump allows SPV nodes to assume that NODE_BLOOM is
set if NODE_NETWORK is set for pre-70011 nodes.
Also adds an option to turn bloom filter support off for nodes which
advertise a version number >= 70011. Nodes attempting to use bloom
filters on such protocol versions are banned, and a later upgrade
should drop nodes of an older version which attempt to use bloom
filters.
Much code stolen from Peter Todd.
Implements BIP 111
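A hedged sketch of the resulting check from an SPV client's point of view (constants follow BIP 111; illustrative, not the patch itself):
```
// Does this peer accept bloom filter (filterload/filteradd) messages?
#include <cstdint>

static const uint64_t NODE_NETWORK = (1 << 0);
static const uint64_t NODE_BLOOM   = (1 << 2);
static const int NO_BLOOM_VERSION  = 70011;

static bool PeerSupportsBloom(uint64_t nServices, int nVersion)
{
    if (nVersion < NO_BLOOM_VERSION) {
        // Pre-70011 peers don't know the NODE_BLOOM bit; assume bloom
        // support whenever they advertise NODE_NETWORK.
        return (nServices & NODE_NETWORK) != 0;
    }
    // 70011 and later: bloom filters are only allowed if NODE_BLOOM is set.
    return (nServices & NODE_BLOOM) != 0;
}
```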