Create a monitoring task that counts how many blocks have been found in the last four hours.
If very few or too many have been found, an alert is triggered.
"Very few" and "too many" are set based on a false positive rate of once every fifty years of constant running with constant hashing power, which works out to getting 5 or fewer or 48 or more blocks in four hours (instead of the average of 24).
Only one alert per day is triggered, so if you get disconnected from the network (or are being Sybil'ed) -alertnotify will be triggered after 3.5 hours but you won't get another -alertnotify for 24 hours.
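As a rough cross-check of those thresholds (this is only an illustration of the arithmetic, not code from the patch), the tails of a Poisson distribution with mean 24 can be computed directly:

    #include <cmath>
    #include <cstdio>

    // P(X = k) for a Poisson(lambda) variable, computed in log space so that
    // k! near k = 48 doesn't overflow.
    double PoissonPmf(double lambda, int k)
    {
        return std::exp(-lambda + k * std::log(lambda) - std::lgamma(k + 1.0));
    }

    int main()
    {
        const double lambda = 24.0; // expected blocks per four-hour window
        double pLow = 0.0, pHigh = 1.0;
        for (int k = 0; k <= 5; ++k)  pLow  += PoissonPmf(lambda, k); // P(X <= 5)
        for (int k = 0; k <= 47; ++k) pHigh -= PoissonPmf(lambda, k); // P(X >= 48)
        std::printf("P(<= 5 blocks) = %g\nP(>= 48 blocks) = %g\n", pLow, pHigh);
        return 0;
    }

Both tails come out in the 1e-6 to 1e-5 range, which over the roughly 100,000 four-hour windows in fifty years is on the order of one false alert; the likelihood printed by ThreadPartitionCheck below comes from the same Poisson model.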
Tested with a new unit test and by running on the main network with -debug=partitioncheck
Run test/test_bitcoin --log_level=message to see the alert messages:
WARNING: check your network connection, 3 blocks received in the last 4 hours (24 expected)
WARNING: abnormally high number of blocks generated, 60 blocks received in the last 4 hours (24 expected)
The -debug=partitioncheck debug.log messages look like:
ThreadPartitionCheck : Found 22 blocks in the last 4 hours
ThreadPartitionCheck : likelihood: 0.0777702
ca5f688 [QT] don't colorize icons on win and mac (Jonas Schnelli)
7247d10 [QT] use alert icon with tooltip instead of "(out of sync)" text (Jonas Schnelli)
51c7c70 [QT] remove frame to avoid double-frame situation in sendcoinsentry.ui (Jonas Schnelli)
2a6b844 [QT] change transaction amount and height in overview page (Jonas Schnelli)
In some cases (e.g. Travis), sources and build caches may be moved around between
builds, and we can't necessarily trust that everything is still intact.
This introduces pre-build checks that verify against stashed checksums.
Note that this will cause all sources to be re-downloaded, since cached sources
weren't verifiable before this change.
Instead of only checking height to decide whether to disable script checks,
actually check whether a block is an ancestor of a checkpoint up to which
headers have been validated. This means we no longer have to prevent
accepting a side branch: doing so is safe, just slower to validate.
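As a rough illustration of that ancestor test (simplified stand-in types; the real code uses CBlockIndex::GetAncestor, which follows a skip list rather than the linear walk shown here):

    // Minimal stand-in for the block index, for illustration only.
    struct BlockIndex {
        int nHeight = 0;
        const BlockIndex* pprev = nullptr;

        const BlockIndex* GetAncestor(int height) const {
            const BlockIndex* p = this;
            while (p != nullptr && p->nHeight > height) p = p->pprev;
            return p;
        }
    };

    // Script checks may only be skipped if `block` is an ancestor of the
    // checkpointed block up to which headers have been validated; a plain
    // height comparison would also match blocks on a side branch.
    bool CanSkipScriptChecks(const BlockIndex* block, const BlockIndex* checkpoint)
    {
        return checkpoint != nullptr &&
               checkpoint->GetAncestor(block->nHeight) == block;
    }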
We still need to prevent being fed a multitude of low-difficulty headers
filling up our memory. The mechanism for that is unchanged for now: once
a checkpoint is reached with headers, no headers chain branching off before
that point is allowed anymore.
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then, for each bucket, the class calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping, for each bucket and each confirmation block count, an exponentially decaying moving history of the percentage of transactions in that bucket that were confirmed within that number of blocks.
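A minimal sketch of the per-bucket bookkeeping described above, with invented names and an assumed decay constant (the real class maintains a full bucket-by-confirm-depth matrix):

    #include <vector>

    // One fee (or priority) bucket: a decayed count of transactions confirmed
    // within each number of blocks, plus a decayed total, so the ratio
    // approximates "fraction confirmed within N blocks".
    struct BucketStats {
        std::vector<double> confirmedWithin; // [i] ~ confirmed within i+1 blocks
        double total = 0.0;
        double decay;                        // e.g. 0.998 per block (assumed)

        BucketStats(int maxConfirms, double decayPerBlock)
            : confirmedWithin(maxConfirms, 0.0), decay(decayPerBlock) {}

        // Called once per new block so old observations gradually lose weight.
        void DecayHistory() {
            total *= decay;
            for (double& c : confirmedWithin) c *= decay;
        }

        // Record a transaction from this bucket that confirmed in `blocks` blocks
        // (1 <= blocks <= maxConfirms).
        void Record(int blocks) {
            total += 1.0;
            for (int i = blocks - 1; i < (int)confirmedWithin.size(); ++i)
                confirmedWithin[i] += 1.0; // also counts for every larger depth
        }

        // Fraction of (decayed) transactions confirmed within `blocks` blocks.
        double SuccessRate(int blocks) const {
            return total > 0.0 ? confirmedWithin[blocks - 1] / total : 0.0;
        }
    };

One way to turn this into an estimate is to scan buckets from the highest fee downward and report the cheapest bucket whose SuccessRate at the target confirmation depth still clears a chosen threshold.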
-Eliminate txs which didn't have all inputs available at entry from fee/pri calcs
-Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
-Remove old CMinerPolicyEstimator and CBlockAverage code
-New smartfees.py
-Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and we shouldn't be counting how many blocks it takes for transactions to be included.
-Add a policyestimator unit test
We don't want to erase orphans that still have missing inputs; they should still be tracked as orphans. Also, the transaction that's being accepted can't itself be an orphan (otherwise it would have previously been accepted), so it doesn't need to be added to the erase queue.
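A simplified model of that decision logic (the names and types here are invented for illustration; the real code retries dependent orphans via AcceptToMemoryPool inside the message-handling loop):

    #include <string>
    #include <vector>

    enum class AcceptResult { Accepted, MissingInputs, OtherwiseInvalid };

    // After a parent transaction is accepted, retry every orphan that spent it.
    void ProcessDependentOrphans(const std::vector<std::string>& dependentOrphans,
                                 AcceptResult (*tryAccept)(const std::string&),
                                 std::vector<std::string>& workQueue,
                                 std::vector<std::string>& eraseQueue)
    {
        for (const std::string& orphan : dependentOrphans) {
            switch (tryAccept(orphan)) {
            case AcceptResult::Accepted:
                workQueue.push_back(orphan);  // its own dependents may now be acceptable
                eraseQueue.push_back(orphan); // accepted, so no longer an orphan
                break;
            case AcceptResult::OtherwiseInvalid:
                eraseQueue.push_back(orphan); // definitively bad, drop it
                break;
            case AcceptResult::MissingInputs:
                break;                        // still an orphan, keep tracking it
            }
        }
        // The freshly accepted parent is never pushed onto eraseQueue: a transaction
        // that is acceptable now could not have been sitting in the orphan map.
    }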
When the internal miner is enabled at the start of a new node, there
is a near-instant assert in TestBlockValidity because it attempts
to mine a block before the top checkpoint.
Also avoids a data race around vNodes.
03c5687 appropriate response when trying to get a block in pruned mode (Jonas Schnelli)
1b2e555 add autoprune information to RPC "getblockchaininfo" (Jonas Schnelli)