Move mempool rejections to a new debug category, `mempoolrej`, to make it possible
to show them without enabling the entire `mempool` category, which is
high volume.
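A minimal sketch of how such a categorized rejection log line might look (the message text is illustrative, assuming the codebase's `LogPrint(category, ...)` helper):

```cpp
// Fragment: log the rejection under its own category so that
// -debug=mempoolrej shows it without the noise of -debug=mempool.
LogPrint("mempoolrej", "%s from peer=%d was not accepted: %s\n",
         tx.GetHash().ToString(), pfrom->id, state.GetRejectReason());
```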
Previously, various user-facing strings used inconsistent currency units: "BTC",
"btc" and "bitcoins". This adds a single constant and uses it for each reference to
the currency unit.
Also adds a description of the unit for `-maxtxfee`, and adds the missing "amount"
field description to the (deprecated) `move` RPC command.
Fixes #2007
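A sketch of the idea (the constant's name and location are assumptions for illustration):

```cpp
#include <string>

// One place to define the unit; every user-facing string refers to it.
static const std::string CURRENCY_UNIT = "BTC";

// e.g. strprintf(_("Fees (in %s/kB) smaller than this are considered zero fee"), CURRENCY_UNIT)
```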
This checks to see if the system clock appears to be bad and gives a
helpful error message. If the user's clock is set incorrectly, hopefully
they'll abort, fix it, and then save themselves a fruitless resync.
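A standalone, hypothetical sketch of what such a sanity check could look like (the constant, threshold, and structure are assumptions, not the actual implementation):

```cpp
#include <ctime>

// Hypothetical check: a clock far in the past almost certainly means the
// system time is wrong, so warn before starting a fruitless resync.
bool ClockLooksSane()
{
    const std::time_t MIN_SANE_TIME = 1420070400; // 2015-01-01 UTC, illustrative
    return std::time(nullptr) > MIN_SANE_TIME;
}
```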
Introduce a PlatformStyle to handle platform-specific customization of
the UI.
This replaces 'scicon', as well as #ifdefs to determine whether to place
icons on buttons.
The selected PlatformStyle defaults to the platform that the application
was compiled on, but can be overridden from the command line with
`-uiplatform=<x>`.
Also fixes the warning from #6328.
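A simplified sketch of the concept (field names are illustrative; the real class lives in the Qt code):

```cpp
#include <string>

// Collect per-platform UI decisions in one object instead of scattering
// #ifdef checks for each operating system through the GUI code.
struct PlatformStyleSketch {
    std::string name;      // "macosx", "windows", "other", ...
    bool imagesOnButtons;  // place icons on push buttons?
    bool colorizeIcons;    // tint icons to match the palette?
};
```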
Prevents stomping on debug logs in datadirs that are locked by other
instances, and prevents parameter-interaction messages from being lost when
they are wiped by ShrinkDebugFile().
The log is now opened explicitly and all emitted messages are buffered
until this open occurs. The version message and log cut have also been
moved to the earliest possible sensible location.
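A simplified, self-contained sketch of the buffering idea (names and structure are illustrative, not the actual util code):

```cpp
#include <cstdio>
#include <string>
#include <vector>

static std::vector<std::string> g_msgsBeforeOpen;  // emitted before the log exists
static std::FILE* g_logFile = nullptr;

void LogLine(const std::string& line)
{
    if (!g_logFile) {                      // log not opened yet: buffer it
        g_msgsBeforeOpen.push_back(line);
        return;
    }
    std::fputs(line.c_str(), g_logFile);
}

void OpenLogExplicitly(const char* path)   // called once the datadir lock is held
{
    g_logFile = std::fopen(path, "a");
    if (!g_logFile) return;
    for (const auto& line : g_msgsBeforeOpen)
        std::fputs(line.c_str(), g_logFile);
    g_msgsBeforeOpen.clear();
}
```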
To determine the default for `-par`, the number of script verification
threads, use [`boost::thread::physical_concurrency()`](http://www.boost.org/doc/libs/1_58_0/doc/html/thread/thread_management.html#thread.thread_management.thread.physical_concurrency),
which counts only physical cores, not virtual cores.
Virtual cores are essentially a set of cached registers used to avoid context
switches while threading; they cannot actually perform work, so spawning
a verification thread for them can even reduce efficiency and puts
undue load on the system.
Should fix issue #6358, as well as some other reported system overload
issues, especially on Intel processors.
The function was only introduced in Boost 1.56, so provide a utility
function `GetNumCores` as a fallback for older Boost versions.
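A sketch of that fallback helper, assuming a simple Boost version check:

```cpp
#include <boost/thread.hpp>
#include <boost/version.hpp>

int GetNumCores()
{
#if BOOST_VERSION >= 105600
    return boost::thread::physical_concurrency();  // physical cores only
#else
    // Older Boost has no physical_concurrency(); count logical cores instead.
    return boost::thread::hardware_concurrency();
#endif
}
```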
A new, undocumented-on-purpose `-mocktime=<timestamp>` command-line
argument allows starting up with mock time set. This is needed because
time-related blockchain sanity checks are done on startup, before a
test has a chance to make a `setmocktime` RPC call.
The `setmocktime` RPC call has also been changed so that calling it does not
cause currently connected peers to be disconnected due to inactivity timeouts.
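A sketch of the startup hook (argument helpers as used elsewhere in the codebase; exact placement is an assumption):

```cpp
// Apply mock time before any time-based startup sanity checks run.
int64_t nMockTime = GetArg("-mocktime", 0);  // 0 means: no mock time
if (nMockTime != 0)
    SetMockTime(nMockTime);
```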
Make it possible to opt-out of the centralized alert system by providing
an option `-noalerts` or `-alerts=0`. The default remains unchanged.
This is a gentler form of #6260, in which I went a bit overboard by
removing the alert system completely.
I intend to add this to the GUI options in another pull after this.
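A minimal sketch of how the option might be read (variable and default names are assumptions):

```cpp
// Fragment: default stays "alerts enabled"; -noalerts / -alerts=0 opts out.
fAlerts = GetBoolArg("-alerts", true);
```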
This sets aside a number of connection slots for whitelisted peers,
useful for ensuring your local users and miners can always get in,
even if your limit on inbound connections has already been reached.
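A simplified, standalone sketch of the accept decision (names and exact policy are illustrative):

```cpp
// Reserve part of the inbound budget for whitelisted peers; everyone else
// may only fill the non-reserved portion.
bool ShouldAcceptInbound(int nInbound, int nMaxInbound,
                         int nReservedWhiteSlots, bool fWhitelisted)
{
    if (!fWhitelisted && nInbound >= nMaxInbound - nReservedWhiteSlots)
        return false;                    // only reserved slots are left
    return nInbound < nMaxInbound;       // whitelisted peers use any free slot
}
```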
Simplify the code in AppInit2 and make it clearer.
This provides a straightforward flow, gets rid of `.count()` (making
it possible to override an earlier provided proxy option with nothing), and
comments the different cases.
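A sketch of the straightened-out flow (a fragment; `ConfigureProxy` is a hypothetical helper standing in for the real setup calls):

```cpp
std::string proxyArg = GetArg("-proxy", "");
if (proxyArg != "" && proxyArg != "0") {
    // Non-empty and not "0": configure the proxy for all networks.
    ConfigureProxy(proxyArg);            // hypothetical helper for this sketch
} else {
    // Either never set, or explicitly disabled with a later -proxy=0.
}
```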
Do not translate `-help-debug` options: they use many technical terms and
reach only a very small audience, so translating them is unnecessary stress on translators.
Brings the code up to date with translation string policy in
`doc/translation_strings_policy.md`.
Also remove the sentence "In this mode -genproclimit controls how
many blocks are generated immediately." from the regtest help; it is no longer relevant as of #5957.
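A sketch of the distinction in the help-message code (strings and defaults are illustrative; the helpers are assumed to match that era's codebase):

```cpp
// Regular options keep the _() translation marker...
strUsage += HelpMessageOpt("-maxorphantx=<n>",
    strprintf(_("Keep at most <n> unconnectable transactions in memory (default: %u)"), 100));
// ...while -help-debug options are plain, untranslated strings.
if (showDebug)
    strUsage += HelpMessageOpt("-checkblockindex",
        "Do a full consistency check of the block index on startup");
```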
The partition checking code was using chainActive timestamps
to detect partitioning; with headers-first syncing, it should use
(and with this pull request, does use) pindexBestHeader timestamps.
Fixes issue #6251
In some corner cases, it may be possible for recent blocks to end up in
the same block file as much older blocks. Previously, the pruning code
would stop looking for files to remove upon first encountering a file
containing a block that cannot be pruned; now it keeps looking for
candidate files until the target is met and all other criteria are
satisfied.
This can result in a noncontiguous set of block files (by number) on
disk, which is fine except during some reindex corner cases, so
make reindex preparation smarter such that we keep the data we can
actually use and throw away the rest. This allows pruning to work
correctly while downloading any blocks needed during the reindex.
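A simplified, standalone sketch of the changed scan (structure and names are illustrative, not the real pruning code):

```cpp
#include <cstdint>
#include <vector>

struct BlockFileSketch {
    uint64_t nSize;       // bytes used by the blk/rev file pair
    int nHeightLast;      // highest block height stored in the file
    bool fPruned;
};

void FindFilesToPruneSketch(std::vector<BlockFileSketch>& files,
                            uint64_t nCurrentUsage, uint64_t nPruneTarget,
                            int nLastBlockWeCanPrune)
{
    for (auto& file : files) {
        if (nCurrentUsage <= nPruneTarget)
            break;                                  // target met
        if (file.fPruned || file.nSize == 0)
            continue;
        if (file.nHeightLast > nLastBlockWeCanPrune)
            continue;                               // previously: stop here
        nCurrentUsage -= file.nSize;                // prune this file
        file.fPruned = true;
    }
}
```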
To protect privacy, do not use UPnP when a proxy is set. The user may
still specify -listen=1 to listen locally (for a hidden service), so
don't rely on this happening through -listen.
Fixes #2927.
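A sketch of the parameter interaction in init (assuming the codebase's SoftSetBoolArg, which only applies a default when the user has not set the option explicitly):

```cpp
if (mapArgs.count("-proxy")) {
    // A proxy implies the user cares about privacy: don't advertise via
    // UPnP, don't discover addresses, and only listen on explicit -listen=1.
    SoftSetBoolArg("-upnp", false);
    SoftSetBoolArg("-listen", false);
    SoftSetBoolArg("-discover", false);
}
```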
Create a monitoring task that counts how many blocks have been found in the last four hours.
If very few or too many have been found, an alert is triggered.
"Very few" and "too many" are set based on a false positive rate of once every fifty years of constant running with constant hashing power, which works out to getting 5 or fewer or 48 or more blocks in four hours (instead of the average of 24).
Only one alert per day is triggered, so if you get disconnected from the network (or are being Sybil'ed) -alertnotify will be triggered after 3.5 hours but you won't get another -alertnotify for 24 hours.
Tested with a new unit test and by running on the main network with -debug=partitioncheck
Run test/test_bitcoin --log_level=message to see the alert messages:
WARNING: check your network connection, 3 blocks received in the last 4 hours (24 expected)
WARNING: abnormally high number of blocks generated, 60 blocks received in the last 4 hours (24 expected)
The -debug=partitioncheck debug.log messages look like:
ThreadPartitionCheck : Found 22 blocks in the last 4 hours
ThreadPartitionCheck : likelihood: 0.0777702
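A standalone sketch of where the 5 / 48 thresholds come from, using Boost.Math (not the production code):

```cpp
#include <boost/math/distributions/poisson.hpp>
#include <cstdio>

int main()
{
    const double BLOCKS_EXPECTED = 4 * 60 * 60 / 600.0;  // 24 blocks in 4 hours
    boost::math::poisson_distribution<double> poisson(BLOCKS_EXPECTED);

    double pTooFew  = boost::math::cdf(poisson, 5);        // P(N <= 5)
    double pTooMany = 1.0 - boost::math::cdf(poisson, 47); // P(N >= 48)

    std::printf("P(<=5 blocks)=%g  P(>=48 blocks)=%g\n", pTooFew, pTooMany);
    return 0;
}
```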
Instead of only checking height to decide whether to disable script checks,
actually check whether a block is an ancestor of a checkpoint, up to which
headers have been validated. This means that we don't have to prevent
accepting a side branch anymore - it will be safe, just less fast to
do.
We still need to prevent being fed a multitude of low-difficulty headers
filling up our memory. The mechanism for that is unchanged for now: once
a checkpoint is reached with headers, no header chains branching off before
that point are allowed anymore.
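A sketch of the ancestor test (a fragment; the real decision also weighs other conditions, and the checkpoint index variable here is an assumption):

```cpp
// Skip expensive script checks only for blocks that lie on the already
// header-validated chain leading up to the checkpoint.
bool fScriptChecks = true;
if (fCheckpointsEnabled && pindexCheckpoint != nullptr &&
    pindexCheckpoint->GetAncestor(pindex->nHeight) == pindex) {
    fScriptChecks = false;
}
```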
Connecting the chain can take quite a while.
All the while it is still showing `Loading wallet...`.
Add an init message to inform the user what is happening.
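A fragment of the idea (the message text is illustrative):

```cpp
// Let the splash screen / debug log show what is happening after
// "Loading wallet..." has finished.
uiInterface.InitMessage(_("Activating best chain..."));
```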
libsecp256k1's API changed, so update key.cpp to use it.
Libsecp256k1 now has explicit context objects, which makes it completely thread-safe.
In turn, keep an explicit context object in key.cpp, which is explicitly initialized
and destroyed. This is not really pretty now, but it's more efficient than the statically
initialized object in key.cpp (which made, for example, bitcoin-tx slow, as for most of
its calls, libsecp256k1 wasn't actually needed).
This also brings in the new blinding support in libsecp256k1. By passing in a random
seed, temporary variables during the elliptic curve computations are altered, in such
a way that if an attacker does not know the blind, observing the internal operations
leaks less information about the keys used. This was implemented by Greg Maxwell.
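A sketch of the explicit context lifecycle with blinding (the Bitcoin-side function names are simplified; error handling is trimmed):

```cpp
#include <secp256k1.h>

static secp256k1_context* secp256k1_ctx = nullptr;

void ECC_Start_Sketch(const unsigned char* seed32)  // 32 random bytes
{
    secp256k1_ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);
    // Blinding: randomize internal precomputation so that observing the
    // signing operations leaks less information about the keys.
    secp256k1_context_randomize(secp256k1_ctx, seed32);
}

void ECC_Stop_Sketch()
{
    if (secp256k1_ctx) secp256k1_context_destroy(secp256k1_ctx);
    secp256k1_ctx = nullptr;
}
```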
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep total space used by those files to below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
Thanks to @rdponticelli for earlier work on this feature; this is based in
part on that work.
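A sketch of the option handling in init (a fragment; the message text and exact structure are illustrative, the 550 MB floor is as described above):

```cpp
int64_t nPruneArgMB = GetArg("-prune", 0);   // user specifies the target in MB
if (nPruneArgMB != 0) {
    if (nPruneArgMB < 550)
        return InitError(_("Prune configured below the minimum of 550 MB. Please use a higher number."));
    nPruneTarget = (uint64_t)nPruneArgMB * 1024 * 1024;
    fPruneMode = true;   // NODE_NETWORK is also cleared from the advertised services
}
```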
According to Tor's extensions to the SOCKS protocol
(https://gitweb.torproject.org/torspec.git/tree/socks-extensions.txt)
it is possible to perform stream isolation by providing authentication
to the proxy. Each set of credentials will create a new circuit,
which makes it harder to correlate connections.
This patch adds an option, `-proxyrandomize` (on by default) that randomizes
credentials for every outgoing connection, thus creating a new circuit.
2015-03-16 15:29:59 SOCKS5 Sending proxy authentication 3842137544:3256031132
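A standalone sketch of the credential randomization (names and structure are illustrative):

```cpp
#include <cstdint>
#include <random>
#include <string>

struct ProxyCredentialsSketch {
    std::string username;
    std::string password;
};

ProxyCredentialsSketch RandomizeCredentials()
{
    // Fresh throwaway credentials per connection: Tor keys circuits on the
    // SOCKS5 username/password pair, so each connection gets its own circuit.
    static std::random_device rd;
    std::uniform_int_distribution<uint32_t> dist;
    ProxyCredentialsSketch cred;
    cred.username = std::to_string(dist(rd));
    cred.password = std::to_string(dist(rd));
    return cred;
}
// The pair is then sent in the SOCKS5 username/password authentication
// sub-negotiation (RFC 1929), as shown in the log line above.
```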