Use a probabilistic bloom filter to keep track of which addresses
we think we have given our peers, instead of a list.
This uses much less memory, at the cost of sometimes failing to
relay an address to a peer; in the worst case, when the bloom filter
is as full as it gets, that happens about 1 time in 1,000.
Measured memory usage of a full mruset setAddrKnown: 650 KB.
Constant memory usage of CRollingBloomFilter addrKnown: 37 KB.
This will also help with heap fragmentation, because the 37 KB of
storage is allocated once, when a CNode is created (when a connection
to a peer is established), and no further per-item allocation happens
as addresses are remembered.
I plan on testing by restarting a full node with an empty peers.dat,
running for a while with -debug=addrman and -debug=net, and making sure
that the outbound 'addr' message traffic is reasonable.
(suggestions for better tests welcome)
For when you need to keep track of the last N items
you've seen, and can tolerate some false positives.
Rebased-by: Pieter Wuille <pieter.wuille@gmail.com>
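
The rolling idea can be sketched as two ordinary bloom filters used in
alternation. The code below is a minimal C++ illustration of that
technique only, not the actual CRollingBloomFilter implementation; the
class name, the fixed 2^16-bit arrays, and the two-hash scheme are all
simplifying assumptions:

    // Illustrative sketch only; not the real CRollingBloomFilter.
    #include <bitset>
    #include <cstddef>
    #include <functional>
    #include <string>

    class RollingBloomSketch {
    public:
        explicit RollingBloomSketch(size_t nElements) : limit(nElements) {}

        void insert(const std::string& key) {
            if (count >= limit) {    // current generation is full:
                other().reset();     // forget the oldest generation
                cur = 1 - cur;       // and start filling its slot
                count = 0;
            }
            size_t h1, h2;
            Hashes(key, h1, h2);
            current().set(h1);
            current().set(h2);
            ++count;
        }

        bool contains(const std::string& key) const {
            size_t h1, h2;
            Hashes(key, h1, h2);
            return (filters[0].test(h1) && filters[0].test(h2)) ||
                   (filters[1].test(h1) && filters[1].test(h2));
        }

    private:
        static const size_t BITS = 1 << 16;  // fixed bit-array size per generation

        // Two cheap hash positions per key; a real filter derives k hashes
        // from tweaked seeds to hit its target false-positive rate.
        static void Hashes(const std::string& key, size_t& h1, size_t& h2) {
            h1 = std::hash<std::string>()(key) % BITS;
            h2 = std::hash<std::string>()(key + "#") % BITS;
        }

        std::bitset<BITS>& current() { return filters[cur]; }
        std::bitset<BITS>& other() { return filters[1 - cur]; }

        std::bitset<BITS> filters[2];
        int cur = 0;
        size_t count = 0;
        size_t limit;
    };

Inserts fill the current generation; once it holds nElements entries the
older generation is dropped, so roughly the last nElements to 2*nElements
insertions stay queryable while memory use stays constant.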
This commit adds several tests to the script_invalid.json data which
exercise edge conditions that are not currently being tested. They are
mainly being added to cover cases that a branch coverage analysis of
btcd showed were not yet covered, but since more tests of edge
conditions are always a good thing, I'm contributing them upstream.
The test which is intended to prove that the script engine properly
rejects non-minimally encoded PUSHDATA4 data is using the wrong
opcode and value. The test uses 0x4f, which is OP_1NEGATE, instead
of the desired 0x4e, which is OP_PUSHDATA4. Further, the push of data
is intended to be 256 bytes, but the length the test uses is
0x00100000 (4096, reading the serialized length little-endian),
instead of the desired 0x00010000 (256).
This commit fixes both issues.
This was found while examining branch coverage in btcd when run against
only these tests, in order to find missing coverage.
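
For context, this is roughly how a data push is serialized and why the
length bytes read little-endian. MinimalPush is a hypothetical helper
written for illustration (it also omits the OP_0 / OP_1..OP_16
small-value special cases), not code from the test suite:

    #include <cstdint>
    #include <vector>

    static const uint8_t OP_PUSHDATA1 = 0x4c;
    static const uint8_t OP_PUSHDATA2 = 0x4d;
    static const uint8_t OP_PUSHDATA4 = 0x4e;

    // Serialize a data push using the smallest applicable opcode. A 256-byte
    // push therefore belongs under OP_PUSHDATA2 (4d 00 01); encoding it with
    // OP_PUSHDATA4 (4e 00 01 00 00) is non-minimal and should be rejected.
    std::vector<uint8_t> MinimalPush(const std::vector<uint8_t>& data)
    {
        std::vector<uint8_t> out;
        const uint32_t n = data.size();
        if (n < OP_PUSHDATA1) {                 // 0x00..0x4b: opcode is the length
            out.push_back(static_cast<uint8_t>(n));
        } else if (n <= 0xff) {
            out.push_back(OP_PUSHDATA1);
            out.push_back(static_cast<uint8_t>(n));
        } else if (n <= 0xffff) {
            out.push_back(OP_PUSHDATA2);
            out.push_back(n & 0xff);            // length is little-endian
            out.push_back((n >> 8) & 0xff);
        } else {
            out.push_back(OP_PUSHDATA4);
            out.push_back(n & 0xff);            // so 256 serializes as 00 01 00 00
            out.push_back((n >> 8) & 0xff);
            out.push_back((n >> 16) & 0xff);
            out.push_back((n >> 24) & 0xff);
        }
        out.insert(out.end(), data.begin(), data.end());
        return out;
    }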
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep the total space used by those files below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
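
A rough sketch of the selection logic those constraints imply follows;
the names and the per-file bookkeeping here are illustrative assumptions,
not the actual implementation:

    #include <cstdint>
    #include <vector>

    static const int MIN_BLOCKS_TO_KEEP = 288;

    struct BlockFileInfo {
        uint64_t nSize;      // bytes of block + undo data in this file
        int nHeightLast;     // highest block height stored in this file
    };

    // Pick block files (oldest first) that can be deleted to bring total
    // usage under nPruneTarget bytes, honoring the constraints above.
    std::vector<size_t> SelectFilesToPrune(const std::vector<BlockFileInfo>& files,
                                           uint64_t nPruneTarget,
                                           int nChainHeight,
                                           int nPruneAfterHeight)
    {
        std::vector<size_t> toPrune;
        if (nChainHeight < nPruneAfterHeight)   // chain still too short to prune
            return toPrune;

        uint64_t nTotal = 0;
        for (const BlockFileInfo& f : files)
            nTotal += f.nSize;

        const int nPruneBefore = nChainHeight - MIN_BLOCKS_TO_KEEP;
        for (size_t i = 0; i < files.size() && nTotal > nPruneTarget; ++i) {
            // Never delete a file holding any of the last 288 blocks.
            if (files[i].nHeightLast >= nPruneBefore)
                continue;
            toPrune.push_back(i);
            nTotal -= files[i].nSize;
        }
        return toPrune;
    }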
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
Thanks to @rdponticelli for earlier work on this feature; this is based in
part on that work.
Ever since #5957 there has been the problem that older RPC test cases
(of which there are plenty in open pulls) use setgenerate() on regtest,
assuming a different interpretation of the arguments. Directly
generating a number of blocks has been split off into a new method
`generate`; however, calling `setgenerate` with the previous arguments
will spawn an unreasonable number of threads and simply not work as
expected, with no clear indication of the error.
Add an error to point the user at the right method.
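
Something along these lines, placed at the top of the setgenerate
handler, would be enough; this is a sketch assuming Bitcoin Core's
Params() and JSONRPCError helpers, and the exact error code and message
are assumptions:

    // Regtest chain params report MineBlocksOnDemand(), so reject the
    // call there and point the user at `generate` instead.
    if (Params().MineBlocksOnDemand())
        throw JSONRPCError(RPC_METHOD_NOT_FOUND,
            "Use the generate method instead of setgenerate on this network");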
It's reasonable that automatic coin selection will not pick zero-value
txouts, but they are actually spendable, and you should know if you
have them. Listing them also makes them available to tools like
dust-b-gone.
- write "Bitcoins" uppercase
- for payment requests, replace secure/insecure with
authenticated/unauthenticated
- change a translatable string for payment request expiry to match another
existing string, so only ONE resulting string needs translation
On hosts that had spent some time with a failed internet connection, the
nAttempts penalty was going through the roof (e.g. thousands for all peers);
as a result the connect search was pegging the CPU and failing to get
more than 4 connections even after days of running (because each try was
taking so long).
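
A sketch of the kind of fix this suggests, modeled on addrman's
per-address selection chance; the function shape and the constants are
assumptions:

    #include <algorithm>
    #include <cmath>

    // Failed attempts deprioritize an address, but the penalty is capped so
    // that thousands of failures accumulated while offline cannot drive the
    // selection chance to effectively zero (which made connect searches spin).
    double GetChance(int nAttempts)
    {
        double fChance = 1.0;
        fChance *= std::pow(0.66, std::min(nAttempts, 8));
        return fChance;
    }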
According to Tor's extensions to the SOCKS protocol
(https://gitweb.torproject.org/torspec.git/tree/socks-extensions.txt)
it is possible to perform stream isolation by providing authentication
to the proxy. Each set of credentials will create a new circuit,
which makes it harder to correlate connections.
This patch adds an option, `-proxyrandomize` (on by default), that randomizes
credentials for every outgoing connection, thus creating a new circuit.
Example log output:

2015-03-16 15:29:59 SOCKS5 Sending proxy authentication 3842137544:3256031132
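
A minimal sketch of the randomization, matching the numeric
username:password pair in the log line above; the struct and helper
names are illustrative, not necessarily those in the patch:

    #include <random>
    #include <string>

    struct ProxyCredentials {
        std::string username;
        std::string password;
    };

    // A fresh username/password pair per outgoing connection; Tor places
    // streams with distinct credentials on separate circuits.
    ProxyCredentials RandomizeCredentials()
    {
        std::random_device rd;
        ProxyCredentials cred;
        cred.username = std::to_string(rd());
        cred.password = std::to_string(rd());
        return cred;
    }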
Before and after were tested on Windows:
before:
GUI: ReportInvalidCertificate : Payment server found
an invalid certificate: ("Microsoft Authenticode(tm) Root Authority")
GUI: ReportInvalidCertificate : Payment server found
an invalid certificate: ()
GUI: ReportInvalidCertificate : Payment server found
an invalid certificate: ()
GUI: ReportInvalidCertificate : Payment server found
an invalid certificate: ()
after:
GUI: ReportInvalidCertificate: Payment server found an
invalid certificate: "01" ("Microsoft Authenticode(tm) Root Authority")
() ()
GUI: ReportInvalidCertificate: Payment server found an
invalid certificate: "01" () () ("Copyright (c) 1997 Microsoft Corp.",
"Microsoft Time Stamping Service Root", "Microsoft Corporation")
GUI: ReportInvalidCertificate: Payment server found an
invalid certificate: "4a:19:d2:38:8c:82:59:1c:a5:5d:73:5f:15:5d:dc:a3" ()
() ("NO LIABILITY ACCEPTED, (c)97 VeriSign, Inc.", "VeriSign Time Stamping
Service Root", "VeriSign, Inc.")
GUI: ReportInvalidCertificate: Payment server found an
invalid certificate: "e4:9e:fd:f3:3a:e8:0e:cf:a5:11:3e:19:a4:24:02:32" ()
() ("Class 3 Public Primary Certification Authority")
Some tests in CheckBlockIndex require chainActive.Tip(), but when reindexing, chainActive has not been set on the first call to CheckBlockIndex.
reindex.py starts a node, mines 3 blocks, stops, and reindexes with CheckBlockIndex enabled.
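
A sketch of the kind of early-return guard this implies at the start of
CheckBlockIndex (an assumption about the fix's shape, not the exact
patch):

    // During -reindex the genesis block is read and CheckBlockIndex runs
    // before ActivateBestChain, so there is no active chain yet; skip the
    // tip-relative consistency checks until a tip exists.
    if (chainActive.Height() < 0) {
        assert(mapBlockIndex.size() <= 1);  // at most genesis is loaded
        return;
    }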