Changing NOP3 op name to OP_CHECKSEQUENCEVERIFY, renaming instances of OP_NOP3 in script_tests.json to CHECKSEQUENCEVERIFY
Cleaning up NOP3 comment
Re-adding test cases that were accidentally deleted, removing duplicated test case, fixing formatting
Removing re-labeling of OP_NOP3 to OP_CSV
Fixing whitespace issues
- Ban/Unban/ClearBan call uiInterface.BannedListChanged() as necessary
- Ban/Unban/ClearBan sync to disk if the operation is user-invoked
- Mark node for disconnection automatically when banning
- Lock cs_vNodes while setting disconnected
- Don't spin in a tight loop while setting disconnected (see the sketch below)
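A minimal sketch of how these pieces might fit together, assuming the net.cpp globals referred to above (cs_vNodes, vNodes, fDisconnect) and a DumpBanlist()-style persistence helper; BanNode() and AddToBanList() are hypothetical names used only for illustration:
```cpp
// Sketch only: BanNode() and AddToBanList() are illustrative names.
void BanNode(const CNetAddr& addr, bool fUserInvoked)
{
    AddToBanList(addr);               // hypothetical: record the ban entry in the ban map
    uiInterface.BannedListChanged();  // let the GUI refresh its list of banned peers

    if (fUserInvoked)
        DumpBanlist();                // sync to disk only for user-invoked operations

    // Mark any currently connected node with this address for disconnection,
    // holding cs_vNodes while touching vNodes instead of spinning until the
    // peer happens to go away on its own.
    LOCK(cs_vNodes);
    for (CNode* pnode : vNodes) {
        if ((CNetAddr)pnode->addr == addr)
            pnode->fDisconnect = true;
    }
}
```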
In my ever-growing list of test failures, I was seeing this one intermittently:
```
Running 2nd level testscript pruning.py...
Initializing test directory /tmp/testY5ypCv
Warning! This test requires 4GB of disk space and takes over 30 mins (up to 2 hours)
Mining a big blockchain of 995 blocks
Check that we haven't started pruning yet because we're below PruneAfterHeight
Success
Though we're already using more than 550MB, current usage: 587
Mining 25 more blocks should cause the first block file to be pruned
Assertion failed: blk00000.dat not pruned when it should be
File "/home/error/bitcoinxt-0.11D/qa/rpc-tests/test_framework/test_framework.py", line 118, in main
self.run_test()
File "/home/error/bitcoinxt-0.11D/qa/rpc-tests/pruning.py", line 272, in run_test
self.test_height_min()
File "/home/error/bitcoinxt-0.11D/qa/rpc-tests/pruning.py", line 94, in test_height_min
raise AssertionError("blk00000.dat not pruned when it should be")
Stopping nodes
Failed
```
After digging into the test, I found that the code is waiting 10 seconds for blk00000.dat to be deleted, and then throwing this failure if it still exists after 10 seconds.
I increased this amount, had the script print the actual time taken, and ran the test a few more times. The time taken ranged from 8 to 12 seconds, so the existing 10-second timeout is too short.
After changing the timeout to 30 seconds, the test passes consistently.
(cherry picked from commit 3469911c89a48dd2fefe4d1c2a0c176256e14ee0)
Move the version reporting to Wallet::Verify, before starting
verification of the wallet.
This removes the dependency of init on a specific wallet database
library.
A further, trivial step towards resolving #7965.
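As a rough sketch (assuming the wallet database library is BerkeleyDB and simplifying the Verify signature), the reporting ends up living next to the wallet's own startup checks rather than in init.cpp:
```cpp
#include <db_cxx.h>  // BerkeleyDB C++ API; no longer needed by init.cpp itself

// Simplified sketch: log the database library version from the wallet's
// verification entry point instead of from init.
bool CWallet::Verify(const std::string& walletFile, std::string& warningString, std::string& errorString)
{
    LogPrintf("Using BerkeleyDB version %s\n", DbEnv::version(0, 0, 0));

    // ... existing wallet file sanity checks and salvage handling follow ...
    return true;
}
```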
166e4b0 Notify other serviceQueue thread we are finished to prevent deadlocks. (Pavel Janík)
db18ab2 Reenable multithread scheduler test. (Pavel Janík)
I made a subclass of QMessageBox that disables the send button in
exec() and starts a timer that calls a slot to re-enable it after a
configurable delay.
It also shows a countdown in the send/yes button while it is disabled,
to hint to the user why the button is disabled (and that it is
actually supposed to be disabled).
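A condensed sketch of that pattern, assuming a Qt widgets build; the class name, delay handling, and button text here are illustrative, not the actual implementation:
```cpp
#include <QAbstractButton>
#include <QMessageBox>
#include <QTimer>

// Sketch: a confirmation dialog whose Yes/Send button starts out disabled and
// shows a countdown until it is re-enabled.
class SendConfirmationDialog : public QMessageBox
{
    Q_OBJECT
public:
    explicit SendConfirmationDialog(int secsDelay, QWidget* parent = nullptr)
        : QMessageBox(parent), secsLeft(secsDelay)
    {
        setStandardButtons(QMessageBox::Yes | QMessageBox::Cancel);
        connect(&countDownTimer, SIGNAL(timeout()), this, SLOT(countDown()));
    }

    int exec() override
    {
        updateYesButton();
        countDownTimer.start(1000);  // tick once per second
        return QMessageBox::exec();
    }

private Q_SLOTS:
    void countDown()
    {
        if (--secsLeft <= 0)
            countDownTimer.stop();
        updateYesButton();
    }

private:
    void updateYesButton()
    {
        QAbstractButton* yesButton = button(QMessageBox::Yes);
        if (secsLeft > 0) {
            yesButton->setEnabled(false);
            yesButton->setText(tr("Yes (%1)").arg(secsLeft));  // countdown hints at why it is disabled
        } else {
            yesButton->setEnabled(true);
            yesButton->setText(tr("Yes"));
        }
    }

    QTimer countDownTimer;
    int secsLeft;
};
```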
* The "ERROR" was printed far too often during normal operation for what was not an error.
* Makes the Socks5() connect failure similar to the IP connect failure in debug.log.
Before:
`2016-05-09 00:15:00 ERROR: Proxy error: host unreachable`
After:
`2016-05-09 00:15:00 Socks5() connect to t6xj6wilh4ytvcs7.onion:18333 failed: host unreachable`
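In code terms, the change roughly amounts to demoting the error() call to a plain LogPrintf() that includes the destination; this is a sketch, and the actual variable names in netbase.cpp may differ:
```cpp
// Before (sketch): logged with an ERROR prefix and no destination
return error("Proxy error: host unreachable");

// After (sketch): plain log line that names the destination we were connecting to
LogPrintf("Socks5() connect to %s:%d failed: host unreachable\n", strDest, port);
return false;
```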
It looks like Travis is using the `travis.yml` from the branch, but runs
the test script from the branch merged into master. This causes
pull requests created before the QA tests' Python 3 transition to fail.
This temporarily reverts fa05e22e91
(#7851). It can be restored when this is no longer an issue.
The example local paths for "Building fully offline" have an extraneous ".git". This caused an error when trying to run gbuild, like this:
`fatal: '/home/user/bitcoin.git' does not appear to be a git repository`
`fatal: Could not read from remote repository.`
This commit fixes that.
Locking for each operation here is unnecessary, and solves the wrong problem.
Additionally, it introduces a problem when cs_vNodes is held in an owning
class, to which individual CNodeRefs won't have access.
These should be weak pointers anyway, once vNodes contain shared pointers.
Rather than using a refcounting class, use a 3-step process instead (sketched below):
1. Lock vNodes long enough to snapshot the fields necessary for comparing
2. Unlock and do the comparison
3. Re-lock and mark the resulting node for disconnection if it still exists
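A minimal sketch of that 3-step pattern, assuming the existing cs_vNodes/vNodes globals and CNode fields; the candidate struct and SelectNodeToEvict() are placeholders for whatever comparison the caller needs:
```cpp
// Step 1: under cs_vNodes, snapshot just the fields the comparison needs.
struct NodeEvictionCandidate {
    NodeId id;
    int64_t nTimeConnected;
    // ... any other fields the eviction comparators look at ...
};

std::vector<NodeEvictionCandidate> vCandidates;
{
    LOCK(cs_vNodes);
    for (CNode* pnode : vNodes)
        vCandidates.push_back({pnode->GetId(), pnode->nTimeConnected});
}

// Step 2: with the lock released, run the comparison on the snapshot.
NodeId evictId = SelectNodeToEvict(vCandidates);  // hypothetical helper

// Step 3: re-lock and mark the chosen node for disconnection, but only if it
// still exists; it may already have gone away while we were comparing.
{
    LOCK(cs_vNodes);
    for (CNode* pnode : vNodes) {
        if (pnode->GetId() == evictId) {
            pnode->fDisconnect = true;
            break;
        }
    }
}
```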