We can't change "softforks", but it seems far more logical to use tags
in an object rather than using an "id" field in an array.
For example, to get the csv status before this change, you had to
iterate the array to find the entry whose 'id' field equals "csv":
jq '.bip9_softforks | map(select(.id == "csv"))[] | .status'
Now:
jq '.bip9_softforks.csv.status'
There is no issue with fork names being incompatible with JSON tags,
since we're selecting them ourselves.
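For RPC test code the same lookup becomes a plain dictionary access; a
minimal sketch (here `node` stands for whatever RPC proxy object is in
scope):

    # Before: search the array for the entry whose 'id' is "csv"
    info = node.getblockchaininfo()
    csv_status = next(f["status"] for f in info["bip9_softforks"]
                      if f["id"] == "csv")

    # Now: index the object directly by fork name
    csv_status = node.getblockchaininfo()["bip9_softforks"]["csv"]["status"]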
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Replace the `bitcoin-cli -rpcwait` after spawning bitcoind
with our own loop that detects when bitcoind exits prematurely.
And if one node fails to start, stop the others.
This prevents a hang in such a case (see #7463).
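A minimal sketch of such a loop (names like wait_for_bitcoind are
illustrative, not the actual util.py code):

    import time

    def wait_for_bitcoind(proc, rpc_connect, timeout=60):
        # Poll the RPC interface ourselves instead of relying on
        # `bitcoin-cli -rpcwait`, and bail out early if bitcoind has exited.
        for _ in range(timeout):
            if proc.poll() is not None:
                raise Exception("bitcoind exited with status %d" % proc.returncode)
            try:
                rpc_connect().getblockcount()  # any cheap RPC will do
                return
            except Exception:
                time.sleep(1)
        raise Exception("timed out waiting for bitcoind RPC")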
Make RPC tests have a default block priority size of 50000 (the old default) so we can still use free transactions in RPC tests. When priority is eliminated, we will have to make a different change if we want to continue allowing free txs.
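One way this might look in the test framework (a sketch; whether the
flag ends up in util.py's defaults or in per-test extra_args may differ):

    # Keep the old default so free, high-priority txs still get mined.
    extra_args = [["-blockprioritysize=50000"]] * 4
    nodes = start_nodes(4, tmpdir, extra_args)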
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then for each bucket, the class calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping an exponentially decaying moving history for each bucket and confirm block count of the percentage of transactions in that bucket that were confirmed within that number of blocks.
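A rough sketch of that bookkeeping for a single bucket (simplified; the
real class keeps separate fee and priority buckets and more state):

    DECAY = 0.998  # assumed per-block decay factor for this sketch

    class BucketHistory:
        def __init__(self, max_confirms=25):
            self.tx_count = 0.0
            self.confirmed_within = [0.0] * max_confirms

        def record(self, blocks_to_confirm):
            # A tx confirmed in b blocks counts as confirmed within
            # b, b+1, ... blocks.
            self.tx_count += 1
            for i in range(blocks_to_confirm - 1, len(self.confirmed_within)):
                self.confirmed_within[i] += 1

        def decay(self):
            # Called once per new block: old data fades away exponentially.
            self.tx_count *= DECAY
            self.confirmed_within = [c * DECAY for c in self.confirmed_within]

        def pct_confirmed_within(self, blocks):
            if self.tx_count == 0:
                return 0.0
            return self.confirmed_within[blocks - 1] / self.tx_count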
-Exclude txs that didn't have all inputs available at mempool entry from the fee/priority calculations
-Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
-Remove old CMinerPolicyEstimator and CBlockAverage code
-New smartfees.py
-Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and we shouldn't be counting how many blocks it takes for transactions to be included.
-Add a policyestimator unit test
mininode.py provides a framework for connecting to a bitcoin node over the p2p
network. NodeConn is the main object that manages connectivity to a node and
provides callbacks; the interface for those callbacks is defined by NodeConnCB.
Also defined are the data structures that Bitcoin Core sends over the
network (CBlock, CTransaction, etc.), along with their serialization and
deserialization functions.
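A rough usage sketch (constructor arguments and callback names here are
approximate and may not match mininode.py exactly):

    from mininode import NodeConn, NodeConnCB, NetworkThread

    class TestNode(NodeConnCB):
        # Callbacks fire as messages arrive from the peer, e.g. on_inv,
        # on_block, on_tx, ...
        def on_inv(self, conn, message):
            print("received inv with %d item(s)" % len(message.inv))

    test_node = TestNode()
    conn = NodeConn('127.0.0.1', 18444, None, test_node)  # regtest p2p port
    NetworkThread().start()  # drives the asyncore event loop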
maxblocksinflight.py is an example test using this framework that tests whether
a node is limiting the maximum number of in-flight block requests.
This also adds support to util.py for specifying the binary to use when
starting nodes (for tests that compare the behavior of different bitcoind
versions), and adds maxblocksinflight.py to the pull tester.
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep total space used by those files to below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
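For example, pruning.py might start a node like this (a sketch using the
framework's helpers; exact arguments may differ):

    # 550 MB is the minimum allowed prune target.
    self.nodes = start_nodes(1, self.options.tmpdir, [["-prune=550"]])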
Thanks to @rdponticelli for earlier work on this feature; this is based
in part on that work.
Immature coinbase spends are allowed in the memory pool if they can be mined in the next block.
They are not allowed in the memory pool if they cannot be mined in the next block.
This regression test tests those edge cases.
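The rule being exercised, as a sketch (COINBASE_MATURITY is 100 blocks):

    def coinbase_spend_allowed_in_mempool(tip_height, coinbase_height,
                                          maturity=100):
        # The spend would be mined at height tip_height + 1, and a
        # coinbase created at coinbase_height may only be spent at a
        # height at least `maturity` blocks later.
        return (tip_height + 1) - coinbase_height >= maturity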
The regtest framework is local, so often there is no need to
discover our external IP. Setting -discover=0 in util.py works around
a shutdown hang caused by GetExternalIP waiting in recv().
This avoids a race condition in which the connection was made but the
version handshake had not yet completed. In that case transactions
won't be broadcast to the peer yet, and the nodes will wait forever for
their mempools to sync.
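One way to avoid the race, sketched here (not necessarily the exact
fix): after connecting, poll getpeerinfo until every peer reports a
non-zero protocol version, i.e. the handshake has finished:

    import time

    def wait_for_handshake(node, timeout=30):
        for _ in range(timeout * 10):
            peers = node.getpeerinfo()
            if peers and all(p["version"] != 0 for p in peers):
                return
            time.sleep(0.1)
        raise AssertionError("version handshake did not complete")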
New RPC methods: return an estimate of the fee (or priority) a
transaction needs to be likely to confirm in a given number of
blocks.
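Assuming the new methods are exposed as estimatefee and
estimatepriority, usage from a test looks roughly like:

    fee_per_kb = node.estimatefee(6)      # BTC per kB, or -1 if no estimate
    priority = node.estimatepriority(6)   # estimated priority, or -1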
Mike Hearn created the first version of this method for estimating fees.
It works as follows:
For transactions that took 1 to N (I picked N=25) blocks to confirm,
keep N buckets, each holding at most 100 entries recording the
fee-per-kilobyte paid by those transactions.
(Separate buckets are kept for transactions that confirmed because
they were high-priority.)
The buckets are filled as blocks are found, and are saved/restored
in a new fee_estimates.dat file in the data directory.
A few variations on Mike's initial scheme:
To estimate the fee needed for a transaction to confirm within X blocks,
all of the samples in all of the buckets are used and a median of
all of the data is used to make the estimate. For example, imagine
25 buckets each containing the full 100 entries. Those 2,500 samples
are sorted, and the estimate of the fee needed to confirm in the very
next block is the 50'th-highest-fee-entry in that sorted list; the
estimate of the fee needed to confirm in the next two blocks is the
150'th-highest-fee-entry, etc.
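A rough sketch of that selection rule, assuming buckets[i] holds the
recorded fee-per-kB samples for transactions that confirmed in i+1
blocks:

    def estimate_fee(buckets, target_blocks, entries_per_bucket=100):
        # Pool every sample, highest fee first.
        all_fees = sorted((fee for bucket in buckets for fee in bucket),
                          reverse=True)
        # 50th-highest entry for a 1-block target, 150th for 2 blocks, etc.
        rank = entries_per_bucket * target_blocks - entries_per_bucket // 2
        if rank > len(all_fees):
            return -1  # not enough data for an estimate
        return all_fees[rank - 1]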
That algorithm has the nice property that estimates of how much fee
you need to pay to get confirmed in block N will always be greater
than or equal to the estimate for block N+1. It would clearly be wrong
to say "pay 11 uBTC and you'll get confirmed in 3 blocks, but pay
12 uBTC and it will take LONGER".
A single block will not contribute more than 10 entries to any one
bucket, so a single miner and a large block cannot overwhelm
the estimates.
Taught bitcoind to close the HTTP connection after it gets a 'stop' command,
to make it easier for the regression tests to cleanly stop.
Move bitcoinrpc files to correct location.
Tidied up the python-based regression tests.