For the per-confirmation-number tracking of data, introduce a scale factor so that at longer horizons confirmations are bucketed together at the resolution of the scale. (Instead of 1008 individual data points for each fee bucket, keep 42 data points, each covering 24 different confirmation values: (1-24), (25-48), etc.)
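A minimal C++ sketch of the bucketing arithmetic described above; the function name is hypothetical and only the 1008/24/42 figures come from the description, the real estimator code is organized differently.

    #include <cassert>
    #include <vector>

    // With scale = 24, confirmation counts 1-24 share index 0, 25-48 share
    // index 1, and so on, so 1008 possible targets need only 42 entries.
    unsigned int PeriodIndex(unsigned int confirmations, unsigned int scale)
    {
        assert(confirmations >= 1 && scale >= 1);
        return (confirmations - 1) / scale;
    }

    int main()
    {
        const unsigned int scale = 24;
        std::vector<double> confAvg(1008 / scale, 0.0); // 42 data points per fee bucket
        assert(PeriodIndex(1, scale) == 0);
        assert(PeriodIndex(24, scale) == 0);
        assert(PeriodIndex(25, scale) == 1);
        assert(PeriodIndex(1008, scale) == 41);
        (void)confAvg;
        return 0;
    }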
Store in the fee estimate file the block span over which estimates were being tracked, so we know which targets can be successfully evaluated with the data in the file. When restarting, use this historical block span to set the valid range of targets until our current span of tracked estimates is just as long.
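A sketch, with invented names, of how the usable target range might be derived after a restart from the block span saved in the file:

    #include <algorithm>
    #include <cstdio>

    unsigned int HighestUsableTarget(unsigned int historicalBlockSpan,
                                     unsigned int currentBlockSpan,
                                     unsigned int maxTrackedTarget)
    {
        // Whichever span is longer tells us how far out we can evaluate,
        // capped by the longest target we track at all.
        return std::min(std::max(historicalBlockSpan, currentBlockSpan), maxTrackedTarget);
    }

    int main()
    {
        // Just restarted: 900 blocks of history in the file, 10 blocks tracked
        // live, so the historical span still governs.
        std::printf("%u\n", HighestUsableTarget(900, 10, 1008));   // 900
        // Once the live span has grown past the historical one, it takes over.
        std::printf("%u\n", HighestUsableTarget(900, 1200, 1008)); // 1008
        return 0;
    }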
Track the first time we see txs in a block that we have been tracking in our mempool. This is used to evaluate the validity of fee estimates for different targets.
Change the logic of estimateSmartFee to check a 60% threshold at half the target, an 85% threshold at the target, and a 95% threshold at double the target. Always check the shortest time horizon possible and ensure that estimates are monotonically decreasing. Add a conservative mode, which makes sure that the 95% threshold is also met at longer time horizons.
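A rough sketch of the three-pass threshold logic. The names are hypothetical, the separate short/medium/long tracking horizons are not modelled, and taking the maximum of the passes is just one way to illustrate the monotonicity requirement:

    #include <algorithm>
    #include <cstdio>
    #include <functional>

    using FeeRate = double; // stand-in for CFeeRate

    FeeRate SmartFeeSketch(int target, bool conservative,
                           const std::function<FeeRate(int, double)>& rawEstimate)
    {
        FeeRate fee = 0;
        fee = std::max(fee, rawEstimate(std::max(target / 2, 1), 0.60)); // 60% at half the target
        fee = std::max(fee, rawEstimate(target, 0.85));                  // 85% at the target
        fee = std::max(fee, rawEstimate(target * 2, 0.95));              // 95% at double the target
        if (conservative) {
            // Conservative mode (illustrative): also require the 95% threshold
            // to hold at a longer horizon.
            fee = std::max(fee, rawEstimate(target * 4, 0.95));
        }
        return fee;
    }

    int main()
    {
        // Toy raw estimator: shorter targets and higher thresholds demand more fee.
        auto raw = [](int target, double threshold) { return threshold * 1000.0 / target; };
        std::printf("%.1f\n", SmartFeeSketch(6, /*conservative=*/false, raw));
        return 0;
    }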
Track information about the ranges of fee rates that were used to calculate the fee estimates (the last range of fee rates in which the data points met the threshold and the first range to fail), and provide an RPC call to return this information.
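The diagnostic data such an RPC could expose might look like the following sketch; the struct and field names are illustrative, not the exact Core types:

    struct BucketRange {
        double startFeeRate = 0;   // lower bound of the fee-rate range
        double endFeeRate = 0;     // upper bound of the fee-rate range
        double withinTarget = 0;   // txs confirmed within the target
        double totalConfirmed = 0; // all confirmed txs in the range
        double inMempool = 0;      // txs still unconfirmed
        double leftMempool = 0;    // txs that left the mempool unconfirmed
    };

    struct EstimationDiagnostics {
        BucketRange pass;  // last range whose data points met the threshold
        BucketRange fail;  // first range that fell short
        double decay = 0;  // exponential decay applied to the data points
    };

    int main()
    {
        EstimationDiagnostics diag;
        diag.pass.startFeeRate = 5000.0;  // e.g. the 5000-10000 range met the threshold
        diag.pass.endFeeRate = 10000.0;
        diag.fail.endFeeRate = 5000.0;    // and the range just below it failed
        return 0;
    }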
Instead of stopping when it encounters a "sufficient" number of transactions which don't meet the threshold for being confirmed within the target, it keeps adding more transactions to see whether the failure is only a temporary blip in the data. This allows a smaller number of required data points.
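A simplified illustration of that change, with invented names and without the decay/scale bookkeeping: on a range that has enough data but misses the threshold, the scan keeps accumulating instead of breaking, so a temporary blip does not end the search:

    #include <cstdio>
    #include <vector>

    struct BucketStats { double confirmed; double total; };

    // Scan from the highest-fee bucket down; return the index of the lowest
    // bucket whose running range still meets `threshold`, or -1 if none does.
    int LowestPassingBucket(const std::vector<BucketStats>& buckets,
                            double threshold, double sufficientTxs)
    {
        double curConf = 0, curTotal = 0;
        int best = -1;
        for (int i = (int)buckets.size() - 1; i >= 0; --i) {
            curConf += buckets[i].confirmed;
            curTotal += buckets[i].total;
            if (curTotal >= sufficientTxs) {
                if (curConf / curTotal >= threshold) {
                    best = i;                // this range passes: record it, start a new range
                    curConf = curTotal = 0;
                }
                // On a failing range we do NOT break: buckets added later may
                // pull the running range back over the threshold.
            }
        }
        return best;
    }

    int main()
    {
        // Index 0 is the lowest fee bucket. The middle bucket alone misses the
        // 60% threshold, but the combined lower range recovers, so the scan
        // returns 0 instead of giving up at the blip.
        std::vector<BucketStats> buckets = {{10, 10}, {3, 10}, {9, 10}};
        std::printf("%d\n", LowestPassingBucket(buckets, 0.60, 10.0)); // prints 0
        return 0;
    }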
Cleanups requested in #10287:
Change "Test #:" comments to "Test:"
Change BOOST_CHECK(... == ...) to BOOST_CHECK_EQUAL(..., ...)
Remove three unnecessary if statements
vToFetch is never used after its declaration. When it is checked for being
non-empty, the check always evaluates to false. In the best case this is
optimized away by the compiler; in the worst case it wastes CPU cycles.
Either way, it should be removed.
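For illustration, the pattern being removed looks roughly like this (simplified, not the actual source):

    #include <cstdio>
    #include <vector>

    void ProcessSketch()
    {
        std::vector<int> vToFetch; // declared, but nothing is ever added to it

        // ... work that never touches vToFetch ...

        if (!vToFetch.empty()) { // always false
            std::printf("fetching %zu items\n", vToFetch.size()); // unreachable
        }
    }

    int main()
    {
        ProcessSketch();
        return 0;
    }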
84973d3 Merge #454: Remove residual parts from the schnorr experiment.
5e95bf2 Remove residual parts from the schnorr experiment.
cbc20b8 Merge #452: Minor optimizations to _scalar_inverse to save 4M
4cc8f52 Merge #437: Unroll secp256k1_fe_(get|set)_b32 to make them much faster.
465159c Further shorten the addition chain for scalar inversion.
a2b6b19 Fix benchmark print_number infinite loop.
8b7680a Unroll secp256k1_fe_(get|set)_b32 for 10x26.
aa84990 Unroll secp256k1_fe_(get|set)_b32 for 5x52.
cf12fa1 Minor optimizations to _scalar_inverse to save 4M
1199492 Merge #408: Add `secp256k1_ec_pubkey_negate` and `secp256k1_ec_privkey_negate`
6af0871 Merge #441: secp256k1_context_randomize: document.
ab31a52 Merge #444: test: Use checked_alloc
eda5c1a Merge #449: Remove executable bit from secp256k1.c
51b77ae Remove executable bit from secp256k1.c
5eb030c test: Use checked_alloc
72d952c FIXUP: Missing "is"
70ff29b secp256k1_context_randomize: document.
9d560f9 Merge #428: Exhaustive recovery
8e48aa6 Add `secp256k1_ec_pubkey_negate` and `secp256k1_ec_privkey_negate`
2cee5fd exhaustive tests: add recovery module
678b0e5 exhaustive tests: remove erroneous comment from ecdsa_sig_sign
03ff8c2 group_impl.h: remove unused `secp256k1_ge_set_infinity` function
a724d72 configure: add --enable-coverage to set options for coverage analysis
b595163 recovery: add tests to cover API misusage
6f8ae2f ecdh: test NULL-checking of arguments
25e3cfb ecdsa_impl: replace scalar if-checks with VERIFY_CHECKs in ecdsa_sig_sign
git-subtree-dir: src/secp256k1
git-subtree-split: 84973d393ac240a90b2e1a6538c5368202bc2224
Rather than re-adding disconnected block transactions to the mempool
immediately, store them in a separate disconnectpool for later processing,
because we expect most such transactions to reappear in the chain that is
still to be connected (and thus we can avoid the work of reprocessing those
transactions through the mempool altogether).
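A minimal sketch, with invented names, of the deferred processing: queue the transactions from disconnected blocks, then once the new tip is connected resubmit only those that the new chain did not already confirm:

    #include <cstdio>
    #include <memory>
    #include <set>
    #include <string>
    #include <vector>

    struct Tx { std::string txid; };

    struct DisconnectPoolSketch {
        std::vector<std::shared_ptr<Tx>> queued;

        void AddFromDisconnectedBlock(const std::vector<std::shared_ptr<Tx>>& blockTxs)
        {
            queued.insert(queued.end(), blockTxs.begin(), blockTxs.end());
        }

        // Called after the new tip is connected: transactions the new chain
        // confirmed never have to go through mempool validation again.
        template <typename SubmitFn>
        void UpdateMempool(const std::set<std::string>& confirmedInNewChain, SubmitFn submit)
        {
            for (const auto& tx : queued) {
                if (!confirmedInNewChain.count(tx->txid)) submit(*tx);
            }
            queued.clear();
        }
    };

    int main()
    {
        DisconnectPoolSketch pool;
        pool.AddFromDisconnectedBlock({std::make_shared<Tx>(Tx{"a"}), std::make_shared<Tx>(Tx{"b"})});
        // "a" reappeared in the newly connected chain, so only "b" goes back
        // through mempool acceptance.
        pool.UpdateMempool(std::set<std::string>{"a"},
                           [](const Tx& tx) { std::printf("resubmit %s\n", tx.txid.c_str()); });
        return 0;
    }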