This resolves an issue where estimatesmartfee would return 999
sat/byte instead of 1000, due to floating-point loss of precision.
Thanks to sipa for suggesting is_integral.
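A minimal sketch of the kind of compile-time guard is_integral enables here, assuming a CFeeRate-style constructor (FeeRateSketch is an illustrative stand-in, not the real class):

```
#include <cstdint>
#include <type_traits>

using CAmount = int64_t;

class FeeRateSketch {  // illustrative stand-in for CFeeRate
    CAmount nSatoshisPerK;
public:
    // Templating the ctor lets us reject floating-point arguments at
    // compile time instead of silently truncating them (999 vs 1000).
    template <typename I>
    explicit FeeRateSketch(const I sat_per_k) : nSatoshisPerK(sat_per_k) {
        static_assert(std::is_integral<I>::value,
                      "fee rates should be constructed from integers, not floats");
    }
};
```

With this guard, FeeRateSketch(1000) compiles while FeeRateSketch(999.9) fails to compile rather than truncating to 999.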
No sensible user will ever keep the default settings here, so not
having sensible defaults only serves to screw users who are
paying less attention, which makes for terrible defaults.
* This removes block-size-limiting code in favor of GBT clients
doing the limiting themselves (if at all).
* -blockmaxsize is deprecated and only used to calculate an implied
  blockmaxweight (see the sketch after this list), addressing confusion
  from multiple users.
* getmininginfo's currentblocksize field returned garbage values and
  has been removed, also removing a GetSerializeSize call in some
  block-generation inner loops and potentially addressing some
  performance edge cases.
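A sketch of the implied conversion, assuming the consensus constants; the function name is hypothetical and this is not the exact init-time code:

```
#include <algorithm>

// Consensus constants as in Bitcoin Core.
static const unsigned int WITNESS_SCALE_FACTOR = 4;
static const unsigned int MAX_BLOCK_WEIGHT = 4000000;

unsigned int ImpliedBlockMaxWeight(unsigned int block_max_size)
{
    // A block of `block_max_size` base bytes weighs at most
    // block_max_size * WITNESS_SCALE_FACTOR, so a legacy -blockmaxsize
    // maps onto that weight, capped at the consensus maximum.
    return std::min(block_max_size * WITNESS_SCALE_FACTOR, MAX_BLOCK_WEIGHT);
}
```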
CFeeRate and CTxMemPoolEntry have explicitly defined copy ctors with the same functionality as the implicit default copy ctors that would have been generated otherwise.
Besides being redundant, it violates the rule of three (see https://en.wikipedia.org/wiki/Rule_of_three_(C%2B%2B_programming) ).
(Of course, the rule of three doesn't -really- cause a resource management issue here, but the reason for that is exactly that there is no need for an explicit copy ctor in the first place since no resources are being managed).
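An illustrative sketch of the redundancy (FeeRateLike is a hypothetical stand-in, not either real class):

```
#include <cstdint>

// When a type manages no resources, the compiler-generated copy ctor
// already does the right thing, so the explicit one below is pure
// redundancy.
class FeeRateLike {
    int64_t nSatoshisPerK;
public:
    explicit FeeRateLike(int64_t sats) : nSatoshisPerK(sats) {}
    // Redundant: member-for-member identical to the implicit copy ctor.
    // FeeRateLike(const FeeRateLike& o) : nSatoshisPerK(o.nSatoshisPerK) {}
    // Since no resources are managed, simply omit it (rule of zero).
};
```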
Change the parameter for conservative estimates to be an estimate_mode string.
Change estimatesmartfee to never return -1 on failure; instead, omit the feerate
and return an error string. Throw a JSONRPC error on an invalid nblocks parameter.
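A sketch of the resulting result shape, using hypothetical stand-in types rather than the RPC's actual UniValue plumbing:

```
#include <cstdint>
#include <string>
#include <vector>

using CAmount = int64_t;

// On failure the feerate field is simply absent and an error string is
// reported, instead of a feerate of -1.
struct SmartFeeResult {
    bool have_feerate = false;
    CAmount feerate_sat_per_kb = 0;   // only meaningful if have_feerate
    std::vector<std::string> errors;  // e.g. "Insufficient data or no feerate found"
};
```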
This redefines dust to be the value of an output such that it would
cost that value in fees to (create and) spend the output at the dust
relay rate. The previous definition was that it would cost 1/3 of the
value. The default dust relay rate is correspondingly increased to
3000 sat/kB so the actual default dust output value of 546 satoshis
for a non-segwit output remains unchanged. This commit is a refactor
only, unless a dustrelayfee is passed on the command line, in which case
that number now needs to be increased by a factor of 3 to get the same
behavior. -dustrelayfee is a hidden command-line option.
Note: It's not exactly a refactor due to edge case changes in rounding
as evidenced by the required change to the unit test.
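A worked example of why the default threshold is unchanged, assuming the usual non-segwit P2PKH sizes (34 bytes to create the output, roughly 148 bytes to spend it):

```
#include <cstdint>
#include <cstdio>

int main()
{
    // New definition: dust = fee to create and spend the output at the
    // dust relay rate, with the default rate raised 3x to 3000 sat/kB.
    const int64_t dust_relay_sat_per_kb = 3000;        // new default
    const int64_t create_plus_spend_bytes = 34 + 148;  // 182 bytes
    const int64_t dust =
        create_plus_spend_bytes * dust_relay_sat_per_kb / 1000;
    printf("dust threshold: %lld sat\n", (long long)dust);  // 546, as before
}
```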
This check has been moved to the wallet logic in GetMinimumFee. The rpc call to
estimatesmartfee will no longer return a result maxed with the mempool min
fee, but automated fee calculations from the wallet will produce the same result
as before, and the coincontrol and sendcoins dialogs in the GUI will
display the correct prospective fee.
Changes to policy/fees.cpp include a big whitespace indentation change.
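A simplified sketch of the wallet-side logic, with a hypothetical signature standing in for GetMinimumFee's real one:

```
#include <algorithm>
#include <cstdint>

using CAmount = int64_t;

// The mempool min fee check now lives here, in the wallet, rather than
// inside the estimatesmartfee RPC: the estimate is maxed with the
// mempool min fee before being used.
CAmount MinimumFeeSketch(CAmount estimated_fee, CAmount mempool_min_fee)
{
    return std::max(estimated_fee, mempool_min_fee);
}
```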
Some people keep thinking that MAX_BLOCK_BASE_SIZE is a separate
size limit from the weight limit when in fact it is superfluous,
and used in early tests before the witness data has been
validated or just to compute worst case sizes. The size checks
that use it would not behave any differently consensus wise
if they were eliminated completely.
Its correct value is not independently settable but is a function
of the weight limit and weight formula.
This patch just eliminates it and uses the scale factor as
required to compute the worst-case constants.
It also moves the weight factor out of primitives into consensus,
which is a more logical place for it.
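The implied constant, stated with the consensus constants (a sketch of the relationship, not a patch excerpt):

```
// weight = 3 * base_size + total_size, so for a block with no witness
// data (base_size == total_size) the weight limit implies
// base_size <= MAX_BLOCK_WEIGHT / WITNESS_SCALE_FACTOR.
static const unsigned int MAX_BLOCK_WEIGHT = 4000000;
static const unsigned int WITNESS_SCALE_FACTOR = 4;

static const unsigned int IMPLIED_MAX_BLOCK_BASE_SIZE =
    MAX_BLOCK_WEIGHT / WITNESS_SCALE_FACTOR;  // 1000000 bytes
```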
Add support for setting each of these attributes on a per RPC call basis to sendtoaddress, sendmany, fundrawtransaction (already had RBF), and bumpfee (already had RBF and conf target).
GetMinimumFee now passes the conservative argument into estimateSmartFee.
Call CalculateEstimateType(mode) before calling GetMinimumFee or estimateSmartFee to determine the value of this argument.
CCoinControl can now be used to control this mode.
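A hypothetical sketch of carrying these per-call attributes on a coin-control object; field names are illustrative, not the exact members:

```
// FeeEstimateMode values match Bitcoin Core's enum; the struct itself
// is a stand-in for CCoinControl.
enum class FeeEstimateMode { UNSET, ECONOMICAL, CONSERVATIVE };

struct CoinControlSketch {
    int confirm_target = 0;   // 0 = use the wallet default
    bool signal_rbf = false;  // opt this transaction into BIP 125
    FeeEstimateMode fee_mode = FeeEstimateMode::UNSET;
};
```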
A few "a->an" and "an->a".
"Shows, if the supplied default SOCKS5 proxy" -> "Shows if the supplied default SOCKS5 proxy". Change made on 3 occurrences.
"without fully understanding the ramification of a command" -> "without fully understanding the ramifications of a command".
Removed duplicate words such as "the the".
```
$ git blame src/policy/fees.cpp | grep becuase
3810e976 (2017-03-07 11:33:44 -0500 789) * checks for 2*target becuase we are taking the max over all time
$ git blame src/policy/fees.h | grep successfullly
2d2e1705 (2017-04-12 12:29:03 -0400 54) * representing that a tx was successfullly confirmed in less than or equal to
$ git blame src/wallet/feebumper.cpp | grep "hasen't"
a3878374 (2017-05-11 09:34:39 +0200 258) // make sure the transaction still has no descendants and hasen't been mined in the meantime
```
For the per-confirmation-number tracking of data, introduce a scale factor so that at the longer horizons confirmations are bucketed together at a resolution of the scale (instead of 1008 individual data points for each fee bucket, have 42 data points, each covering 24 different confirmation values: (1-24), (25-48), etc.).
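The arithmetic from the description, as a runnable check (the bucket mapping shown is illustrative):

```
#include <cstdio>

int main()
{
    // At the longest horizon, 1008 per-confirmation data points per fee
    // bucket become 42 points, each covering 24 confirmation values.
    const unsigned int horizon_blocks = 1008;
    const unsigned int scale = 24;
    printf("data points: %u\n", horizon_blocks / scale);  // 42
    // Confirmation count -> bucket index, so confs 1-24 share index 0,
    // 25-48 share index 1, and so on.
    const unsigned int conf = 25;
    printf("conf %u -> bucket %u\n", conf, (conf - 1) / scale);  // 1
}
```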
Store in the fee estimate file the block span over which we were tracking estimates, so we know which targets we can successfully evaluate with the data in the file. On restart, use this historical block span as the valid range of targets until our current span of tracked estimates is just as long.
Track the first time we see txs in a block that we have been tracking in our mempool. This is used to evaluate the validity of fee estimates for different targets.
Change the logic of estimateSmartFee to check a 60% threshold at half the target, an 85% threshold at the target, and a 95% threshold at double the target. Always check the shortest time horizon possible and ensure that estimates are monotonically decreasing. Add a conservative mode, which makes sure that the 95% threshold is also met at longer time horizons.
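A hypothetical helper making the schedule concrete (this is not the actual estimateSmartFee code):

```
#include <algorithm>
#include <utility>
#include <vector>

// For a requested confirmation target, three (horizon, success
// threshold) pairs are checked.
std::vector<std::pair<int, double>> ThresholdSchedule(int target)
{
    return {
        {std::max(target / 2, 1), 0.60},  // 60% success at half the target
        {target,                  0.85},  // 85% success at the target
        {target * 2,              0.95},  // 95% success at double the target
    };
}
```

Conservative mode additionally requires the 95% threshold to hold at the longer horizons.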
Track information on the ranges of fee rates that were used to calculate the fee estimates (the last range of fee rates in which the data points met the threshold and the first to fail), and provide an RPC call to return this information.
Instead of stopping if it encounters a "sufficient" number of transactions which don't meet the threshold for being confirmed within the target, it keeps looking to add more transactions to see if there is a temporary blip in the data. This allows a smaller number of required data points.