The score index is meant to represent the order of priority for being included in a block for miners. Initially this is set to the transaction's modified (by any feeDelta) fee rate. Index improvements and unit tests by sdaftuar.
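For illustration only (stand-in fields rather than the actual index definition), the ordering amounts to comparing modified feerates:

    #include <cstdint>

    // Sketch: the "score" of an entry is its feerate computed from the fee as
    // modified by any prioritisetransaction feeDelta; higher scores get mined
    // first. ScoreEntry is a stand-in for CTxMemPoolEntry, not the real class.
    struct ScoreEntry { int64_t fee; int64_t feeDelta; int64_t size; };

    // Returns true if a has a strictly higher score (modified feerate) than b.
    bool HigherScore(const ScoreEntry& a, const ScoreEntry& b)
    {
        // Compare (fee + delta) / size by cross-multiplication to avoid division.
        return (a.fee + a.feeDelta) * b.size > (b.fee + b.feeDelta) * a.size;
    }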
Store the sum of legacy and P2SH sig op counts. This is calculated in AcceptToMemoryPool, and storing it saves redoing the expensive calculation in block template creation.
Compute the value of inputs that are already in the chain at the time of mempool entry, and only increase priority due to aging for those inputs. This effectively changes the CTxMemPoolEntry's GetPriority calculation from an upper bound to a lower bound.
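A minimal sketch of the resulting calculation, with field names that mirror CTxMemPoolEntry but in a simplified stand-alone form:

    // Sketch of the lower-bound priority calculation: only the inputs that were
    // already in the chain when the tx entered the mempool (inChainInputValue)
    // contribute age-based priority growth.
    struct PriorityEntry {
        double entryPriority;       // priority computed at mempool entry
        unsigned int entryHeight;   // chain height at mempool entry
        double inChainInputValue;   // sum of input values already confirmed at entry
        double nModSize;            // modified tx size used for priority
    };

    double GetPrioritySketch(const PriorityEntry& e, unsigned int currentHeight)
    {
        double deltaPriority =
            ((double)(currentHeight - e.entryHeight) * e.inChainInputValue) / e.nModSize;
        return e.entryPriority + deltaPriority;   // never overstates true priority
    }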
These are more useful fee and priority estimation functions. If there is no fee/priority high enough for the target you are aiming for, they return the estimate for the lowest target that you can reliably obtain, which is better than defaulting to the minimum. They also pass back the target for which they returned an answer.
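A hedged sketch of that lookup behavior; estimateFeeAt() is an assumed stand-in for the per-target estimator, not the real API:

    // estimateFeeAt() stands in for the per-target estimator (assumed); it
    // returns a negative value when it has no reliable answer for that target.
    double estimateFeeAt(int target);

    // Sketch: walk towards longer targets until one yields a usable estimate,
    // and report via *answerFoundAtTarget which target actually answered.
    double EstimateSmartFeeSketch(int confTarget, int maxTarget, int* answerFoundAtTarget)
    {
        double estimate = -1;
        int target = confTarget;
        while (estimate < 0 && target <= maxTarget) {
            estimate = estimateFeeAt(target);
            if (estimate < 0) ++target;
        }
        if (answerFoundAtTarget) *answerFoundAtTarget = target;
        return estimate;  // still negative if no target had enough data
    }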
After each transaction is added to the mempool, we first call
Expire() to remove old transactions, then throw away the
lowest-feerate transactions.
After throwing away transactions by feerate, we set the minimum
relay fee to the maximum feerate of any removed
transaction-and-dependent set, plus the default minimum relay fee.
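A rough sketch of the trim-and-bump step described in the two paragraphs above (a flat list of packages stands in for the real mempool indexes; fee rates in satoshis per kB):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Illustrative only: a "package" is a transaction plus its in-mempool
    // dependents, with an aggregate feerate (sat/kB) and memory usage.
    struct Package { int64_t feeRatePerK; size_t memUsage; };

    // Evict lowest-feerate packages until total usage fits, then raise the
    // rolling minimum relay feerate to the worst evicted feerate plus the
    // default minimum relay fee, as described above.
    int64_t TrimToSizeSketch(std::vector<Package>& pool, size_t& totalUsage,
                             size_t sizeLimit, int64_t defaultMinRelayFeePerK,
                             int64_t rollingMinFeePerK)
    {
        std::sort(pool.begin(), pool.end(),
                  [](const Package& a, const Package& b) { return a.feeRatePerK < b.feeRatePerK; });
        int64_t maxFeeRateRemoved = 0;
        while (!pool.empty() && totalUsage > sizeLimit) {
            maxFeeRateRemoved = std::max(maxFeeRateRemoved, pool.front().feeRatePerK);
            totalUsage -= pool.front().memUsage;
            pool.erase(pool.begin());
        }
        if (maxFeeRateRemoved > 0)
            rollingMinFeePerK = std::max(rollingMinFeePerK,
                                         maxFeeRateRemoved + defaultMinRelayFeePerK);
        return rollingMinFeePerK;
    }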
After the next block is received, the minimum relay fee is allowed
to decrease exponentially. Its half-life defaults to 12 hours, but
is decreased to 6 hours if the mempool is smaller than half its
maximum size, and 3 hours if the mempool is smaller than a quarter
its maximum size.
The minimum -maxmempool size is 40 * -limitdescendantsize, as it is
easy for an attacker to play games with the cheapest
-limitdescendantsize transactions. -maxmempool defaults to 300 MB.
This disables high-priority transaction relay when the min relay
fee adjustment is >0 (i.e. when the mempool is full). When the relay
fee adjustment drops below half the default minimum relay fee, it is
set to 0 (re-enabling priority-based free relay).
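A simplified stand-alone sketch of how the exponential decay and the reset to zero described above fit together (fee rates in sat/kB, times in seconds; names are illustrative):

    #include <cmath>
    #include <cstdint>

    // A 12-hour base half-life is assumed, matching the description above.
    static const double ROLLING_FEE_HALFLIFE = 12 * 60 * 60;

    double DecayRollingMinFee(double rollingMinFeePerK, int64_t now, int64_t lastUpdate,
                              size_t mempoolUsage, size_t maxMempoolSize,
                              double defaultMinRelayFeePerK)
    {
        double halflife = ROLLING_FEE_HALFLIFE;
        if (mempoolUsage < maxMempoolSize / 4)
            halflife /= 4;                                   // 3 hours below 1/4 full
        else if (mempoolUsage < maxMempoolSize / 2)
            halflife /= 2;                                   // 6 hours below 1/2 full
        rollingMinFeePerK /= std::pow(2.0, (now - lastUpdate) / halflife);
        // Once the adjustment falls below half the default minimum relay fee,
        // drop it to zero, which re-enables priority-based free relay.
        if (rollingMinFeePerK < defaultMinRelayFeePerK / 2)
            rollingMinFeePerK = 0;
        return rollingMinFeePerK;
    }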
(Note: the 9x multiplier on (void*)s for CTxMemPool::DynamicMemoryUsage
was accidentally introduced in 5add7a7, but should have waited for this
commit, which adds the extra index.)
CalculateMemPoolAncestors was always looping over a transaction's inputs
to find in-mempool parents. When adding a new transaction, this is the
correct behavior, but when removing a transaction, we want to use the
ancestor set that would be calculated by walking mapLinks. In general
these are the same set, except during a reorg, when the mempool is in
an inconsistent state and the mapLinks-based calculation is the
correct one.
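One way to picture the fix is a flag selecting where the seed parent set comes from; a simplified sketch with illustrative names (not the real CalculateMemPoolAncestors signature):

    #include <set>
    #include <vector>

    // Entry stands in for a mempool entry; field names are illustrative.
    struct Entry {
        std::vector<Entry*> vinParents;    // parents found by looking up this tx's inputs
        std::set<Entry*> mapLinksParents;  // parents as recorded in mapLinks
    };

    // fSearchForParents == true: we're adding a new tx, so scan its inputs for
    // in-mempool parents. false: we're removing an existing tx, so trust
    // mapLinks, which stays consistent even while a reorg leaves the mempool
    // in an inconsistent state.
    std::set<Entry*> CalculateAncestorsSketch(Entry* tx, bool fSearchForParents)
    {
        std::set<Entry*> parents;
        if (fSearchForParents)
            parents.insert(tx->vinParents.begin(), tx->vinParents.end());
        else
            parents = tx->mapLinksParents;

        // Walk upwards from the seed parents to collect the full ancestor set.
        std::set<Entry*> ancestors;
        std::vector<Entry*> stack(parents.begin(), parents.end());
        while (!stack.empty()) {
            Entry* cur = stack.back();
            stack.pop_back();
            if (ancestors.insert(cur).second)
                stack.insert(stack.end(), cur->mapLinksParents.begin(), cur->mapLinksParents.end());
        }
        return ancestors;
    }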
Associate with each CTxMemPoolEntry the total size and fees of its
descendant mempool transactions. Sort the mempool by max(feerate of
entry, feerate of entry plus descendants). Update these statistics
on-the-fly as transactions enter or leave the mempool.
Also add ancestor and descendant limiting, so that transactions can
be rejected if the number or size of unconfirmed ancestors exceeds
a target, or if adding a transaction would cause some other mempool
entry to have too large a set of unconfirmed in-mempool descendants
(in count or total size).
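The sort described above boils down to a comparator along these lines (stand-in fields, not the actual boost::multi_index comparator):

    #include <cstdint>

    // Sketch of the "descendant score": an entry sorts by whichever is higher,
    // its own feerate or the aggregate feerate of it plus all of its in-mempool
    // descendants. Stats is a stand-in for the cached CTxMemPoolEntry fields.
    struct Stats {
        int64_t fee, size;                                // this transaction alone
        int64_t feeWithDescendants, sizeWithDescendants;  // including descendants
    };

    // Returns true if a sorts below b, i.e. a is the better eviction candidate.
    bool CompareByDescendantScore(const Stats& a, const Stats& b)
    {
        // Pick, for each entry, the higher of its own and its with-descendants
        // feerate; comparisons use cross-multiplication instead of division.
        bool aUseDesc = (double)a.feeWithDescendants * a.size > (double)a.fee * a.sizeWithDescendants;
        bool bUseDesc = (double)b.feeWithDescendants * b.size > (double)b.fee * b.sizeWithDescendants;
        double aFee  = aUseDesc ? (double)a.feeWithDescendants  : (double)a.fee;
        double aSize = aUseDesc ? (double)a.sizeWithDescendants : (double)a.size;
        double bFee  = bUseDesc ? (double)b.feeWithDescendants  : (double)b.fee;
        double bSize = bUseDesc ? (double)b.sizeWithDescendants : (double)b.size;
        return aFee * bSize < bFee * aSize;
    }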
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then, for each bucket, the class calculates what percentage of those transactions were confirmed within various numbers of blocks. It does this by keeping, for each bucket and confirmation count, an exponentially decaying moving history of the percentage of transactions in that bucket that were confirmed within that number of blocks.
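As an illustration of that bookkeeping (not the actual TxConfirmStats layout), the decaying history for a single bucket could look like this:

    #include <vector>

    // confAvg[i] is a decayed count of this bucket's transactions that confirmed
    // within i+1 blocks; txCtAvg is a decayed count of all transactions that
    // landed in the bucket. Their ratio is the tracked percentage.
    struct BucketHistory {
        double decay;                 // e.g. 0.998, applied once per block
        std::vector<double> confAvg;
        double txCtAvg = 0;

        BucketHistory(int maxConfirms, double d) : decay(d), confAvg(maxConfirms, 0.0) {}

        // Called once per new block: older observations gradually lose weight.
        void DecayStep() {
            for (double& v : confAvg) v *= decay;
            txCtAvg *= decay;
        }

        // Record a transaction from this bucket that confirmed in `blocks` blocks.
        void Record(int blocks) {
            if (blocks < 1 || blocks > (int)confAvg.size()) return;
            for (size_t i = blocks - 1; i < confAvg.size(); ++i) confAvg[i] += 1.0;
            txCtAvg += 1.0;
        }

        // Fraction of this bucket's transactions confirmed within `target` blocks.
        double SuccessRate(int target) const {
            return txCtAvg > 0 ? confAvg[target - 1] / txCtAvg : 0.0;
        }
    };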
- Eliminate txs which didn't have all inputs available at entry from fee/pri calcs
- Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
- Remove old CMinerPolicyEstimator and CBlockAverage code
- New smartfees.py
- Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and we shouldn't be counting how many blocks it takes for transactions to be included.
- Add a policyestimator unit test
- ensure consistent usage in header files
- also add a blank line after the copyright header where missing
- also remove orphan newlines at the end of some files
New RPC methods: return an estimate of the fee (or priority) a
transaction needs to be likely to confirm in a given number of
blocks.
Mike Hearn created the first version of this method for estimating fees.
It works as follows:
For transactions that took 1 to N (I picked N=25) blocks to confirm,
keep N buckets with at most 100 entries in each recording the
fees-per-kilobyte paid by those transactions.
(Separate buckets are kept for transactions that confirmed because
they are high-priority.)
The buckets are filled as blocks are found, and are saved/restored
in a new fee_estimates.dat file in the data directory.
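A minimal sketch of that per-bucket sample store (stand-in names; N and the per-bucket cap follow the description above):

    #include <cstddef>
    #include <deque>
    #include <vector>

    // buckets[i] holds fee-per-kB samples for transactions that took i+1
    // blocks to confirm, capped at 100 entries each (oldest dropped first).
    struct FeeSamples {
        std::vector<std::deque<double>> buckets;
        explicit FeeSamples(size_t nTargets = 25) : buckets(nTargets) {}

        void Record(size_t blocksToConfirm, double feePerK) {
            if (blocksToConfirm < 1 || blocksToConfirm > buckets.size()) return;
            std::deque<double>& b = buckets[blocksToConfirm - 1];
            b.push_back(feePerK);
            if (b.size() > 100) b.pop_front();
        }
    };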
A few variations on Mike's initial scheme:
To estimate the fee needed for a transaction to confirm in X blocks,
all of the samples in all of the buckets are used and a median of
all of the data is used to make the estimate. For example, imagine
25 buckets each containing the full 100 entries. Those 2,500 samples
are sorted, and the estimate of the fee needed to confirm in the very
next block is the 50th-highest fee entry in that sorted list; the
estimate of the fee needed to confirm in the next two blocks is the
150th-highest fee entry, etc.
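With those buckets in hand, the lookup reduces to something like the following sketch (not the shipped estimator):

    #include <algorithm>
    #include <functional>
    #include <vector>

    // Pool every bucket's fee-per-kB samples, sort highest-first, and read off
    // the entry at 1-based position target*100 - 50 (the 50th-highest for the
    // next block, the 150th-highest for two blocks, and so on).
    double EstimateFeeSketch(const std::vector<std::vector<double>>& buckets, int target)
    {
        std::vector<double> all;
        for (const std::vector<double>& b : buckets)
            all.insert(all.end(), b.begin(), b.end());
        if (all.empty() || target < 1) return -1;   // no data: no estimate
        std::sort(all.begin(), all.end(), std::greater<double>());
        size_t pos = (size_t)target * 100 - 50;     // 1-based position in sorted list
        size_t idx = std::min(all.size(), pos) - 1; // 0-based, clamped to last entry
        return all[idx];
    }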
That algorithm has the nice property that estimates of how much fee
you need to pay to get confirmed in block N will always be greater
than or equal to the estimate for block N+1. It would clearly be wrong
to say "pay 11 uBTC and you'll get confirmed in 3 blocks, but pay
12 uBTC and it will take LONGER".
A single block will not contribute more than 10 entries to any one
bucket, so a single miner and a large block cannot overwhelm
the estimates.