Update the unittest that is meant to catch a transaction that is invalid
because it has a null input. The old test failed not for that reason,
but because the transaction was considered a coinbase with a too-large
script; that case is already covered by a different test.
The new test is *not* a coinbase since it has two inputs, but one of
them is null. This really checks the corresponding code path in
CheckTransaction.
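
Roughly, the code path being exercised looks like the following. This is
a standalone sketch, not the actual CheckTransaction implementation; the
OutPoint/TxIn/Tx types are simplified stand-ins for Core's classes.

```cpp
// Simplified stand-in types; Core's CheckTransaction performs this check
// (among many others) on real CTransaction objects.
#include <cstdint>
#include <iostream>
#include <vector>

struct OutPoint {
    std::vector<uint8_t> hash = std::vector<uint8_t>(32, 0); // txid being spent
    uint32_t n = 0xffffffff;                                 // output index
    bool IsNull() const {
        for (uint8_t b : hash) if (b != 0) return false;
        return n == 0xffffffff;
    }
};

struct TxIn { OutPoint prevout; };

struct Tx {
    std::vector<TxIn> vin;
    // A coinbase is the one transaction allowed a null prevout,
    // and it must have exactly one input.
    bool IsCoinBase() const { return vin.size() == 1 && vin[0].prevout.IsNull(); }
};

// Returns false ("bad-txns-prevout-null") if a non-coinbase transaction
// has any null input.
bool CheckNullInputs(const Tx& tx) {
    if (tx.IsCoinBase()) return true;
    for (const TxIn& in : tx.vin)
        if (in.prevout.IsNull()) return false;
    return true;
}

int main() {
    Tx tx;
    tx.vin.resize(2);               // two inputs, so definitely not a coinbase
    tx.vin[0].prevout.hash[0] = 1;  // first input spends a real outpoint
    tx.vin[0].prevout.n = 0;
    // second input is left null: the transaction must be rejected
    std::cout << (CheckNullInputs(tx) ? "valid" : "invalid") << '\n'; // "invalid"
}
```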
Transactions are not allowed into the memory pool or selected for
inclusion in a block until chainActive.Tip()->GetMedianTimePast()
exceeds their lock times. However, blocks that include transactions
which are only mature under the old rules are still accepted; this is
*not* the soft fork required to actually rely on the new constraint in
production.
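
For illustration, a minimal standalone sketch of the acceptance rule. It
ignores the fact that real nLockTime values below 500000000 denote block
heights rather than timestamps, and the simplified GetMedianTimePast()
below only mirrors the name of the Core method.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

// Median of the timestamps of the last (up to) 11 blocks.
int64_t GetMedianTimePast(std::vector<int64_t> lastBlockTimes) {
    std::sort(lastBlockTimes.begin(), lastBlockTimes.end());
    return lastBlockTimes[lastBlockTimes.size() / 2];
}

// Under the new policy, a time-locked transaction is accepted to the mempool
// (or selected for a block) only once the tip's median time past has passed
// its lock time.
bool IsFinalForMempool(int64_t nLockTime, int64_t medianTimePast) {
    return nLockTime < medianTimePast;
}

int main() {
    std::vector<int64_t> tipTimes = {1000, 1060, 1120, 1180, 1240, 1300,
                                     1360, 1420, 1480, 1540, 1600};
    int64_t mtp = GetMedianTimePast(tipTimes);            // 1300
    std::cout << IsFinalForMempool(1200, mtp) << ' '      // 1: lock time already past
              << IsFinalForMempool(1500, mtp) << '\n';    // 0: still locked
}
```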
Adds an `obfuscate` parameter to `CLevelDBWrapper` and makes use of it
for all new chainstate stores built via `CCoinsViewDB`. Also adds an
`Xor` method to `CDataStream`.
Thanks to @sipa, @laanwj, @pstratem, @dexX7, @KyrosKrane, and @gmaxwell.
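
The obfuscation boils down to a repeating-key XOR of each value before it
is written to (and after it is read from) LevelDB. A standalone sketch of
that transform, not Core's actual CDataStream::Xor code:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// XOR every byte of `data` with `key`, repeating the key as needed.
// An empty key (or an all-zero key) leaves the data unchanged, which is how
// old, non-obfuscated chainstate databases keep working.
void XorInPlace(std::vector<unsigned char>& data, const std::vector<unsigned char>& key) {
    if (key.empty()) return;
    for (size_t i = 0, j = 0; i != data.size(); ++i) {
        data[i] ^= key[j];
        if (++j == key.size()) j = 0;
    }
}

int main() {
    std::vector<unsigned char> value = {'c', 'o', 'i', 'n', 's'};
    std::vector<unsigned char> key = {0x13, 0x37};
    XorInPlace(value, key);   // obfuscate before writing to LevelDB
    XorInPlace(value, key);   // applying the same key again restores the data
    for (unsigned char c : value) std::putchar(c);  // prints "coins"
    std::putchar('\n');
}
```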
Previously only one PUSHDATA was allowed, needlessly limiting
applications such as matching OP_RETURN contents with bloom filters that
operate on a per-PUSHDATA level. Now any combination that passes
IsPushOnly() is allowed, so long as the total size of the scriptPubKey
is less than 42 bytes (unchanged modulo non-minimal PUSHDATA encodings).
Also, this fixes the odd bug where previously the PUSHDATA could be
replaced by any single opcode, even sigop-consuming opcodes such as
CHECKMULTISIG (20 sigops!).
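
As a rough sketch of the relaxed standardness rule (constants and helper
names here are illustrative, and only direct pushes up to 75 bytes are
decoded; OP_PUSHDATA1/2/4 and the small-integer opcodes are omitted):

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

static const unsigned char OP_RETURN_BYTE = 0x6a;
static const size_t MAX_NULLDATA_SCRIPT_SIZE = 42; // illustrative, from the text above

// Very small decoder: opcodes 0x01..0x4b push that many following bytes.
bool IsPushOnlySketch(const std::vector<unsigned char>& script, size_t pos) {
    while (pos < script.size()) {
        unsigned char op = script[pos++];
        if (op == 0x00) continue;                   // OP_0 pushes an empty vector
        if (op > 0x4b) return false;                // anything else is not a direct push here
        if (pos + op > script.size()) return false; // push runs past the end
        pos += op;                                  // skip the pushed data
    }
    return true;
}

bool IsStandardNullData(const std::vector<unsigned char>& script) {
    if (script.size() > MAX_NULLDATA_SCRIPT_SIZE) return false;
    if (script.empty() || script[0] != OP_RETURN_BYTE) return false;
    return IsPushOnlySketch(script, 1);             // any number of pushes is now fine
}

int main() {
    // OP_RETURN <4 bytes> <4 bytes>: two PUSHDATAs, previously rejected.
    std::vector<unsigned char> script = {0x6a, 0x04, 'a', 'b', 'c', 'd',
                                               0x04, 'e', 'f', 'g', 'h'};
    std::cout << (IsStandardNullData(script) ? "standard" : "non-standard") << '\n';
}
```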
Assume that when a wallet transaction has a valid block hash and transaction position
in it, the transaction is actually there. We're already trusting wallet data in a
much more fundamental way anyway.
To prevent backward compatibility issues, a new record is used for storing the
block locator in the wallet. Old wallets will see a wallet file synchronized up
to the genesis block, and rescan automatically.
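
A toy illustration of the compatibility mechanism; the record key names
are made up, and a std::map stands in for the wallet database:

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, std::string> walletRecords;

    // New software writes the chain locator under the new record key only.
    walletRecords["bestblock_new"] = "<serialized block locator>";

    // Old software still looks for the old record key; not finding it, it
    // treats the wallet as synced only to genesis and rescans automatically.
    if (walletRecords.find("bestblock_old") == walletRecords.end())
        std::cout << "no locator under old key -> rescan from genesis\n";
}
```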
Associate with each CTxMemPoolEntry the total size and fees of all
descendant mempool transactions. Sort the mempool by max(feerate of
entry, feerate of entry with descendants). Update these statistics
on the fly as transactions enter or leave the mempool.
Also add ancestor and descendant limiting, so that transactions can
be rejected if the number or size of unconfirmed ancestors exceeds
a target, or if adding a transaction would cause some other mempool
entry to have too many (or too large a set of) unconfirmed in-mempool
descendants.
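
A minimal sketch of the resulting sort key (field names are illustrative;
the real CTxMemPoolEntry maintains these totals incrementally as described
above):

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

struct MempoolEntry {
    int64_t fee;             // satoshis paid by this transaction
    int64_t size;            // its size in bytes
    int64_t feesWithDesc;    // fee plus fees of all in-mempool descendants
    int64_t sizeWithDesc;    // size plus sizes of all in-mempool descendants
};

// An entry sorts by the better of its own feerate and the feerate of the
// package formed with all of its in-mempool descendants.
double DescendantScore(const MempoolEntry& e) {
    double own = double(e.fee) / e.size;
    double pkg = double(e.feesWithDesc) / e.sizeWithDesc;
    return std::max(own, pkg);
}

int main() {
    // A low-fee parent whose child pays a high fee: the package feerate wins,
    // so the parent sorts better than its own feerate alone would suggest.
    MempoolEntry parent{1000, 250, 1000 + 50000, 250 + 500};
    std::cout << DescendantScore(parent) << " sat/byte\n"; // ~68 vs 4 on its own
}
```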
Fix https://github.com/bitcoin/bitcoin/issues/6558. In particular, stop
parsing JSON after the first object or array is finished. Check that no
other garbage follows, and fail the parser if it does.
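
The behavior being enforced, in a tiny standalone illustration (the
"parser" below just matches one balanced object or array for the sake of
the demonstration; it is not UniValue):

```cpp
#include <cctype>
#include <iostream>
#include <string>

bool ParseStrict(const std::string& s) {
    size_t i = 0;
    while (i < s.size() && isspace((unsigned char)s[i])) ++i;
    if (i == s.size() || (s[i] != '{' && s[i] != '[')) return false;
    char open = s[i], close = (open == '{') ? '}' : ']';
    int depth = 0;
    for (; i < s.size(); ++i) {
        if (s[i] == open) ++depth;
        else if (s[i] == close && --depth == 0) { ++i; break; }
    }
    if (depth != 0) return false;                      // unterminated value
    while (i < s.size() && isspace((unsigned char)s[i])) ++i;
    return i == s.size();                              // trailing garbage => fail
}

int main() {
    std::cout << ParseStrict("{\"a\": 1}")          << '\n'  // 1: single object
              << ParseStrict("{\"a\": 1} trailing") << '\n'  // 0: garbage after object
              << ParseStrict("[1, 2][3]")           << '\n'; // 0: second value not allowed
}
```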
These changes decode valid SIGHASH types on signatures in assembly (asm) representations of scriptSig scripts.
This squashed commit incorporates substantial helpful feedback from jtimon, laanwj, and sipa.
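
The decoding itself is small: the final byte of each signature push is the
SIGHASH type, which maps onto a suffix in the asm output. A hedged sketch
(the exact output formatting in Core may differ):

```cpp
#include <iostream>
#include <string>

// Map a SIGHASH byte to the suffix appended to the signature's hex in asm.
// Returns an empty string for undefined types, which are left undecoded.
std::string SighashSuffix(unsigned char type) {
    bool anyoneCanPay = (type & 0x80) != 0;   // SIGHASH_ANYONECANPAY flag
    unsigned char base = type & 0x7f;         // remaining bits select the base type
    std::string name;
    switch (base) {
        case 0x01: name = "ALL"; break;       // SIGHASH_ALL
        case 0x02: name = "NONE"; break;      // SIGHASH_NONE
        case 0x03: name = "SINGLE"; break;    // SIGHASH_SINGLE
        default:   return "";
    }
    if (anyoneCanPay) name += "|ANYONECANPAY";
    return "[" + name + "]";
}

int main() {
    std::cout << SighashSuffix(0x01) << '\n'   // [ALL]
              << SighashSuffix(0x83) << '\n'   // [SINGLE|ANYONECANPAY]
              << SighashSuffix(0x7e) << '\n';  // (empty: not a defined type)
}
```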
While CBloomFilter is usually used with an explicitly set nTweak,
CRollingBloomFilter is only used internally. Requiring every caller to
set nTweak is error-prone and redundant; better to have the class handle
that for you with a high-quality randomness source.
Additionally, when clearing the filter it makes sense to change nTweak
as well, to recover from a bad setting, e.g. due to insufficient
randomness at initialization. The clear() method is therefore replaced
by a reset() method that sets a new, random nTweak value.
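
Illustrative sketch of the interface change only (not Core's
CRollingBloomFilter; std::random_device stands in for the high-quality
randomness source mentioned above):

```cpp
#include <cstdint>
#include <iostream>
#include <random>

class RollingFilterSketch {
public:
    RollingFilterSketch() { reset(); }   // callers never pass nTweak themselves

    // reset() replaces clear(): wipe the contents *and* draw a fresh random
    // tweak, so a weak value chosen at construction time does not persist.
    void reset() {
        std::random_device rd;
        nTweak = rd();
        // ... clear the underlying filter data here ...
    }

    uint32_t Tweak() const { return nTweak; }

private:
    uint32_t nTweak = 0;  // per-filter salt mixed into the hash functions
};

int main() {
    RollingFilterSketch f;
    uint32_t before = f.Tweak();
    f.reset();                                    // re-randomizes the tweak
    std::cout << (before != f.Tweak()) << '\n';   // almost certainly prints 1
}
```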
As suggested by Greg Maxwell: a unit test to make sure a block
containing a double-spend doesn't pass validation if, when the block
is received, half of the double-spend is already in the memory pool
(so full-blown transaction validation is skipped).
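
The invariant the test pins down, as a standalone sketch (simplified
types; real block validation of course checks much more than this):

```cpp
#include <cstdint>
#include <iostream>
#include <set>
#include <string>
#include <utility>
#include <vector>

using OutPoint = std::pair<std::string, uint32_t>;   // (txid, output index)
struct Tx { std::vector<OutPoint> vin; };

// Within one block, no outpoint may be spent twice, regardless of whether
// one of the conflicting transactions was already accepted to the mempool.
bool BlockHasInternalDoubleSpend(const std::vector<Tx>& block) {
    std::set<OutPoint> spent;
    for (const Tx& tx : block)
        for (const OutPoint& prev : tx.vin)
            if (!spent.insert(prev).second)           // already spent in this block
                return true;
    return false;
}

int main() {
    Tx a{{{"coin1", 0}}};          // spends coin1:0
    Tx b{{{"coin1", 0}}};          // spends the same outpoint again
    std::vector<Tx> block{a, b};
    std::cout << (BlockHasInternalDoubleSpend(block) ? "reject block" : "ok") << '\n';
}
```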
`bitcoind -X -noX` ends up, unintuitively, with `X` set.
(for all boolean options X)
This is due to the odd two-pass processing of arguments. This patch
removes the oddity and simplifies the code at the same time.
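
The intended single-pass behavior, sketched standalone (only bare boolean
flags are handled; "-opt=value" parsing is omitted):

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

std::map<std::string, std::string> ParseArgs(const std::vector<std::string>& argv) {
    std::map<std::string, std::string> mapArgs;
    for (const std::string& arg : argv) {
        if (arg.rfind("-no", 0) == 0)
            mapArgs["-" + arg.substr(3)] = "0";   // "-noX" is interpreted as X=0
        else if (!arg.empty() && arg[0] == '-')
            mapArgs[arg] = "1";                    // bare "-X" turns X on
    }
    return mapArgs;                                // last occurrence wins
}

int main() {
    auto args = ParseArgs({"-X", "-noX"});
    std::cout << "X=" << args["-X"] << '\n';       // prints X=0, not X=1
}
```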