SetString seems to be passing the length of the wrong variable to
memory_cleanse, resulting in the last byte of the temporary buffer not being
securely erased.
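A minimal sketch of the pattern, with illustrative names (SetFromDecoded, vchTemp, vchData, and nVersionBytes are stand-ins, not the actual base58 code): the temporary decode buffer has to be cleansed using its own size, not the size of the shorter member buffer copied out of it.

    // Names here are illustrative; this is not the real base58 code.
    #include <cstddef>
    #include <cstring>
    #include <vector>

    // Stand-in for memory_cleanse(); the real helper also defeats dead-store
    // elimination by the compiler.
    static void memory_cleanse(void* ptr, std::size_t len) { std::memset(ptr, 0, len); }

    void SetFromDecoded(std::vector<unsigned char>& vchTemp,  // decoded version byte(s) + secret payload
                        std::vector<unsigned char>& vchData,  // payload without the version byte(s)
                        std::size_t nVersionBytes)
    {
        vchData.assign(vchTemp.begin() + nVersionBytes, vchTemp.end());
        // Bug pattern: memory_cleanse(vchTemp.data(), vchData.size()) leaves the
        // last nVersionBytes of the temporary buffer un-erased.
        memory_cleanse(vchTemp.data(), vchTemp.size());  // cleanse with the temporary's own size
    }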
This avoids sending more pointless INVs around updates, and
prevents using filter updates to timetag transactions.
Also adds locking for fRelayTxes.
By eliminating queued entries from the mempool response and responding only at
trickle time, the mempool no longer leaks transaction arrival order
information (as the mempool itself is also sorted), at least no more than
relay itself leaks it.
Previously we would assert that if every block in vBlockHashesToAnnounce is in
chainActive, then the blocks to be announced must connect. However, there are
edge cases where this assumption could be violated (e.g. using invalidateblock /
reconsiderblock), so just check for this case and revert to inv-announcement
instead.
Rather than allowing CNetAddr/CService/CSubNet to launch DNS queries, require
that addresses are already resolved.
This greatly simplifies async resolve logic, and makes it harder to
accidentally leak DNS queries.
Note: Some seeds aren't actually returning an IP for their name entries, so
they're being added to addrman with a source of [::].
This commit shouldn't change that behavior, for better or worse.
Previously Bitcoin would send 1/4 of transactions out to all peers
instantly. This caused high overhead because it made >80% of
INVs size 1. It also harmed privacy, because it limited the
amount of source obscurity a transaction could receive.
These randomized broadcasts also disobeyed transaction dependencies
and required use of the orphan pool. Because the orphan pool is
so small, this led to poor propagation for dependent transactions.
When the bypass wasn't in effect, transactions were sent in the
order they were received. This avoided creating orphans but
undermined privacy fairly significantly.
This commit:
Eliminates the bypass. The bypass is replaced by halving the
average delay for outbound peers.
Sorts candidate transactions for INV by their topological
depth, then by their feerate (then hash), removing the
information leakage and providing priority service to
higher-fee transactions (see the sketch after this list).
Limits the number of transactions sent in a single INV to
7 tx/sec (and twice that for outbound); this limits the
harm of low-fee transaction floods and gives faster relay
service to higher-fee transactions. The 7 sounds lower
than it really is because received advertisements need
not be sent, and because the aggregate rate is multiplied
by the number of peers.
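As a rough illustration of the sorting step above (a toy sketch, not the actual net-processing code; the struct, fields, and units are made up): candidates are ordered by topological depth first, then by descending feerate, with the hash as a deterministic tie-breaker.

    // Toy sketch of the announcement ordering; not the real relay code.
    #include <algorithm>
    #include <string>
    #include <vector>

    struct TxCandidate {
        std::string hash;   // stand-in for a txid
        int depth;          // topological depth: number of unconfirmed ancestors
        double feerate;     // fee per kB, illustrative units
    };

    void SortForInv(std::vector<TxCandidate>& txs)
    {
        std::sort(txs.begin(), txs.end(), [](const TxCandidate& a, const TxCandidate& b) {
            if (a.depth != b.depth) return a.depth < b.depth;          // parents before children
            if (a.feerate != b.feerate) return a.feerate > b.feerate;  // higher feerate first
            return a.hash < b.hash;                                    // deterministic tie-break
        });
    }

    int main()
    {
        std::vector<TxCandidate> txs{
            {"c", 1, 50.0},  // child, decent feerate
            {"a", 0, 10.0},  // parent, low feerate
            {"b", 0, 99.0},  // independent, high feerate
        };
        SortForInv(txs);     // resulting order: "b", "a", "c"
    }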
leveldb's buildsystem causes us a few problems:
- breaks out-of-tree builds
- forces flags used for some tools
- limits cross builds
Rather than continuing to add wrappers around it, simply integrate it into our
build.
Remove the mistaken assumption that GetKey returning false signifies
an internal database issue. It will return false when the key cannot
be deserialized into the expected (char, uint256) pair, which indicates
that the cursor has reached a different kind of key.
Fixes bug #7890 introduced in #7756.
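A toy model of the corrected interpretation (not the real txdb cursor; the record layout and names are illustrative): a false return from GetKey marks a record of a different kind and is simply skipped rather than treated as a database error.

    // Toy cursor over mixed record types; names and layout are made up.
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <utility>
    #include <vector>

    struct Record { char type; std::uint64_t id; };  // stand-in for serialized DB keys

    struct ToyCursor {
        const std::vector<Record>& rows;
        std::size_t pos;
        bool Valid() const { return pos < rows.size(); }
        void Next() { ++pos; }
        // Returns false when the current key is not a coin ('c') entry.
        bool GetKey(std::pair<char, std::uint64_t>& key) const {
            if (rows[pos].type != 'c') return false;
            key = {rows[pos].type, rows[pos].id};
            return true;
        }
    };

    int main()
    {
        std::vector<Record> rows{{'c', 1}, {'B', 0}, {'c', 2}};  // coin entries plus another key kind
        ToyCursor cur{rows, 0};
        while (cur.Valid()) {
            std::pair<char, std::uint64_t> key;
            if (cur.GetKey(key)) {
                std::printf("coin entry %llu\n", (unsigned long long)key.second);
            }
            // A false return is expected for other key kinds: keep iterating
            // instead of reporting a database error.
            cur.Next();
        }
    }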
Without the newline I see "bein" where the two lines are concatenated:
Note that all inputs selected must be of standard form and P2SH scripts must *bein* the wallet using importaddress or addmultisigaddress (to calculate fees).
swap was using an incorrect condition to determine when to apply an optimization
(not swapping the full direct[] when swapping two indirect prevectors).
Rather than correct the optimization I'm removing it for simplicity. Removing
this optimization minutely improves performance in the typical (currently only)
usage of member swap(), which is swapping with a freshly value-initialized
object.
Fixes a bug in which pop_back did not call the deleted item's destructor.
Using the most general erase() implementation to implement all the others
prevents similar bugs because the coupling between deallocation and destructor
invocation only needs to be maintained in one place.
Also reduces duplication of complex memmove logic.
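A toy container sketch of that pattern (not the real prevector, which additionally has a direct/indirect representation): every removal path, including pop_back, is routed through the one general erase(first, last), so destructor invocation and size bookkeeping live in a single place.

    // Toy fixed-capacity container; illustrative only.
    #include <cstddef>
    #include <new>
    #include <string>
    #include <utility>

    template <typename T, std::size_t N>
    class TinyVec {
        alignas(T) unsigned char storage[N * sizeof(T)];
        std::size_t count = 0;
        T* item(std::size_t i) { return reinterpret_cast<T*>(storage) + i; }

    public:
        ~TinyVec() { erase(begin(), end()); }
        T* begin() { return item(0); }
        T* end() { return item(count); }
        void push_back(const T& v) { new (item(count)) T(v); ++count; }  // no capacity check; it's a sketch

        // The one place where destructors run and the size shrinks.
        T* erase(T* first, T* last) {
            for (T* p = first; p != last; ++p) p->~T();
            T* out = first;
            for (T* p = last; p != end(); ++p, ++out) { new (out) T(std::move(*p)); p->~T(); }
            count -= static_cast<std::size_t>(last - first);
            return first;
        }

        // Everything else is expressed via erase(), as described above.
        void pop_back() { erase(end() - 1, end()); }
        void clear()    { erase(begin(), end()); }
    };

    int main()
    {
        TinyVec<std::string, 4> v;
        v.push_back("a");
        v.push_back("b");
        v.pop_back();  // "b"'s destructor runs inside erase()
    }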
The key (the transaction id for the following outputs) should be serialized
to the HashWriter, but was not.
This is a problem because it means different transactions in the same
position with the same outputs will potentially result in the same hash.
Fixes the primary concern of #7758.
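A toy illustration of why serializing the key matters, using a trivial FNV-1a accumulator in place of the real CHashWriter (ToyHashWriter and HashUtxoEntry are made-up names): omitting the txid lets two different transactions with identical outputs collapse to the same digest.

    // Toy hash writer; not the real serialization or hashing code.
    #include <cstdint>
    #include <cstdio>
    #include <string>

    struct ToyHashWriter {
        std::uint64_t h = 1469598103934665603ULL;  // FNV-1a offset basis
        void write(const std::string& bytes) {
            for (unsigned char c : bytes) { h ^= c; h *= 1099511628211ULL; }
        }
    };

    std::uint64_t HashUtxoEntry(const std::string& txid, const std::string& outputs,
                                bool include_key)
    {
        ToyHashWriter ss;
        if (include_key) ss.write(txid);  // the fix: serialize the key first
        ss.write(outputs);                // then the outputs themselves
        return ss.h;
    }

    int main()
    {
        const std::string outputs = "same outputs, same position";
        // Without the key, two distinct txids with identical outputs collide:
        std::printf("%d\n", HashUtxoEntry("txid-A", outputs, false) ==
                            HashUtxoEntry("txid-B", outputs, false));  // prints 1
        // With the key serialized, the digests differ:
        std::printf("%d\n", HashUtxoEntry("txid-A", outputs, true) ==
                            HashUtxoEntry("txid-B", outputs, true));   // prints 0
    }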
Break the circular dependency between main and txdb by:
- Moving `CBlockFileInfo` from `main.h` to `chain.h`. I think this makes
sense, as the other block-file stuff is there too.
- Moving `CDiskTxPos` from `main.h` to `txdb.h`. This type seems
specific to txdb.
- Passing a functor `insertBlockIndex` to `LoadBlockIndexGuts`. This leaves
  it up to the caller to decide how block indices are inserted (sketched below).
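A toy sketch of that functor-injection idea (ToyBlockIndex and LoadGuts are illustrative names; the real LoadBlockIndexGuts signature differs): the loading routine no longer knows how or where block indices are inserted, which is what breaks the circular dependency.

    // Illustrative only; not the real txdb/main interfaces.
    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    struct ToyBlockIndex { std::string hash; };

    void LoadGuts(const std::vector<std::string>& keys,
                  const std::function<ToyBlockIndex*(const std::string&)>& insertBlockIndex)
    {
        for (const std::string& h : keys) {
            ToyBlockIndex* idx = insertBlockIndex(h);  // insertion policy chosen by the caller
            (void)idx;  // a real loader would go on to fill in on-disk fields
        }
    }

    int main()
    {
        std::map<std::string, ToyBlockIndex> index;  // the caller owns the index container
        LoadGuts({"aa", "bb"}, [&](const std::string& h) {
            return &index.emplace(h, ToyBlockIndex{h}).first->second;
        });
    }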
Byte counts for SHA256, SHA512, SHA1 and RIPEMD160 must be 64 bits.
`size_t` has a different size per platform, causing divergent results
when hashing more than 4GB of data.
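A small illustration of the divergence (made-up values, not the hasher code): with a 32-bit size_t the running byte count wraps after 4 GiB, and since SHA-2 finalization appends the total message length, the resulting digest differs from a 64-bit build.

    // Demonstrates only the counter wrap-around, not the hashing itself.
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        const std::uint64_t hashed  = (1ULL << 32) + 5;                     // a bit over 4 GiB of input
        const std::uint32_t wrapped = static_cast<std::uint32_t>(hashed);   // what a 32-bit size_t would hold
        std::printf("64-bit byte count: %llu\n", (unsigned long long)hashed);  // 4294967301
        std::printf("32-bit byte count: %u\n", wrapped);                       // 5 (wrapped around)
    }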
Add a method Cursor() to CCoinsView that returns a cursor which can be
used to iterate over the whole UTXO set.
- rpc: Change gettxoutsetinfo to use new Cursor method
- txdb: Remove the GetStats method, now that it is implemented in
  terms of Cursor.
We can't change "softforks", but it seems far more logical to use tags
in an object rather than using an "id" field in an array.
For example, to get the csv status before this change, you had to iterate the
array to find the entry whose 'id' field equals "csv":
jq '.bip9_softforks | map(select(.id == "csv"))[] | .status'
Now:
jq '.bip9_softforks.csv.status'
There is no issue with fork names being incompatible with JSON tags,
since we're selecting them ourselves.
Previously we used the CInv that would be sent to the peer announcing the
transaction as the key, but using the txid instead allows us to decouple the
p2p layer from the application logic (which relies on this map to avoid
duplicate tx requests).