Instead of building a full copy of a CTransaction being signed, and
then modifying bits and pieces until it fits the form necessary
for computing the signature hash, use a wrapper serializer that
only serializes the necessary bits on the fly.
This makes it easier to see which data is actually being hashed,
reduces load on the heap, and also marginally improves performance
(around 3-4 us/sigcheck here). The performance improvement is much
larger for large transactions, though.
The old implementation of SignatureHash is moved into a unit test,
which checks that the old and new algorithms produce the same value
for randomly constructed transactions.
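A minimal sketch of such a wrapper, assuming Bitcoin-style serialization
helpers; this illustrates the idea rather than reproducing the exact class
from the change:

    // Sketch only: the ::Serialize helpers and surrounding types are
    // assumed to follow Bitcoin's conventions.
    class CTransactionSignatureSerializer {
        const CTransaction& txTo;   // transaction being signed
        const CScript& scriptCode;  // script standing in for the signed input's scriptSig
        const unsigned int nIn;     // index of the input being signed
        const int nHashType;        // SIGHASH_* flags

    public:
        CTransactionSignatureSerializer(const CTransaction& txToIn,
                                        const CScript& scriptCodeIn,
                                        unsigned int nInIn, int nHashTypeIn)
            : txTo(txToIn), scriptCode(scriptCodeIn), nIn(nInIn), nHashType(nHashTypeIn) {}

        // Serialize a single txin directly into the hash stream, substituting
        // the pieces that the signature-hash form requires -- no modified
        // copy of the whole transaction is ever built.
        template <typename S>
        void SerializeInput(S& s, unsigned int nInput) const {
            ::Serialize(s, txTo.vin[nInput].prevout);
            if (nInput != nIn)
                ::Serialize(s, CScript());   // other inputs get a blank scriptSig
            else
                ::Serialize(s, scriptCode);  // the signed input carries scriptCode
            ::Serialize(s, txTo.vin[nInput].nSequence);
        }
        // Serialize() and SerializeOutput() follow the same pattern for the
        // top-level fields and vout, honoring SIGHASH_NONE/SINGLE/ANYONECANPAY.
    };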
If BDB_CPPFLAGS ends up containing only "-I" with no path, the next
argument sent to the preprocessor is treated as the include path. There
are two fixes here:
1. Also check CPPFLAGS, since a user might have passed a path manually.
2. Ensure the value is not empty before setting BDB_CPPFLAGS to "-I value".
This changes the priority calculation to not count the size of per-txin
data, including up to 110 bytes of scriptSig, so that transactions which
sweep up extra UTXO don't lose priority relative to ones that don't.
I'd toyed with some other variations, but it seems like any formulation
which results in an incentive stronger than making them not count will
sometimes create incentives to add extra outputs so that you have
extra inputs to consume later. The maximum credit is limited so that
users don't lose the disincentive to stuff random data into their
transactions; the limit of 110 bytes is based on the size of a P2SH
redemption with a compressed public key.
This shouldn't need a staged deployment because priority is not
used as a relay criterion, only a mining criterion.
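A sketch of the adjusted size computation under those rules, assuming
Bitcoin-style types; the constants come straight from the description
above:

    #include <algorithm>

    // Per input, subtract the fixed data (outpoint + sequence + scriptSig
    // length byte = 41 bytes) plus up to 110 bytes of actual scriptSig.
    unsigned int CalculatePrioritySize(const CTransaction& tx)
    {
        unsigned int nTxSize = ::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION);
        for (const CTxIn& txin : tx.vin) {
            unsigned int offset = 41U + std::min(110U, (unsigned int)txin.scriptSig.size());
            if (nTxSize > offset)
                nTxSize -= offset;  // sweeping extra UTXO no longer costs priority
        }
        return nTxSize;
    }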
This change moves test data into the binaries rather than reading it
from disk at runtime.
Advantages:
- Tests become distributable
- Cross-compile friendly. Build on one machine and execute in an arbitrary
location on another.
- Easier testing for backports. Users can verify that tests pass without having
to track down corresponding test data.
- More trustworthy test results and easier quality assurance as tests make
fewer assumptions about their environment.
- Tests could theoretically run at client/daemon startup and exit on failure.
Disadvantages:
- Requires a 'hexdump' build dependency. This is a standard BSD tool that
should be available everywhere. It is likely already installed on all
build machines.
- Tests can no longer be fudged after build by altering test-data.
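For illustration, a hypothetical shape of a header generated from a
test-data file at build time (e.g. via hexdump) and its use in a test;
the array name and bytes are made up:

    namespace json_tests {
    static const unsigned char script_valid[] = {
        0x5b, 0x0a, 0x20, 0x5b, 0x22, /* ...remaining bytes of the file... */
    };
    } // namespace json_tests

    #include <string>

    // Tests read their data from memory instead of the disk:
    std::string LoadTestData()
    {
        return std::string(reinterpret_cast<const char*>(json_tests::script_valid),
                           sizeof(json_tests::script_valid));
    }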
libleveldb.a and libmemenv.a should be able to build in parallel, but in
practice calling the leveldb makefile ends up rewriting build_config.mk. If
one target tries to build while the other is halfway through writing the
.mk, the build ends up in an undefined state.
Fix that by making one depend on the other. This also changes the variables
to be passed as parameters rather than via the environment, and combines the
targets into a single rule to avoid needless duplication.
As we'd previously learned, OSX's fsync is a data-eating lie.
Since 0.8.4 we're still getting some reports of disk corruption on
OSX, but now all of it looks like the block files have gotten out of
sync with the database. It turns out that we were still using fsync()
on the block files, so this isn't surprising.
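A sketch of the kind of fix this implies, assuming a stdio FILE* and
POSIX fcntl(): on OSX, fsync() only pushes data to the drive's write
cache, while the F_FULLFSYNC fcntl asks the drive to flush to durable
storage.

    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>

    bool FileCommit(FILE* file)
    {
        if (fflush(file) != 0)        // flush stdio buffers to the kernel
            return false;
    #ifdef __APPLE__
        // fsync() is not enough on OSX; F_FULLFSYNC forces the drive to
        // write through its own cache.
        if (fcntl(fileno(file), F_FULLFSYNC, 0) == -1)
            return false;
    #else
        if (fsync(fileno(file)) != 0) // sufficient on other platforms
            return false;
    #endif
        return true;
    }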
- ensure message boxes are shown in the center of our main window, not
centered on the user's desktop
- always prefer user-supplied titles for message boxes over the function
defaults (fixes a bug where transaction info messages did not indicate
whether a transaction was incoming or outgoing)
- rename URL to URI in paymentserver where appropriate
- add some missing Qt coding conventions in paymentserver
- change QSpinBox to QLineEdit as the base for BitcoinAmountField in .ui
files (as this is the result when converting the BAF back into its base)
- replace some c_str() calls with QString::fromStdString()
- remove several newlines
- remove unneeded spaces
- fix indentation
This also makes negative transaction versions non-standard.
This avoids an issue triggered in block 256818 where transactions with
negative version numbers were incorrectly serialized into the UTXO set.
On restart, nodes detect the inconsistency and refuse to start so long as
a block with these transactions is inside the self-consistency check
window, logging "coin database inconsistencies found". The software
recommends reindexing, but reindexing does not correct the problem.
This should be fixed by changing the chainstate serialization, but
working around it seems harmless for now because the version is not
currently used by any network rule.
A patch-free workaround is to start with -checklevel=2, which skips
the consistency checks, but the IsStandard change is important for
miners in order to protect unpatched nodes.
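A sketch of the standardness tightening, assuming Bitcoin-style types;
CURRENT_VERSION is the highest transaction version the node itself
creates:

    #include <string>

    bool IsStandardVersion(const CTransaction& tx, std::string& reason)
    {
        // nVersion is serialized as a signed int, so anything below 1
        // (including negative values) is rejected as non-standard.
        if (tx.nVersion > CTransaction::CURRENT_VERSION || tx.nVersion < 1) {
            reason = "version";
            return false;
        }
        return true;
    }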
I've seen users confused multiple times, thinking they
should be using -tor to set their Tor proxy and then
finding to their horror that they were still connecting
to the IPv4 internet.
Even Jeff guesses wrong about what the knob does, so
I think we should rename it. This leaves the old
knob working; we can pull it out completely in a
later release.
- prepend "Bitcoin-Qt" in front of debug.log entries, which come from Qt
- move DebugMessageHandler installation upwards to the event handler
installation, which fits much better
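A sketch using the Qt 4 API of that era; LogPrintf stands in for the
core logging call and is an assumption here:

    #include <QApplication>
    #include <QtGlobal>

    // Route Qt's own warnings/debug output into debug.log, tagged so
    // they can be told apart from core log lines.
    static void DebugMessageHandler(QtMsgType type, const char* msg)
    {
        Q_UNUSED(type);
        LogPrintf("Bitcoin-Qt: %s\n", msg);
    }

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);
        qInstallMsgHandler(DebugMessageHandler); // next to the event handlers
        // ...
        return app.exec();
    }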
Correctly use the purpose of addresses that are added after the client
starts. Addresses with purpose "refund" and "change" should not be
visible in the GUI; this is now handled correctly.
- extend PaymentServer with setOptionsModel() and rework initNetManager()
to make use of it
- fix all other places in the code to use the display unit from options
instead of a hard-coded unit
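For illustration, how an amount might be formatted with the configured
unit; getDisplayUnit() and BitcoinUnits::formatWithUnit() follow
Bitcoin-Qt conventions, but the exact signatures are assumed:

    QString FormatAmount(OptionsModel* optionsModel, qint64 amount)
    {
        int unit = optionsModel->getDisplayUnit();          // e.g. BTC, mBTC, uBTC
        return BitcoinUnits::formatWithUnit(unit, amount);  // "0.001 BTC" etc.
    }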
There have been several incidents where mainnet experimentation with
raw transactions resulted in insane fees. This is hard to prevent
in the raw transaction API because the inputs may not be known.
Since sending doesn't work if the inputs aren't known, we can catch
it there.
This rejects fees greater than 10000 * nMinRelayTxFee, which is 1 BTC
with the defaults, and can be overridden with a bool at the RPC.
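A sketch of the guard, assuming amounts in satoshis; the override flag
corresponds to the boolean exposed at the RPC:

    #include <cstdint>

    // With the default nMinRelayTxFee this threshold works out to 1 BTC.
    bool RejectAbsurdFee(int64_t nFees, int64_t nMinRelayTxFee, bool fOverrideFees)
    {
        const int64_t nMaxFee = nMinRelayTxFee * 10000;
        return !fOverrideFees && nFees > nMaxFee;   // true = refuse to send
    }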
We're not seeing large reorgs that would justify waiting long past
the rule-required maturity, and the extra three hours is just a
nuisance. Take one more block to at least give the 100th block
time to propagate.
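A sketch of the resulting wallet check, assuming Bitcoin's
COINBASE_MATURITY of 100:

    #include <algorithm>

    static const int COINBASE_MATURITY = 100;

    // Wait one block past the rule-required maturity instead of twenty,
    // so the 100th confirmation still has time to propagate.
    int GetBlocksToMaturity(int nDepthInMainChain)
    {
        return std::max(0, (COINBASE_MATURITY + 1) - nDepthInMainChain);
    }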