1) "optimistic write": Push each message to kernel socket buffer immediately.
2) If there is write data at select time, that implies send() blocked
during optimistic write. Drain write queue, before receiving
any more messages.
This avoids needlessly queueing received data, if the remote peer
is not themselves receiving data.
Result: write buffer (and thus memory usage) is kept small, DoS
potential is slightly lower, and TCP flow control signalling is
properly utilized.
The kernel will queue data into the socket buffer, then signal the
remote peer to stop sending data, until we resume reading again.
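A minimal sketch of this send path, assuming a simplified Peer struct and
illustrative names (the real code lives in CNode and the socket thread):

    #include <sys/socket.h>
    #include <vector>

    struct Peer {
        int hSocket;
        std::vector<char> vSend;   // bytes the optimistic write could not push
    };

    // "Optimistic write": hand the message to the kernel immediately; only
    // what does not fit in the socket buffer is queued in vSend.
    void SendOptimistic(Peer& peer, const char* data, size_t len)
    {
        ssize_t nSent = send(peer.hSocket, data, len, 0);
        size_t nOff = (nSent > 0) ? (size_t)nSent : 0;
        if (nOff < len)
            peer.vSend.insert(peer.vSend.end(), data + nOff, data + len);
    }

    // In the select() loop: a non-empty vSend means the optimistic write
    // blocked, so drain it completely before calling recv() on this peer
    // again. Leaving the socket unread lets TCP flow control tell the
    // remote side to slow down instead of us buffering in user space.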
Replaces the CNode::vRecv buffer with a vector of CNetMessage objects
(sketched after the overview below). This simplifies ProcessMessages()
and eliminates several redundant data copies.
Overview:
* socket thread now parses incoming message datastream into
header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
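Rough shape of the CNetMessage container referred to above (field names are
approximate; see net.h for the real definition):

    class CNetMessage
    {
    public:
        CDataStream hdrbuf;    // partially received header bytes
        CMessageHeader hdr;    // deserialized header, valid once in_data is true
        bool in_data;          // header fully parsed, now reading payload
        unsigned int nDataPos; // payload bytes received so far
        CDataStream vRecv;     // payload, resized to hdr.nMessageSize

        bool complete() const
        {
            return in_data && nDataPos == hdr.nMessageSize;
        }
    };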
Message parsing is made more strict:
* The socket is disconnected if a message is larger than MAX_SIZE,
  or if CMessageHeader deserialization fails (the latter is probably
  impossible). Previously, the code would simply eat garbage data all
  day long.
* The socket is disconnected if we fail to find pchMessageStart.
  We do not search through garbage to find pchMessageStart; each
  message must begin precisely after the last message ends.
ProcessMessages() always processes a complete message, and is more
efficient (see the sketch below):
* the buffer is sized precisely, using CDataStream::resize(), rather
  than grown progressively in 64K chunks; this is more efficient for
  large messages like "block"
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)
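An illustrative sketch of the precisely-sized receive step (not the exact
committed code): the payload buffer is resized straight to the size announced
in the header, so there is no 64K-chunk growth and no later whole-buffer copy;
a bad pchMessageStart or an nMessageSize above MAX_SIZE disconnects the peer
instead of resyncing.

    #include <algorithm>
    #include <cstring>

    int readData(CNetMessage& msg, const char* pch, unsigned int nBytes)
    {
        unsigned int nRemaining = msg.hdr.nMessageSize - msg.nDataPos;
        unsigned int nCopy = std::min(nRemaining, nBytes);

        if (msg.vRecv.size() < msg.hdr.nMessageSize)
            msg.vRecv.resize(msg.hdr.nMessageSize);   // one exact resize

        memcpy(&msg.vRecv[msg.nDataPos], pch, nCopy);
        msg.nDataPos += nCopy;
        return nCopy;
    }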
The step for the 'up' and 'down' buttons is 0.001. With BTC and mBTC this is
fine, but 0.001 uBTC is lower than the minimal value (one satoshi), so the
user has to press the 'up' button 10 times to get 0.01 uBTC.
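For reference: 1 uBTC = 100 satoshi, so a 0.001 uBTC step is 0.1 satoshi,
below the smallest representable amount; only at 0.01 uBTC (ten steps) does
the value reach exactly one satoshi.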
- allows you to select an address directly from the addressbook and choose
  "send coins" from the context menu, which sends you to the sendcoins tab
  and fills in the selected address
- try to enforce the same style in all Qt-related files
- remove unneeded includes from the files
- add missing Q_OBJECT, QT_BEGIN_NAMESPACE / QT_END_NAMESPACE (see the
  header sketch below)
- prepares for a pull-req to include Qt5 compatibility
- remove an unneeded MODAL flag, as MSG_ERROR sets MODAL
- re-order an if-clause in main to have bool checks before a function call
- fix some log messages that used wrong function names
- make a log message use a correct ellipsis
- remove some unneeded spaces, brackets and line-breaks
- fix style for adding files in the Qt project
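For reference, the Q_OBJECT / QT_BEGIN_NAMESPACE convention mentioned above
looks roughly like this in a header (the class and members here are
hypothetical, purely for illustration):

    #include <QDialog>

    QT_BEGIN_NAMESPACE
    class QLabel;              // forward-declare Qt classes inside the namespace macros
    QT_END_NAMESPACE

    class ExampleDialog : public QDialog
    {
        Q_OBJECT               // required for any class that declares signals or slots

    public:
        explicit ExampleDialog(QWidget *parent = 0);

    signals:
        void addressSelected(const QString &address);

    private:
        QLabel *label;
    };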
This fixes test_bitcoin failures on openbsd reported by dhill on IRC.
On some systems rand() is a simple LCG over 2^31, so its least
significant bit simply alternates. ApproximateBestSubset was only using
the least significant bit, so every run of the iterative solver would be
the same for some inputs, resulting in some pretty dumb decisions.
Using something other than the least significant bit would paper over
the issue but who knows what other way a system's rand() might get us
here. Instead we use an internal RNG with a period of something like
2^60 which is well behaved. This also makes it possible to make the
selection deterministic for the tests, if we wanted to implement that.
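For illustration, the kind of small internal generator described is a pair
of 16-bit multiply-with-carry generators combined (a sketch of the approach,
not necessarily the exact code that landed):

    #include <stdint.h>

    // Seeds are plain variables so tests can set them for deterministic runs.
    static uint32_t insecure_rand_Rz = 11;
    static uint32_t insecure_rand_Rw = 11;

    // Two multiply-with-carry generators combined; the pair has a period of
    // roughly 2^60 and, unlike a power-of-two LCG, its low bit does not
    // simply alternate.
    static inline uint32_t insecure_rand(void)
    {
        insecure_rand_Rz = 36969 * (insecure_rand_Rz & 65535) + (insecure_rand_Rz >> 16);
        insecure_rand_Rw = 18000 * (insecure_rand_Rw & 65535) + (insecure_rand_Rw >> 16);
        return (insecure_rand_Rw << 16) + insecure_rand_Rz;
    }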
Switch to using Qt's QLocalServer/QLocalSocket to handle bitcoin
payment links (bitcoin:... URIs)
Reason for switch: the boost::interprocess mechanism seemed flaky and
didn't mesh as well with "The Qt Way".
qtipcserver.cpp/h is replaced by paymentserver.cpp/h
Click-to-pay now also works on OSX, with a custom Info.plist
that registers Bitcoin-Qt as a handler for bitcoin: URLs and
an event listener on the main QApplication that handles
QFileOpenEvents (Qt translates 'url clicked' AppleEvents into
QFileOpenEvents automagically).
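A rough illustration of the QLocalServer/QLocalSocket handoff (the server
name, slot and "uri" variable are made up for the example):

    #include <QLocalServer>
    #include <QLocalSocket>

    // First instance: own a named local socket and read URIs that later
    // instances forward to us.
    QLocalServer* server = new QLocalServer(this);
    if (server->listen("BitcoinQtURI"))
        connect(server, SIGNAL(newConnection()), this, SLOT(handleURIConnection()));

    // Second instance (user clicked a bitcoin: link): forward the URI to
    // the running instance and exit.
    QLocalSocket socket;
    socket.connectToServer("BitcoinQtURI", QIODevice::WriteOnly);
    if (socket.waitForConnected(1000)) {
        socket.write(uri.toUtf8());   // uri: the bitcoin:... string from the command line
        socket.flush();
        socket.waitForBytesWritten(1000);
        socket.disconnectFromServer();
    }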
Extremely large transactions with lots of inputs can cost the network
almost as much to process as they cost the sender in fees.
We would never create transactions larger than 100K; this change makes
transactions larger than 100K non-standard, so they are not relayed/mined
by default. This is most important for miners that might create blocks
larger than 250K, who could be vulnerable to a
make-your-blocks-so-expensive-to-verify-they-get-orphaned attack.
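A minimal sketch of the size check this adds to the standardness rules (the
constant name and serialization call here are assumptions, not necessarily
the committed code):

    static const unsigned int MAX_STANDARD_TX_SIZE = 100000; // 100K, per above

    bool IsStandardSize(const CTransaction& tx)
    {
        // Serialized size as it would appear on the wire; anything larger is
        // treated as non-standard and not relayed or mined by default.
        unsigned int sz = tx.GetSerializeSize(SER_NETWORK, PROTOCOL_VERSION);
        return sz < MAX_STANDARD_TX_SIZE;
    }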