This will result in re-requesting invs if we are under heavy inv
load; however, as long as we receive no more than 16,000 invs in two
minutes, this should have no effect on runtime behavior.
- the send coins context menu entry was no longer working, because
a non-current version of #2220 was merged onto current master
- also removes some unneeded spaces and adds a comment to
WalletModel::getNumTransactions()
Tab labels don't fit on one line in Spanish/German/Russian when they have
two words. The wallet has limited functionality: it can send and receive
coins. So we can safely rename "Send coins" to "Send" and "Receive coins"
to "Receive". The address book is just stored addresses.
There is per-message-processed send buffer overflow protection,
where processing is halted when the send buffer is larger than the
allowed maximum.
This protection does not apply to individual items, however, and
getdata has the potential to cause large amounts of data to be
sent. If several hundred blocks are requested in one getdata,
the send buffer can easily grow 50 megabytes above the send buffer
limit.
This commit breaks up the processing of getdata requests, remembering
them inside a CNode when too many are requested at once.
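A minimal sketch of the idea, with stand-in types; the member and helper names
used here (vRecvGetData, ProcessGetData, the buffer limit) are illustrative
assumptions for the sketch, not necessarily the exact ones in the code:

    #include <deque>
    #include <vector>

    // Illustrative stand-ins for the real types.
    struct CInv { int type; /* hash etc. */ };
    struct CNode {
        std::deque<CInv> vRecvGetData;   // getdata items not yet served
        size_t nSendSize = 0;            // current send buffer size
    };

    static const size_t SEND_BUFFER_LIMIT = 10 * 1000 * 1000; // assumed limit

    // On receiving "getdata", just remember the requested items.
    void HandleGetData(CNode& node, const std::vector<CInv>& vInv)
    {
        node.vRecvGetData.insert(node.vRecvGetData.end(), vInv.begin(), vInv.end());
    }

    // Serve queued items, stopping as soon as the send buffer is full;
    // the remainder stays in the deque for a later pass.
    void ProcessGetData(CNode& node)
    {
        while (!node.vRecvGetData.empty()) {
            if (node.nSendSize >= SEND_BUFFER_LIMIT)
                break;  // resume later instead of overshooting the limit
            // ... push the block/tx for node.vRecvGetData.front() here (elided) ...
            node.vRecvGetData.pop_front();
        }
    }

This keeps a per-item check against the send buffer limit, so a single huge
getdata can no longer push the buffer tens of megabytes past the maximum.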
- this should prevent GUI issues on Mac that were observed before (disappearing
GUI - see #1522)
- the patch ensures that createTrayIconMenu() is always called on Mac to
process and use our MacDockIconHandler
* Change CNode::vRecvMsg to be a deque instead of a vector (less copying)
* Make sure to acquire cs_vRecvMsg in CNode::CloseSocketDisconnect (as it
may be called without that lock).
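A minimal sketch of the locking pattern behind the second point, using
std::mutex / std::lock_guard as stand-ins for the codebase's own lock
primitives; members other than the ones named above are illustrative:

    #include <deque>
    #include <mutex>

    struct CNetMessage { /* header + payload */ };

    class CNode {
    public:
        std::mutex cs_vRecvMsg;             // guards vRecvMsg
        std::deque<CNetMessage> vRecvMsg;   // received, not-yet-processed messages
        int hSocket = -1;
        bool fDisconnect = false;

        // May be called from a thread that does not already hold cs_vRecvMsg,
        // so take the lock before touching the receive queue.
        void CloseSocketDisconnect()
        {
            fDisconnect = true;
            if (hSocket != -1) {
                // close(hSocket);          // platform-specific close (elided)
                hSocket = -1;
            }
            std::lock_guard<std::mutex> lock(cs_vRecvMsg);
            vRecvMsg.clear();               // free buffered data promptly
        }
    };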
1) "optimistic write": Push each message to kernel socket buffer immediately.
2) If there is write data at select() time, that implies send() blocked
during the optimistic write. Drain the write queue before receiving
any more messages.
This avoids needlessly queueing received data if the remote peer
is not itself receiving data.
Result: write buffer (and thus memory usage) is kept small, DoS
potential is slightly lower, and TCP flow control signalling is
properly utilized.
The kernel will queue data into the socket buffer, then signal the
remote peer to stop sending data, until we resume reading again.
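A minimal sketch of this send path with illustrative names (Peer, send_queue
and SocketSendData are assumptions for the sketch, not the exact networking
code):

    #include <deque>
    #include <string>
    #include <sys/types.h>
    #include <sys/socket.h>

    // Illustrative per-peer state.
    struct Peer {
        int socket_fd = -1;
        std::deque<std::string> send_queue;  // serialized messages not yet sent
    };

    // Drain as much of the queue as the kernel will accept without blocking.
    void SocketSendData(Peer& peer)
    {
        while (!peer.send_queue.empty()) {
            std::string& data = peer.send_queue.front();
            ssize_t sent = send(peer.socket_fd, data.data(), data.size(), MSG_DONTWAIT);
            if (sent < (ssize_t)data.size()) {
                // Kernel socket buffer is full: keep the remainder and wait for
                // select() to report writability before sending or reading more.
                if (sent > 0)
                    data.erase(0, sent);
                break;
            }
            peer.send_queue.pop_front();
        }
    }

    // "Optimistic write": hand each new message to the kernel immediately.
    void PushMessage(Peer& peer, std::string msg)
    {
        peer.send_queue.push_back(std::move(msg));
        if (peer.send_queue.size() == 1)     // nothing else pending, try right away
            SocketSendData(peer);
    }

In the select() loop, a non-empty send_queue means the optimistic write
blocked; the socket is then watched for writability and reads are deferred
until the queue drains, which lets TCP flow control push back on a slow peer.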
Replaces the CNode::vRecv buffer with a vector of CNetMessage objects. This
simplifies ProcessMessages() and eliminates several redundant data copies.
Overview:
* socket thread now parses incoming message datastream into
header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
Message parsing is made more strict:
* Socket is disconnected if a message is larger than MAX_SIZE
or if CMessageHeader deserialization fails (the latter is impossible?).
Previously, the code would simply eat garbage data all day long.
* Socket is disconnected if we fail to find pchMessageStart.
We do not search through garbage to find pchMessageStart; each
message must begin precisely after the last message ends.
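A minimal sketch of the stricter header check with simplified types;
pchMessageStart and MAX_SIZE are the constants referred to above, while the
struct layout and CheckHeader itself are illustrative:

    #include <cstdint>
    #include <cstring>

    static const unsigned char pchMessageStart[4] = {0xf9, 0xbe, 0xb4, 0xd9}; // example magic
    static const uint32_t MAX_SIZE = 0x02000000;                              // example 32 MiB cap

    // Simplified wire header: magic, command, payload length, checksum.
    struct MessageHeader {
        unsigned char magic[4];
        char command[12];
        uint32_t payload_size;
        uint32_t checksum;
    };

    // Returns false when the peer should be disconnected: the stream must start
    // exactly at a message boundary, and the declared size must be sane.
    bool CheckHeader(const MessageHeader& hdr)
    {
        // No scanning forward through garbage: if the magic bytes are not
        // right here, drop the connection.
        if (std::memcmp(hdr.magic, pchMessageStart, sizeof(pchMessageStart)) != 0)
            return false;
        // Oversized messages are rejected outright.
        if (hdr.payload_size > MAX_SIZE)
            return false;
        return true;
    }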
ProcessMessages() always processes a complete message, and is more efficient:
* buffer is always precisely sized, using CDataStream::resize(),
rather than progressively sized in 64k chunks. More efficient
for large messages like "block".
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)
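A minimal sketch of the precise-sizing idea, with a plain std::vector standing
in for CDataStream; the names here are illustrative:

    #include <algorithm>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Buffer for one in-flight message: sized once, then filled incrementally.
    struct NetMessageBuffer {
        std::vector<unsigned char> data;
        size_t received = 0;

        // Called once the header is parsed: allocate exactly payload_size bytes
        // up front instead of growing the buffer in 64k chunks.
        void begin(uint32_t payload_size) {
            data.resize(payload_size);
            received = 0;
        }

        // Copy whatever the socket delivered; returns how many bytes were consumed.
        size_t readData(const unsigned char* bytes, size_t len) {
            size_t want = std::min(len, data.size() - received);
            if (want > 0)
                std::memcpy(data.data() + received, bytes, want);
            received += want;
            return want;
        }

        bool complete() const { return received == data.size(); }
    };

Once complete() is true, the message handler can consume data in place, with
no whole-buffer copy into a separate vMsg.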
-dbcache was originally used to set the maximum buffer size in the
BDB environment, and was later changed to set the chainstate cache
and leveldb caches. No need to use it for BDB now that only the
wallet remains there.
This should reduce memory allocation (but not necessarily memory
usage) a bit.
The step for the 'up' and 'down' buttons is 0.001. With BTC and mBTC this is
fine, but 0.001 uBTC is lower than the minimal value (1 satoshi), so the
user has to press the 'up' button 10 times to get 0.01 uBTC.
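A minimal sketch of one way to keep the step meaningful in every unit,
assuming a QDoubleSpinBox-based amount field; the unit enum, decimal counts
and helper names are illustrative, not the actual BitcoinAmountField code:

    #include <QDoubleSpinBox>
    #include <algorithm>
    #include <cmath>

    // Hypothetical display units; the real codebase defines these elsewhere.
    enum Unit { BTC, mBTC, uBTC };

    // Decimal places shown per unit (BTC: 8, mBTC: 5, uBTC: 2).
    static int decimalsForUnit(Unit unit)
    {
        switch (unit) {
        case BTC:  return 8;
        case mBTC: return 5;
        case uBTC: return 2;
        }
        return 8;
    }

    // Keep the default 0.001 step, but never smaller than the smallest
    // displayable increment for the unit (one satoshi when showing uBTC
    // with 2 decimals): 0.001 for BTC/mBTC, 0.01 for uBTC.
    static void configureAmountSpinBox(QDoubleSpinBox* box, Unit unit)
    {
        const int decimals = decimalsForUnit(unit);
        box->setDecimals(decimals);
        const double step = std::max(0.001, std::pow(10.0, -decimals));
        box->setSingleStep(step);
    }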