Efficiency is the number of shares found, expressed as a percentage of
the getworks (work items) requested from the pool. It can go higher than 100% when
more shares than getworks were found. Some pools prefer miners to
have a high efficiency; CPU miners likely exhibit a low efficiency.
Utility is the number of shares found per minute since the miner
was started. It is another way to describe the effectiveness of
a miner.
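
For concreteness, a minimal sketch of how these two statistics could be
computed; the counter names (total_shares, total_getworks, total_secs) are
illustrative, not necessarily those used in the code:

    #include <stdio.h>

    /* Illustrative counters only; the real variable names may differ. */
    static unsigned long total_shares;   /* shares found since startup */
    static unsigned long total_getworks; /* getwork requests sent to the pool */
    static double total_secs;            /* seconds since the miner started */

    static void show_stats(void)
    {
        /* Efficiency: shares found as a percentage of getworks requested. */
        double efficiency = total_getworks ?
            total_shares * 100.0 / total_getworks : 0.0;
        /* Utility: shares found per minute of runtime. */
        double utility = total_secs > 0 ?
            total_shares / (total_secs / 60.0) : 0.0;

        printf("E: %.0f%%  U: %.2f/m\n", efficiency, utility);
    }

    int main(void)
    {
        total_shares = 90;      /* example: 90 shares from 75 getworks in 30 min */
        total_getworks = 75;
        total_secs = 1800;
        show_stats();           /* prints "E: 120%  U: 3.00/m" */
        return 0;
    }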
If there are no GPUs, set nDevs to 0, not -1 (status is set to an
unhelpful -1001 here on my laptop, so we can't rely on a particular
status value).
Also, if nDevs is -1, exit rather than screwing up later.
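
A hedged sketch of the intended handling; clDevicesNum() is assumed to be
the device probe, and the messages are illustrative:

    #include <stdio.h>
    #include <stdlib.h>

    /* Assumed OpenCL probe: returns -1 on a real enumeration error,
     * 0 when no GPUs are found. */
    extern int clDevicesNum(void);

    static int detect_gpus(void)
    {
        int nDevs = clDevicesNum();

        if (nDevs == -1) {
            /* Real error: exit now rather than misbehaving later. */
            fprintf(stderr, "Error enumerating OpenCL devices, exiting\n");
            exit(1);
        }
        /* nDevs == 0 simply means "no GPUs": carry on with CPU mining only. */
        return nDevs;
    }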
Currently the CPU thread count gets negated, which means the printed default is wrong.
Use an explicit flag to tell if the user has overridden it; if they
haven't, and they turn off the GPUs, reset it to num_processors.
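
A small sketch of the override-flag pattern described above; all names here
(opt_n_threads, nthreads_set, set_n_threads) are illustrative:

    #include <stdbool.h>
    #include <stdlib.h>

    static int  opt_n_threads;   /* number of CPU mining threads */
    static bool nthreads_set;    /* true once the user sets it explicitly */

    /* Called from the option handler for the CPU-thread-count option. */
    static void set_n_threads(const char *arg)
    {
        opt_n_threads = atoi(arg);
        nthreads_set = true;     /* remember the explicit override */
    }

    /* Called when GPU mining is turned off. */
    static void gpus_disabled(int num_processors)
    {
        /* Only touch the count if the user never chose one themselves. */
        if (!nthreads_set)
            opt_n_threads = num_processors;
    }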
This cleans up option handling by using ccan/opt rather than
hand-coded getopt_long. We still have to open-code some things, such
as JSON config file handling.
The main change is that the --config option causes the file to be parsed
during command-line parsing, so later options can override its results,
and more than one config file can be given.
Other improvements are that 'help' and 'ndevs' are not valid arguments
in the config file; we use a separate argument table for such
command-line-only flags.
Disable signal handling in curl and use many curl handles instead, making work fetching more asynchronous.
In theory a curl handle can then wait forever on a DNS lookup, but that is extremely unlikely.
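
The libcurl mechanism behind the DNS caveat is CURLOPT_NOSIGNAL, which turns
off curl's signal-based timeouts; a sketch of setting up one handle (the
wrapper function itself is illustrative):

    #include <curl/curl.h>

    /* Sketch: each work-fetching thread keeps its own easy handle so
     * requests can proceed independently of one another. */
    static CURL *make_work_handle(void)
    {
        CURL *curl = curl_easy_init();

        if (curl) {
            /* Turn off curl's signal-based timeouts; required when using
             * many handles from multiple threads.  The cost is that a
             * blocking DNS lookup (with the synchronous resolver) has no
             * timeout, hence the "can wait forever" caveat above. */
            curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);
        }
        return curl;
    }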
Load the first queued extra work in the main function to avoid having a once-off variable in get_work().
Load an extra set of work for each thread in advance once a longpoll is detected, since every thread will need to get new work.
Discard requests with a separate function to ensure the right number is always queued.
Make it possible to use the thread id for getting work again.
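
A speculative sketch of the request bookkeeping; the helper names and queue
details are assumptions, and only the idea that a single discard path keeps
the outstanding count right comes from the description above:

    #include <stdbool.h>
    #include <stdlib.h>

    struct work;
    extern struct work *pop_queued_work(void);   /* assumed queue accessor */
    extern bool request_work_from_pool(void);    /* assumed getwork sender */

    static int requests_queued;    /* how many work requests are outstanding */

    static bool queue_request(void)
    {
        if (!request_work_from_pool())
            return false;
        requests_queued++;
        return true;
    }

    /* All discards funnel through here, so the count above stays accurate
     * and a replacement request keeps the right number outstanding. */
    static bool discard_request(void)
    {
        struct work *work = pop_queued_work();

        free(work);
        requests_queued--;
        return queue_request();
    }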
Flag the getwork() function when a new block is detected so that it explicitly discards any cached work.
Store the block header of each new piece of work and compare it against the work we're about to submit; if the headers differ, the share is stale because a new block has appeared, so don't submit it.
This should significantly decrease the number of rejected shares.
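
A minimal sketch of the staleness check, assuming it is done on a prefix of
the work's block header; the field size and names are illustrative:

    #include <stdbool.h>
    #include <string.h>

    /* The first 36 bytes of the header cover the version field and the
     * previous-block hash, enough to tell one block from another. */
    static unsigned char current_block[36];

    /* Remember the header prefix of the most recently fetched work. */
    static void set_current_block(const unsigned char *header)
    {
        memcpy(current_block, header, sizeof(current_block));
    }

    /* A result is stale if it was built on an older block than the one the
     * pool is currently working on; such shares would be rejected anyway. */
    static bool stale_work(const unsigned char *header)
    {
        return memcmp(header, current_block, sizeof(current_block)) != 0;
    }

The submit path would then call stale_work() on each result it is about to
send and simply skip it when the check fails.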