Doc in `doc/configuration.md` (has to be started sometime, no?).
The configuration function has to be lamely named set_algo(), because
set_algorithm() is already declared in algorithm.h (to prevent a
namespace conflict).
algorithm has to be added as a global variable due to the way the
callback is done (by CCAN/opt, which in itself is nice).
This can be cleaned up significantly by (at least) introducing a
global configuration struct, but there is no reason to do it now
just for this - better to do a wholesale cleanup later.
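A minimal sketch of the result, assuming sgminer's CCAN/opt conventions
(everything except set_algorithm() and algorithm.h is illustrative):

    #include "ccan/opt/opt.h"
    #include "algorithm.h"      /* declares set_algorithm() */

    /* Must be global: the CCAN/opt callback cannot reach a local in
     * main(), so the parsed algorithm lands here. */
    algorithm_t opt_algorithm;

    /* Named set_algo() because set_algorithm() is already taken. */
    static char *set_algo(const char *arg, void *unused)
    {
        (void)unused;
        set_algorithm(&opt_algorithm, arg);
        return NULL;            /* NULL tells CCAN/opt the parse succeeded */
    }

    /* Registered in the option table roughly as:
     * OPT_WITH_ARG("--algorithm", set_algo, NULL, NULL,
     *              "Set mining algorithm"),
     */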
Moved enable_pool(), disable_pool() and other functions related to
pool state. These could probably be factored out altogether.
The pool state default is now "enabled" - it was previously "disabled",
but main() used to make an unconditional call to enable all pools.
That call was factored out by joe's earlier commits, so it is not
visible in this one.
This setting allows setting the GPU intensity value directly, without any modifiers - it does not
get any more raw than this! Look at the xintensity description for examples of regular
intensity values. You can also set this value through the ncurses interface by pressing:
G -> A -> select device id -> enter value.
Minor xintensity code cleanup as well.
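A hypothetical usage sketch (the option name "rawintensity" is an
assumption; the text above does not name the flag):

    sgminer --rawintensity 8192

or, equivalently, in the config file:

    "rawintensity" : "8192",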
Conflicts:
    driver-opencl.c
    miner.h
    sgminer.c
All of this is credited to ArGee of RGMiner; he did the initial groundwork for this setting.
This new setting allows for much finer-grained intensity control and also opens up dual GPU threads on devices that previously could not use them. Note: make sure to use lower thread-concurrency values when you increase GPU threads.
Intensity is currently used to spawn GPU threads as a simple 2^value setting.
I:13 = 8192 threads
I:15 = 32768 threads
I:17 = 131072 threads
I:18 = 262144 threads
I:19 = 524288 threads
I:20 = 1048576 threads
Notice how the higher settings increase thread count tremendously.
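As a quick sketch in C, the mapping described above is just a left
shift (function name illustrative):

    #include <stdint.h>

    /* Intensity I spawns 2^I GPU threads, e.g. I:13 -> 8192. */
    static uint64_t intensity_to_threads(unsigned int intensity)
    {
        return 1ULL << intensity;
    }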
Now enter the xintensity setting (yes, I am a genius with my naming convention!).
It is simply a shader multiplier: since it is based on the number of shaders a card has, the same value should scale across different card models.
6970 with 1536 shaders: xI:64 = 98304 threads
R9 280X with 2048 shaders: xI:64 = 131072 threads
R9 290 with 2560 shaders: xI:64 = 163840 threads
R9 290X with 2816 shaders: xI:64 = 180224 threads
6970 with 1536 shaders: xI:300 = 460800 threads
R9 280X with 2048 shaders: xI:300 = 614400 threads
R9 290 with 2560 shaders: xI:300 = 768000 threads
R9 290X with 2816 shaders: xI:300 = 844800 threads
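In the same sketch style, xintensity multiplies the card's shader
count instead of exponentiating (function name illustrative):

    /* xintensity xI spawns shaders * xI GPU threads,
     * e.g. 2048 shaders at xI:64 -> 131072. */
    static uint64_t xintensity_to_threads(unsigned int shaders,
                                          unsigned int xintensity)
    {
        return (uint64_t)shaders * xintensity;
    }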
It's now much easier to control thread intensity, and it potentially allows for a uniform way of setting the intensity on your system. I'm very interested in constructive feedback, as I do not have access to a lot of different card models.
This change has been tested on a 6970, R9 290 and R9 290X - all with equal or slightly better speeds than the regular intensity setting after a little tuning, but your mileage may vary. Don't fret: if this doesn't work for you, the regular intensity setting is still available.
Conflicts:
    driver-opencl.c
    sgminer.c
The names will be used throughout the display in the program; when not set, "Pool 1" will
simply be used instead. The names are not exposed through the API yet; it's on my TODO list.
Use "poolname" like this:
{
    "poolname" : "Example pool",
    "url" : "stratum+tcp://example.com:8080",
    "user" : "y",
    "pass" : "x"
},
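For context, a complete (hypothetical) configuration file would carry
that entry inside the "pools" array:

    {
        "pools" : [
            {
                "poolname" : "Example pool",
                "url" : "stratum+tcp://example.com:8080",
                "user" : "y",
                "pass" : "x"
            }
        ]
    }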
Conflicts:
    sgminer.c
    util.c
This may be useful in certain scenarios. However, server load from keepalive
probes increases 6-fold if the code is hard-changed from 30 to 5. So, provide
it as an option instead, and use the previous value (30) as the default.
Explanation from commit 015c064396:
Kevin's middlecoin fix lowered the CURL TCP keepalive constants:
CURLOPT_TCP_KEEPIDLE from 45 to 5 and CURLOPT_TCP_KEEPINTVL from 30 to
5. Before, a keepalive packet was triggered after 45 seconds of
connection idle time and then again every 30 seconds. Now a keepalive
packet is triggered after 5 seconds of connection idle time and then
again every 5 seconds.
This makes the client more resilient against coin-switching pools, or
just pools with connection issues in general. It will, however, add a
tiny bit of pressure on the pool server; but a TCP keepalive probe is
only about 60-80 bytes, so I don't think it is an issue.
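A minimal sketch of how the option could be applied to the CURL handle
(opt_tcp_keepalive is an assumed name for the new option, defaulting to
the previous value of 30; the CURLOPT_* symbols are real libcurl options):

    #include <curl/curl.h>

    static int opt_tcp_keepalive = 30;  /* seconds; the old hard-coded value */

    static void setup_keepalive(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
        /* Idle time before the first keepalive probe. */
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, (long)opt_tcp_keepalive);
        /* Interval between subsequent probes. */
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, (long)opt_tcp_keepalive);
    }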
This fixes a crash in the AMD driver when quitting, as we were trying
to apply what is basically an uninitialized value.
It also adds code to cope with a failure to retrieve the value, just in
case we hit another problem like that in the future.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>