It cannot be used as-is. It should probably be implemented as a
standalone "fake pool" application anyway, to properly gauge the effect
of queue/scantime/expiry and network latency.
If someone really needs this, they can revert this commit.
Closes #116.
This is trivial and shouldn't be so hard: it required modifying
functions in both sgminer.c (for NCurses stuff) and api.c. There is
much code repetition, since the NCurses interface is hard-coded in.
Removing it would simplify things greatly.
It looks like there's an exploit that abuses said command, but it is still not clear exactly how.
There's also an additional message when the reconnect happens: "WARNING: POTENTIAL CLIENT.EXPLOIT!", but it requires you to be actively monitoring your log to catch it, in which case you would already see the "Reconnect requested from Pool 0 to 127.0.0.1" message anyway.
Note that disabling 'client.reconnect' might affect some pools that rely on the feature, like pools that you lease your rig to.
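For reference, a client.reconnect request is just a stratum JSON-RPC notification along these lines (host and port invented here for illustration), which is why a malicious or compromised pool can use it to redirect your rig:

{ "id": null, "method": "client.reconnect", "params": ["evil.example.com", 3333] }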
Oh and this is dry-coded. :)
WIP!
Use a string instead of a state-machine-like kernel selection mechanism
where kernel names have to be predefined. This should allow simply dropping
new kernels into the `kernel` directory without bloating the code in three
other places.
This is in dire need of a cleanup, function parameter checks, edge case
checks - all the usual testing.
In particular, checking these definitions/keywords:
* enum cl_kernels
* kname
* [c]gpu[s]->kernel (and similar)
* memory cleanup after strdup()?..
* chosen_kernel
* queue_scrypt_kernel
* strbuf
* initCl
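To illustrate the idea, a minimal sketch with hypothetical names (not the actual initCl()/queue_scrypt_kernel() code): build the .cl path from the configured name instead of switching on enum cl_kernels.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: map a kernel name string to its file path. */
static char *kernel_path(const char *kname)
{
    size_t len = strlen("kernel/") + strlen(kname) + strlen(".cl") + 1;
    char *path = malloc(len);

    if (path)
        snprintf(path, len, "kernel/%s.cl", kname);
    return path;    /* caller must free - cf. the strdup() question above */
}

int main(void)
{
    char *path = kernel_path("ckolivas");

    if (path)
        printf("would load %s\n", path);
    free(path);
    return 0;
}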
Also squashed:
config: add log messages to set_algo() and set_nfactor().
algorithm: use set_algorithm_nfactor() when setting default nfactor in set_algorithm().
Otherwise algorithm->n defaults to 0.
P.S. Did I already mention how this could have been C++?..
Doc in `doc/configuration.md` (has to be started sometime, no?).
The configuration function has to be lamely named set_algo(), because
set_algorithm() is already declared in algorithm.h (to prevent a namespace
conflict).
algorithm has to be added as a global variable due to the way the
callback is done (by CCAN/opt, which in itself is nice).
This can be cleaned up significantly by (at least) introducing a
global configuration struct, but there is no reason to do it now
just for this - better to do it wholesale mañana.
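To show the shape of the problem, a minimal sketch (hypothetical names and log wording, not the real CCAN/opt signatures): the option callback only receives the argument string, so it has to write into something with static storage.

#include <stdio.h>
#include <string.h>

static char opt_algorithm[32] = "scrypt";   /* the global in question */

/* Option callback: returns NULL on success, an error message on failure. */
static char *set_algo(const char *arg)
{
    if (!arg || !*arg)
        return "Empty algorithm name";
    strncpy(opt_algorithm, arg, sizeof(opt_algorithm) - 1);
    opt_algorithm[sizeof(opt_algorithm) - 1] = '\0';
    printf("Set algorithm to \"%s\"\n", opt_algorithm);    /* log message */
    return NULL;
}

int main(void)
{
    set_algo("nscrypt");
    return 0;
}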
Moved enable_pool(), disable_pool() and other functions related to
pool state. These could probably be factored out altogether.
The pool state default is now "enabled" - it was previously "disabled",
but main() used to contain an unconditional call that enabled all pools.
That call was factored out by joe's earlier commits, so it is not
visible in this one.
This setting allows setting the GPU intensity value directly, without any modifiers -
it does not get any more raw than this! See the xintensity description below for examples
of the thread counts regular intensity values produce. You can also set this value through
the ncurses interface by pressing:
G -> A -> select device id -> enter value.
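In a config file the new setting would sit alongside the other per-GPU options, something like the line below (the exact option name and the comma-separated per-device value format are assumptions on my part):

"rawintensity" : "8192,8192"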
Minor xintensity code cleanup as well.
Conflicts:
driver-opencl.c
miner.h
sgminer.c
All of this is credited to ArGee of RGMiner, who did the initial groundwork for this setting.
This new setting allows for much finer-grained intensity control and also makes dual GPU threads possible on devices that previously could not use them. Note: make sure to use lower thread-concurrency values when you increase GPU threads.
Intensity is currently used to spawn GPU threads as a simple 2^value setting.
I:13 = 8192 threads
I:15 = 32768 threads
I:17 = 131072 threads
I:18 = 262144 threads
I:19 = 524288 threads
I:20 = 1048576 threads
Notice how the higher settings increase thread count tremendously.
Now enter the xintensity setting (Yes, I am a genius with my naming convention!).
It is simply a shader multiplier: the thread count is based on the number of shaders on your card, so the same value should scale across different card models.
6970 with 1536 shaders: xI:64 = 98304 threads
R9 280X with 2048 shaders: xI:64 = 131072 threads
R9 290 with 2560 shaders: xI:64 = 163840 threads
R9 290X with 2816 shaders: xI:64 = 180224 threads
6970 with 1536 shaders: xI:300 = 460800 threads
R9 280X with 2048 shaders: xI:300 = 614400 threads
R9 290 with 2560 shaders: xI:300 = 768000 threads
R9 290X with 2816 shaders: xI:300 = 844800 threads
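As a sketch of the arithmetic behind the numbers above (hypothetical helper names, not sgminer's actual code): regular intensity spawns 2^value threads, xintensity spawns shaders * value threads.

#include <stdio.h>

static unsigned int intensity_threads(unsigned int i)
{
    return 1u << i;                  /* e.g. I:13 -> 8192 */
}

static unsigned int xintensity_threads(unsigned int shaders, unsigned int xi)
{
    return shaders * xi;             /* e.g. 2048 shaders, xI:64 -> 131072 */
}

int main(void)
{
    printf("I:20            -> %u threads\n", intensity_threads(20));
    printf("R9 280X, xI:300 -> %u threads\n", xintensity_threads(2048, 300));
    return 0;
}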
It's now much easier to control thread intensity, and it potentially allows for a uniform way of setting the intensity across your systems. I'm very interested in constructive feedback, as I do not have access to a lot of different card models.
This change has been tested on a 6970, R9 290 and R9 290X - all with equal or slightly better speeds than the regular intensity setting after a little tuning, but your mileage may vary. Don't fret if this doesn't work for you; the regular intensity setting is still available.
Conflicts:
driver-opencl.c
sgminer.c
The names will be used throughout the program's display; when not set, "Pool 1" will
simply be used instead. The names are not exposed through the API yet - that's on my TODO list.
Use "poolname" like this:
{
  "poolname" : "Example pool",
  "url" : "stratum+tcp://example.com:8080",
  "user" : "y",
  "pass" : "x"
},
Conflicts:
sgminer.c
util.c