Since Linux driver 346.72, nvidia-smi allows querying GPU/mem clocks.
Tested OK on the Asus Strix 970, but fails on the Gigabyte 750 Ti.
The system may first require persistence mode and application clock unlock:
nvidia-smi -pm 1
nvidia-smi -acp 0
Supported values are displayed by:
nvidia-smi -q -d SUPPORTED_CLOCKS
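For reference, the same clocks can also be read programmatically through NVML, the library behind nvidia-smi. A minimal sketch (illustration only, not the miner's actual code):

    /* Minimal NVML clock query sketch (illustration only).
     * Build assumption: gcc nvml_clocks.c -lnvidia-ml */
    #include <stdio.h>
    #include <nvml.h>

    int main(void)
    {
        unsigned int gpu_clock = 0, mem_clock = 0;
        nvmlDevice_t dev;

        if (nvmlInit() != NVML_SUCCESS)
            return 1;
        if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
            /* current graphics and memory clocks, in MHz */
            nvmlDeviceGetClockInfo(dev, NVML_CLOCK_GRAPHICS, &gpu_clock);
            nvmlDeviceGetClockInfo(dev, NVML_CLOCK_MEM, &mem_clock);
            printf("GPU %u MHz, MEM %u MHz\n", gpu_clock, mem_clock);
        }
        nvmlShutdown();
        return 0;
    }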
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Only allowed if the --api-remote parameter or config key is set.
Also fix a possible problem with URLs containing user:password@.
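The user:password@ part of the fix boils down to skipping optional credentials when extracting the host; a hedged sketch with illustrative names (not the actual parser):

    /* Hedged sketch: return the host[:port] part of a URL, skipping an
     * optional user:password@ prefix (illustrative, not the real code). */
    #include <stdio.h>
    #include <string.h>

    static const char *url_host(const char *url)
    {
        const char *p = strstr(url, "://");
        p = p ? p + 3 : url;                 /* skip the scheme if present */
        const char *at = strchr(p, '@');     /* first '@' before the path ends the credentials */
        const char *slash = strchr(p, '/');
        if (at && (!slash || at < slash))
            p = at + 1;                      /* skip user:password@ */
        return p;
    }

    int main(void)
    {
        printf("%s\n", url_host("stratum+tcp://user:pass@pool.example.com:3333"));
        return 0;
    }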
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
heavy: reduce the default intensity by 256 threads, down to -i 20 for all
cuda: move the static thread-init booleans outside the functions (initialized once)
api: fix the nvml header so the project builds without it
The behavior differed between Linux and Visual Studio,
which made it hard to link functions correctly.
This removes some ifdef / extern "C" requirements.
Note about x86 releases: the x86 nvml.dll is not installed on Windows x64!
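For illustration, the usual way to drop the per-declaration ifdef / extern "C" noise is to wrap the whole header once; a sketch with hypothetical prototypes:

    /* Sketch of a single extern "C" wrapper around the header, so the same
     * prototypes link from gcc, g++ and Visual Studio units alike.
     * The prototypes below are hypothetical placeholders. */
    #ifdef __cplusplus
    extern "C" {
    #endif

    int  nvml_gpu_clock(int dev_id);   /* hypothetical */
    void nvml_api_free(void);          /* hypothetical */

    #ifdef __cplusplus
    }
    #endif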
Based on the implementation by mwhite73 <marvin.white@gmail.com>.
Linked to the API system.
Also fix the Makefile to support standard C++ files.
This avoids using nvcc for files without device code.
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Unlike other hash algos, blake256 computes the hash
in blocks of 64 bytes.
We can do the first part on the CPU; only the last 4 int32 values
(including the tested nonce) are computed on the GPU.
The previous method also used this kind of cache, with a CRC.
Blake Hash Speed: +5%
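A rough sketch of that split, with hypothetical helper names (only the structure matters): the first 64-byte block of the 80-byte header is compressed once on the host, and the kernel only finishes the last 16 bytes, where the nonce lives.

    /* Rough sketch of the midstate split (hypothetical names, structure only). */
    #include <stdint.h>

    /* Hypothetical host-side blake256 primitives, not the miner's real API. */
    extern void blake256_init(uint32_t state[8]);
    extern void blake256_compress(uint32_t state[8], const uint32_t block[16]);

    __constant__ uint32_t c_midstate[8];   /* state after the first 64-byte block */
    __constant__ uint32_t c_tail[3];       /* fixed words 16..18; word 19 is the nonce */

    __global__ void blake256_gpu_finish(uint32_t first_nonce, uint32_t *d_result)
    {
        uint32_t nonce = first_nonce + (blockIdx.x * blockDim.x + threadIdx.x);
        /* ... rebuild the final padded block from c_tail + nonce, finish the
         * compression from c_midstate, and store the nonce in d_result if the
         * resulting hash meets the target. */
    }

    /* Host side: compress the first 64 bytes only once per work unit. */
    void blake256_set_midstate(const uint32_t header80[20])
    {
        uint32_t midstate[8];
        blake256_init(midstate);
        blake256_compress(midstate, header80);  /* words 0..15 */
        cudaMemcpyToSymbol(c_midstate, midstate, sizeof(midstate));
        cudaMemcpyToSymbol(c_tail, header80 + 16, 3 * sizeof(uint32_t));
    }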
Based on klaus' commits; slightly increases the speed of most algos.
PS: the main increase is due to the register count tuning in the Makefile,
and for skein512 on Linux it's the ROTL64,
but almost no change on X11: 2648 MH/s vs 2630 before.
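For context, ROTL64 is the 64-bit left rotation that dominates skein512; a minimal generic device helper looks like the sketch below, while tuned builds typically swap in an inline-PTX (funnel shift) variant on newer architectures.

    /* Minimal generic 64-bit rotate-left (offset expected in 1..63).
     * Optimized builds usually replace this with an inline-PTX funnel-shift
     * version on recent GPU architectures. */
    #include <stdint.h>

    __device__ __forceinline__ uint64_t ROTL64(const uint64_t value, const int offset)
    {
        return (value << offset) | (value >> (64 - offset));
    }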
Indent, and put prototypes of commonly used functions in cuda_helper.h.
Also add them to the --cputest function.
Also change the color option to --nocolor; -C is no longer needed.
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
(Who is tired of removing these German copy/pasted comments)
Blake256: squashed commit...
Squashed commit of the following:
commit c370208bc92ef16557f66e5391faf2b1ad47726f
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Wed Sep 3 13:53:01 2014 +0200
hashlog: prepare store of scanned range
commit e2cf49a5e956f03deafd266d1a0dd087a2041c99
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Wed Sep 3 12:54:13 2014 +0200
stratum: store server time offset in context
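As an illustration (hypothetical struct and field names), storing that offset amounts to remembering server time minus local time when the pool reports its clock:

    /* Illustration only: hypothetical context struct and field names. */
    #include <time.h>

    struct stratum_ctx_sketch {
        time_t srvtime_diff;   /* server time minus local time, in seconds */
    };

    static void stratum_store_time_offset(struct stratum_ctx_sketch *sctx, time_t server_time)
    {
        /* remembered once, so later timestamps can be converted
         * between pool time and local time */
        sctx->srvtime_diff = server_time - time(NULL);
    }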
commit 1a4391d7ff
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 12:40:52 2014 +0200
hashlog: prevent double computing on jobs already done
commit 049e577301
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Wed Sep 3 09:49:14 2014 +0200
tmp blake log
commit 43d3e93e1a
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Wed Sep 3 09:29:51 2014 +0200
blake: set a max throughput
commit 7e595a36ea
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 21:13:37 2014 +0200
blake: cleanup, remove the d_hash buffer (not in a chain)
host: only bencode if a GPU hash was found
commit de80c7e9d1
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 12:40:44 2014 +0200
blake: remove an unused parameter and fix an index in d_hash.
That reduces the speed to 92 MH/s, but the next commit
gives us 30 more.
So, TODO: merge the whole checkhash proc into gpu_hash
and remove this d_hash buffer...
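That TODO amounts to something like the sketch below (hypothetical names): the target test moves inside the hashing kernel, so only a winning nonce is written back and the per-thread d_hash storage disappears.

    /* Sketch of merging checkhash into the hashing kernel (hypothetical names):
     * compare against the target in-kernel and only write back a winning nonce,
     * so no per-thread hash has to be stored in a d_hash buffer. */
    #include <stdint.h>

    __constant__ uint32_t c_target7;   /* most significant target word (simplified test) */

    /* hypothetical device-side hash returning only the last hash word */
    __device__ uint32_t blake256_hash_high_word(uint32_t nonce);

    __global__ void blake256_gpu_hash_check(uint32_t first_nonce, uint32_t *d_found_nonce)
    {
        uint32_t nonce = first_nonce + (blockIdx.x * blockDim.x + threadIdx.x);
        uint32_t h7 = blake256_hash_high_word(nonce);

        /* simplified target test: keep the lowest qualifying nonce */
        if (h7 <= c_target7)
            atomicMin(d_found_nonce, nonce);
    }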
commit 2d42ae6de5
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 05:09:31 2014 +0200
stratum: handle a small cache of submitted jobs
Prevents sending duplicated shares on some pools like hashharder.
This cache keeps the submitted job/nonces of the last 15 minutes,
so the exit on repeated duplicate shares is removed;
the submitted cache now handles this problem.
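A minimal sketch of such a cache (hypothetical types and sizes; the real code keys on job id + nonce): entries older than 15 minutes are recycled, and a share is only sent if its job/nonce pair is not already present.

    /* Minimal sketch of a submitted-share cache (hypothetical, not the real code):
     * remembers (job id, nonce) pairs for 15 minutes to avoid resending duplicates. */
    #include <stdint.h>
    #include <string.h>
    #include <time.h>

    #define SUBMIT_CACHE_SIZE 64
    #define SUBMIT_CACHE_TTL  (15 * 60)   /* seconds */

    struct submitted_share {
        char     job_id[64];
        uint32_t nonce;
        time_t   stamp;
    };

    static struct submitted_share g_cache[SUBMIT_CACHE_SIZE];

    /* Returns 1 if this (job, nonce) was already submitted recently,
     * else records it in a free/expired slot and returns 0. */
    static int share_already_submitted(const char *job_id, uint32_t nonce)
    {
        time_t now = time(NULL);
        int slot = -1;

        for (int i = 0; i < SUBMIT_CACHE_SIZE; i++) {
            if (g_cache[i].stamp && now - g_cache[i].stamp <= SUBMIT_CACHE_TTL) {
                if (g_cache[i].nonce == nonce && !strcmp(g_cache[i].job_id, job_id))
                    return 1;   /* duplicate: do not resend */
            } else if (slot < 0) {
                slot = i;       /* empty or expired slot, reusable */
            }
        }
        if (slot < 0)
            slot = 0;           /* cache full: overwrite the first entry */
        strncpy(g_cache[slot].job_id, job_id, sizeof(g_cache[slot].job_id) - 1);
        g_cache[slot].job_id[sizeof(g_cache[slot].job_id) - 1] = '\0';
        g_cache[slot].nonce = nonce;
        g_cache[slot].stamp = now;
        return 0;
    }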
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
commit 1b8c3c12fa
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 03:38:57 2014 +0200
debug: a new boolean to enable/disable logging of JSON-RPC data
commit 1f99aae0ff
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 18:49:23 2014 +0200
exit on repeated duplicate shares (to be enhanced)
create a new function proper_exit() to do common cleanup on exit...
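For illustration, such a helper typically centralizes the cleanup done before exiting; the steps listed below are assumptions, not the actual body:

    /* Illustration of a centralized exit helper; the cleanup steps mentioned
     * in the comment are assumptions, not the actual implementation. */
    #include <stdlib.h>

    void proper_exit(int code)
    {
        /* e.g. stop worker threads, release curl/socket handles,
         * restore terminal colors, free GPU resources... */
        exit(code);
    }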
commit 530732458a
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 12:22:51 2014 +0200
blake: use a constant for threads, reduce the allocated d_hash size,
and clean up a bit more...
commit 0aeac878ef
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 06:12:55 2014 +0200
blake: tune up and clean up, ~100 MH/s on a normal 750 Ti.
Tested on Linux and Windows (x86 binary)...
but there is a high number of duplicated shares... weird
commit 4a52d0553b
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 10:22:32 2014 +0200
debug: show JSON methods, hide hash/target if OK
commit 1fb9becc1f
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 08:44:19 2014 +0200
cpu-miner: sort algos by name, show reject reason
commit bfe96c49b0
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Aug 25 11:21:06 2014 +0200
release 1.4, update README...
commit c17d11e377
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Sun Aug 31 08:57:48 2014 +0200
add "blake" 256, 14 rounds (for NEOS blake, not BlakeCoin)
also remove the "missing" file; it's old and not compatible with Ubuntu 14.04
To test on Windows.
blake: clean and optimize
Release v1.4 with blake (NEOS)