Small echo rewrite: +10 kH/s on the 650 (compute 3.0)
tpruvot: add Linux Makefile - force 80 registers (else -30 kH/s)
Note: the hashrate seems more stable with this change
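For reference, a minimal sketch of the per-kernel alternative to the Makefile route (which passes --maxrregcount=80 to nvcc globally); the kernel here is illustrative:

    // Hedged sketch: __launch_bounds__ lets the compiler bound register use
    // per kernel; the Makefile route is the global --maxrregcount=80 nvcc flag.
    __global__ void __launch_bounds__(256, 2) /* 256 threads/block, >=2 blocks/SM */
    example_kernel(unsigned int *out)
    {
        unsigned int tid = blockIdx.x * blockDim.x + threadIdx.x;
        out[tid] = tid; // placeholder body; caller sizes out[] to the grid
    }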
Maybe my fault, but benchmark mode was
always recomputing from nonce 0.
Also fix blake when -d 1 is used (one thread, but on the second GPU)
stats: do not use the thread id as key, prefer the GPU id...
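A minimal sketch of the keying idea, with illustrative names:

    #include <map>

    // Hedged sketch: key per-GPU stats by the CUDA device id rather than the
    // worker thread id, so entries stay stable however threads map to devices.
    static std::map<int, double> gpu_khs; // device id -> last hashrate (kH/s)

    void stat_record(int gpu_id, double khs)
    {
        gpu_khs[gpu_id] = khs;
    }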
nvml.dll doesn't exist for 32-bit binaries! Use NVAPI to get this info;
it seems to have more/different features than NVML... like the P-state, etc.
This is NVAPI R343: https://developer.nvidia.com/nvapi
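A minimal sketch of the NVAPI entry points involved (real R343 calls; error handling trimmed):

    #include "nvapi.h" // NVAPI R343 SDK; link nvapi.lib (x86) or nvapi64.lib

    // Hedged sketch: enumerate the physical GPUs through NVAPI, the route
    // used for 32-bit Windows builds where nvml.dll is unavailable.
    int nvapi_gpu_count(void)
    {
        NvPhysicalGpuHandle gpus[NVAPI_MAX_PHYSICAL_GPUS];
        NvU32 count = 0;
        if (NvAPI_Initialize() != NVAPI_OK)
            return -1;
        if (NvAPI_EnumPhysicalGPUs(gpus, &count) != NVAPI_OK)
            return -1;
        return (int)count;
    }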
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Linux and Visual Studio behaved differently,
which made it hard to link functions correctly;
this removes some #ifdef / extern "C" requirements
Note about x86 releases: the x86 nvml.dll is not installed on Windows x64!
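The usual guard pattern behind that fix, for reference (prototype illustrative):

    // Hedged sketch: expose C linkage to C++ callers on both toolchains,
    // so MSVC and g++ resolve the same symbol names.
    #ifdef __cplusplus
    extern "C" {
    #endif

    int gpu_fanpercent(int gpu_id); /* illustrative prototype */

    #ifdef __cplusplus
    }
    #endif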
Based on the implementation by mwhite73 <marvin.white@gmail.com>
Linked to the API system
Also fix the Makefile to support standard C++ files
This avoids invoking nvcc on files without device code
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Displayed data is the average of the last 50 scans within the last 5 minutes
Also move common CUDA functions into a new file (cuda.cu)
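A minimal sketch of the averaging rule above, with an illustrative record layout:

    #include <ctime>
    #include <deque>

    // Hedged sketch: average the last 50 scan records, ignoring anything
    // older than 5 minutes.
    struct scan_stat { time_t tm; double khs; };
    static std::deque<scan_stat> records; // newest records pushed at the back

    double stats_avg_khs(void)
    {
        const time_t now = time(NULL);
        double sum = 0.0;
        int n = 0;
        for (auto it = records.rbegin(); it != records.rend() && n < 50; ++it) {
            if (now - it->tm > 5 * 60)
                break; // older than 5 minutes
            sum += it->khs;
            n++;
        }
        return n ? sum / n : 0.0;
    }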
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
curl built from the tpruvot/curl-for-windows project with the HTTP_ONLY define
This project doesn't require SSH, LDAP and all the internal protocols ;)
Shaves 200 KB off the final binaries
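For context, roughly what the HTTP_ONLY switch expands to in curl's Windows config (the exact protocol list varies by curl version):

    /* Hedged sketch, after curl's config-win32.h: HTTP_ONLY compiles out the
       unused protocol backends, which is where the size saving comes from. */
    #ifdef HTTP_ONLY
    #define CURL_DISABLE_FTP    1
    #define CURL_DISABLE_LDAP   1
    #define CURL_DISABLE_TELNET 1
    #define CURL_DISABLE_DICT   1
    #define CURL_DISABLE_FILE   1
    #define CURL_DISABLE_TFTP   1
    #endif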
Based on klaus' commits; slightly increases the speed of most algos
PS: the main increase is due to the register count tuning in the Makefile
and, for skein512 on Linux, to the ROTL64 rotation,
but there is almost no change on X11: 2648 kH/s vs 2630 before
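The rotation in question, as a minimal sketch:

    // Hedged sketch: the portable 64-bit rotate used heavily by skein512; on
    // newer GPU archs the compiler can lower it to funnel-shift instructions.
    #define ROTL64(x, n) (((x) << (n)) | ((x) >> (64 - (n))))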
Blake256: squashed commit...
Squashed commit of the following:
commit c370208bc92ef16557f66e5391faf2b1ad47726f
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Wed Sep 3 13:53:01 2014 +0200
hashlog: prepare storage of the scanned range
commit e2cf49a5e956f03deafd266d1a0dd087a2041c99
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Wed Sep 3 12:54:13 2014 +0200
stratum: store server time offset in context
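A minimal sketch of the idea, with illustrative names (ntime comes from the stratum job):

    #include <time.h>

    // Hedged sketch: remember how far the pool clock is from ours.
    struct stratum_ctx_sketch {
        int srvtime_diff; // server time minus local time, in seconds
    };

    void stratum_note_server_time(struct stratum_ctx_sketch *sctx, time_t ntime)
    {
        sctx->srvtime_diff = (int)(ntime - time(NULL));
    }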
commit 1a4391d7ff
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 12:40:52 2014 +0200
hashlog: prevent recomputing jobs that are already done
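A minimal sketch of the check, with an illustrative layout:

    #include <cstdint>
    #include <map>
    #include <string>

    // Hedged sketch: remember the nonce range already scanned per job, so a
    // job sent twice is not recomputed.
    struct job_range { uint32_t scanned_from, scanned_to; };
    static std::map<std::string, job_range> hashlog; // job id -> scanned range

    bool job_already_done(const std::string &job_id, uint32_t from, uint32_t to)
    {
        auto it = hashlog.find(job_id);
        return it != hashlog.end()
            && it->second.scanned_from <= from
            && it->second.scanned_to >= to;
    }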
commit 049e577301
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Wed Sep 3 09:49:14 2014 +0200
tmp blake log
commit 43d3e93e1a
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Wed Sep 3 09:29:51 2014 +0200
blake: set a max throughput
commit 7e595a36ea
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 21:13:37 2014 +0200
blake: cleanup; remove the d_hash buffer, unneeded when not in a chain
host: only bencode if a GPU hash was found
commit de80c7e9d1
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 12:40:44 2014 +0200
blake: remove an unused parameter and fix the index in d_hash
That reduces the speed to 92 MH/s, but the next commit
gives us 30 more.
So, TODO: merge the whole checkhash proc into gpu_hash
and remove this d_hash buffer...
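A minimal sketch of that TODO; dummy_hash64() stands in for the real blake256 compression, and the host is assumed to prime resNonce[0] with UINT32_MAX and recheck any reported nonce on the CPU:

    #include <stdint.h>

    // Hedged sketch: fold the checkhash pass into the hash kernel so the
    // intermediate d_hash buffer can go away entirely.
    __device__ static uint64_t dummy_hash64(uint32_t nonce)
    {
        return (uint64_t)nonce * 0x9E3779B97F4A7C15ULL; // placeholder mix, not blake
    }

    __global__ void hash_and_check(uint32_t threads, uint32_t startNonce,
                                   uint64_t highTarget, uint32_t *resNonce)
    {
        uint32_t thread = blockIdx.x * blockDim.x + threadIdx.x;
        if (thread < threads) {
            uint32_t nonce = startNonce + thread;
            if (dummy_hash64(nonce) <= highTarget)
                atomicMin(resNonce, nonce); // keep the lowest matching nonce
        }
    }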
commit 2d42ae6de5
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 05:09:31 2014 +0200
stratum: handle a small cache of submitted jobs
Prevents sending duplicate shares on some pools like hashharder..
This cache keeps the submitted job/nonces of the last 15 minutes,
so the exit on repeated duplicate shares is removed;
the submit cache now handles this problem.
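A minimal sketch of that cache, with illustrative names:

    #include <cstdint>
    #include <ctime>
    #include <map>
    #include <string>

    // Hedged sketch: remember each submitted (job, nonce) pair for 15 minutes
    // and refuse to resend duplicates.
    static std::map<std::string, time_t> submitted; // "jobid:nonce" -> submit time

    bool submit_once(const std::string &job_id, uint32_t nonce)
    {
        const time_t now = time(NULL);
        const std::string key = job_id + ":" + std::to_string(nonce);
        for (auto it = submitted.begin(); it != submitted.end(); ) {
            if (now - it->second > 15 * 60)
                it = submitted.erase(it); // expired entry
            else
                ++it;
        }
        if (submitted.count(key))
            return false; // duplicate share, do not resend
        submitted[key] = now;
        return true;
    }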
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
commit 1b8c3c12fa
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Tue Sep 2 03:38:57 2014 +0200
debug: a new boolean to toggle logging of JSON-RPC data
commit 1f99aae0ff
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 18:49:23 2014 +0200
exit on repeated duplicate shares (to be enhanced)
create a new function proper_exit() to do the common cleanup on exit...
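A minimal sketch of the helper (the actual cleanup list is illustrative):

    #include <stdlib.h>

    // Hedged sketch: one place for the common shutdown steps.
    void proper_exit(int code)
    {
        // e.g. flush the hash log, free curl handles, reset the GPUs...
        exit(code);
    }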
commit 530732458a
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 12:22:51 2014 +0200
blake: use a constant for the thread count, reduce the allocated d_hash size
and clean up a bit more...
commit 0aeac878ef
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 06:12:55 2014 +0200
blake: tune up and clean up, ~100 MH/s on a stock 750 Ti
tested on Linux and Windows (x86 binary)...
but there is a high number of duplicate shares... weird
commit 4a52d0553b
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 10:22:32 2014 +0200
debug: show JSON methods, hide hash/target when OK
commit 1fb9becc1f
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Sep 1 08:44:19 2014 +0200
cpu-miner: sort algos by name, show reject reason
commit bfe96c49b0
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Mon Aug 25 11:21:06 2014 +0200
release 1.4, update README...
commit c17d11e377
Author: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Date: Sun Aug 31 08:57:48 2014 +0200
add "blake" 256, 14 rounds (for NEOS blake, not BlakeCoin)
also remove "missing" file, its old and not compatible with ubuntu 14.04
to test on windows
blake: clean and optimize
Release v1.4 with blake (NEOS)
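For reference, the variant difference is just the round count; a minimal sketch where round_fn() stands in for the real blake-256 G-function schedule:

    #include <stdint.h>

    #define BLAKE256_ROUNDS 14 /* NEOS blake; BlakeCoin uses 8 */

    // Hedged sketch: the compression loop, parameterized by round count.
    static void round_fn(uint32_t v[16]) { v[0] += v[4]; /* placeholder mix */ }

    void blake256_compress_sketch(uint32_t v[16])
    {
        for (int r = 0; r < BLAKE256_ROUNDS; r++)
            round_fn(v);
    }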
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Cleaned up and adapted to my changes (cputest added)
Remove Makefile.in, which should be in .gitignore
(Please regenerate it with ./config.sh to compile on Linux)
Project updated for VS2013 and CUDA SDK 6.5
also add a --cputest option to dump CPU hash results
TODO: x15 is not fully functional, but the first loop seems OK
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
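A minimal sketch of the --cputest idea; cpu_hash_sketch() stands in for each algo's real CPU reference hash:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    // Hedged sketch: hash a fixed 80-byte header on the CPU and dump the
    // digest in hex, for eyeballing against the GPU output.
    static void cpu_hash_sketch(uint8_t out[32], const uint8_t *data, size_t len)
    {
        memset(out, 0, 32);
        for (size_t i = 0; i < len; i++)
            out[i % 32] ^= data[i]; // placeholder, not a real hash
    }

    void do_cputest(void)
    {
        uint8_t data[80] = { 0 }, hash[32];
        cpu_hash_sketch(hash, data, sizeof(data));
        for (int i = 0; i < 32; i++)
            printf("%02x", hash[i]);
        printf("\n");
    }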