Con Kolivas
|
fd55fab96a
|
Make bitforce nonce range support a command line option --bfl-range since enabling it decreases hashrate by 1%.
|
13 years ago |
Con Kolivas
|
75eca07823
|
Restart_wait is only called with a ms value, so incorporate that into the function.
|
13 years ago |
Con Kolivas
|
8bc7d1c9a0
|
Only try to adjust dev width when curses is built in.
|
13 years ago |
Con Kolivas
|
67e92de18c
|
Adjust device width column to be consistent.
|
13 years ago |
Con Kolivas
|
ce93c2fc62
|
Use cgpu-> not gpus[] in watchdog thread.
|
13 years ago |
Con Kolivas
|
610cf0f0a5
|
Minor style changes.
|
13 years ago |
Sergei Krivonos
|
aaa9f62b3e
|
Made JSON error message verbose.
|
13 years ago |
ckolivas
|
ac45260e18
|
Random style cleanups.
|
13 years ago |
ckolivas
|
06ec47b3bd
|
Must always unlock mutex after cond timedwait.
|
13 years ago |
ckolivas
|
df5d196f9a
|
Must unlock mutex if pthread_cond_wait succeeds.
|
13 years ago |
ckolivas
|
fd7b21ed56
|
Use a pthread conditional that is broadcast whenever work restarts are required. Create a generic wait function that waits a specified time on that conditional and returns when either the condition is met or the specified time has elapsed. Use this to do smarter polling in bitforce to abort work, queue more work, and check for results, minimising time spent working needlessly.
|
13 years ago |
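Commit fd7b21ed56 above (together with the two mutex-unlock fixes listed just before it) describes the restart-wait mechanism: a condition variable broadcast on work restart plus a millisecond wait helper. A minimal sketch of that pattern, assuming illustrative names such as notify_work_restart and restart_lock rather than the actual cgminer symbols:

```c
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <sys/time.h>
#include <time.h>

static pthread_mutex_t restart_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t restart_cond = PTHREAD_COND_INITIALIZER;

/* Called by the longpoll/getwork path whenever work must be restarted. */
static void notify_work_restart(void)
{
	pthread_mutex_lock(&restart_lock);
	pthread_cond_broadcast(&restart_cond);
	pthread_mutex_unlock(&restart_lock);
}

/* Wait up to mstime milliseconds for a restart broadcast.  Returns true
 * if the wait timed out, false if it was woken (a spurious wakeup is
 * treated as a wakeup here).  The mutex is unlocked on every exit path,
 * which is what the two unlock fixes earlier in this log address. */
static bool restart_wait(unsigned int mstime)
{
	struct timeval now;
	struct timespec abstime;
	int rc;

	gettimeofday(&now, NULL);
	abstime.tv_sec = now.tv_sec + mstime / 1000;
	abstime.tv_nsec = (now.tv_usec + (mstime % 1000) * 1000) * 1000;
	if (abstime.tv_nsec >= 1000000000) {
		abstime.tv_nsec -= 1000000000;
		abstime.tv_sec++;
	}

	pthread_mutex_lock(&restart_lock);
	rc = pthread_cond_timedwait(&restart_cond, &restart_lock, &abstime);
	pthread_mutex_unlock(&restart_lock);

	return rc == ETIMEDOUT;
}
```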
ckolivas
|
830f2902b9
|
Numerous style police clean ups in cgminer.c
|
13 years ago |
ckolivas
|
1e9421475c
|
Timersub is supported on all build platforms, so do away with the custom timerval_subtract function.
|
13 years ago |
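Commit 1e9421475c above replaces a hand-rolled timeval subtraction with the standard timersub() macro from <sys/time.h>. A minimal sketch of how such a call looks, with illustrative names rather than the actual cgminer code:

```c
#include <sys/time.h>

static double elapsed_seconds(const struct timeval *start)
{
	struct timeval now, diff;

	gettimeofday(&now, NULL);
	timersub(&now, start, &diff);	/* diff = now - start */
	return diff.tv_sec + diff.tv_usec / 1000000.0;
}
```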
Paul Sheppard
|
efaa7398fb
|
Tweak sick/dead logic
(remove pre-computed time calculations)
|
13 years ago |
Paul Sheppard
|
86c8bbe57e
|
Need to run the hashmeter all the time, not just when logging/display is enabled.
|
13 years ago |
Paul Sheppard
|
75a651c13f
|
Revert "Check for submit_stale before checking for work_restart"
Makes no sense to continue working on the old block whether submit_stale is enabled or not.
|
13 years ago |
Paul Sheppard
|
f225392990
|
Add low hash threshold in sick/dead processing
Add check for fd in comms procedures
|
13 years ago |
Con Kolivas
|
3267b534a8
|
Implement rudimentary X-Mining-Hashrate support.
|
13 years ago |
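Commit 3267b534a8 above adds rudimentary X-Mining-Hashrate support, i.e. reporting the miner's hashrate to the pool in an HTTP header on getwork requests. A minimal sketch using libcurl, with illustrative names, assuming the header is simply appended to the request's header list and then passed via CURLOPT_HTTPHEADER:

```c
#include <stdio.h>
#include <curl/curl.h>

/* Append an X-Mining-Hashrate header (hashes per second) to the header
 * list used for a getwork POST. */
static struct curl_slist *add_hashrate_header(struct curl_slist *headers,
					      unsigned long long hashes_per_sec)
{
	char buf[64];

	snprintf(buf, sizeof(buf), "X-Mining-Hashrate: %llu", hashes_per_sec);
	return curl_slist_append(headers, buf);
}
```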
Con Kolivas
|
24316fc7fc
|
Revert "Work is checked if it's stale elsewhere outside of can_roll so there is no need to check it again."
This reverts commit 5ad58f9a5c.
|
13 years ago |
Con Kolivas
|
5ad58f9a5c
|
Work is checked if it's stale elsewhere outside of can_roll so there is no need to check it again.
|
13 years ago |
Con Kolivas
|
eddd02fea1
|
Put an upper bound of under 2 hours on how far into the future work can be rolled, since bitcoind will deem it invalid beyond that.
|
13 years ago |
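Commit eddd02fea1 above bounds ntime rolling so it stays under bitcoind's 2-hour future-timestamp limit. A minimal sketch of such a check, with illustrative names and an assumed safety margin:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stay safely under bitcoind's 2-hour future-timestamp limit. */
#define MAX_NTIME_ROLL	(2 * 60 * 60 - 300)

static bool can_roll_ntime(uint32_t original_ntime, uint32_t rolled_ntime)
{
	return rolled_ntime - original_ntime < MAX_NTIME_ROLL;
}
```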
Con Kolivas
|
bcec5f5102
|
Revert "Check we don't exhaust the entire unsigned 32 bit ntime range when rolling time to cope with extremely high hashrates."
This reverts commit 522f620c89.
Unrealistic; the bitcoind-imposed limit is 2 hours into the future.
|
13 years ago |
Con Kolivas
|
522f620c89
|
Check we don't exhaust the entire unsigned 32 bit ntime range when rolling time to cope with extremely high hashrates.
|
13 years ago |
Kano
|
c21fc06560
|
define API option --api-groups
|
13 years ago |
ckolivas
|
21a23a45d7
|
Work around pools that inappropriately advertise a very low expire= time, as this leads to many false positives for detected stale shares.
|
13 years ago |
Paul Sheppard
|
d3e2b62c54
|
Change sick/dead processing to use device pointer, not gpu array.
Change BFL timing to adjust only when hashing complete (not error/idle etc.).
|
13 years ago |
Con Kolivas
|
68a3a9ad10
|
There is no need for work to be a union in struct workio_cmd
|
13 years ago |
ckolivas
|
b198423d17
|
Don't keep rolling work right up to the expire= cutoff. Use 2/3 of the time between the scantime and the expiry as the cutoff for reusing work.
|
13 years ago |
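Commit b198423d17 above stops reusing work two thirds of the way between the scantime and the pool's advertised expiry, rather than right at the expiry. A minimal sketch of that cutoff calculation, with illustrative names:

```c
/* Stop reusing (rolling) work 2/3 of the way between the scantime and
 * the pool's advertised expiry, instead of right at the expiry. */
static int work_reuse_cutoff(int scantime, int expiry)
{
	return scantime + (expiry - scantime) * 2 / 3;
}
```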
ckolivas
|
6e80b63bb8
|
Revert "Increase the getwork delay factored in to determine if work vs share is stale to avoid too tight timing."
This reverts commit d8de1bbc5b.
Wrong fix.
|
13 years ago |
ckolivas
|
d8de1bbc5b
|
Increase the getwork delay factored in to determine if work vs share is stale to avoid too tight timing.
|
13 years ago |
Paul Sheppard
|
1ef52e0bac
|
Check for submit_stale before checking for work_restart
(to keep Kano happy)
|
13 years ago |
Paul Sheppard
|
90d82aa61d
|
Revert to pre pool merge
|
13 years ago |
Con Kolivas
|
c027492fa4
|
Make the pools array a dynamically allocated array to allow unlimited pools to be added.
|
13 years ago |
Con Kolivas
|
5cf4b7c432
|
Make the devices array a dynamically allocated array of pointers to allow unlimited devices.
|
13 years ago |
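The two commits above (c027492fa4 and 5cf4b7c432) replace fixed-size pool and device arrays with dynamically allocated ones. A minimal sketch of the general pattern, growing an array of pointers with realloc(); names are illustrative, not the actual cgminer symbols:

```c
#include <stdlib.h>

struct pool;

static struct pool **pools;
static int total_pools;

/* Grow the pointer array by one slot and append the new pool; there is
 * no longer a compile-time limit on how many can be added. */
static void add_pool(struct pool *pool)
{
	struct pool **tmp;

	tmp = realloc(pools, sizeof(*pools) * (total_pools + 1));
	if (!tmp)
		abort();	/* real code would report the allocation failure */
	pools = tmp;
	pools[total_pools++] = pool;
}
```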
Con Kolivas
|
17ba2dca63
|
Logic fail on queueing multiple requests at once. Just queue one at a time.
|
13 years ago |
Con Kolivas
|
42ea29ca4e
|
Use a queueing bool set under control_lock to prevent multiple calls to queue_request racing.
|
13 years ago |
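Commit 42ea29ca4e above serialises calls to queue_request with a flag set under control_lock. A minimal sketch of that guard, with illustrative names:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t control_lock = PTHREAD_MUTEX_INITIALIZER;
static bool queueing;

/* Only one caller at a time gets past the flag check; everyone else
 * returns immediately instead of queueing a duplicate request. */
static void queue_request(void)
{
	pthread_mutex_lock(&control_lock);
	if (queueing) {
		pthread_mutex_unlock(&control_lock);
		return;
	}
	queueing = true;
	pthread_mutex_unlock(&control_lock);

	/* ... push one getwork request here; the flag is cleared under
	 * the same lock once the requested work arrives ... */
}
```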
Con Kolivas
|
63dd598e2a
|
Queue multiple requests at once when levels are low.
|
13 years ago |
Con Kolivas
|
757922e4ce
|
Use the work clone flag to determine whether it should be subtracted from the total queued variable, and provide a subtract-queued function to prevent looping over locked code.
|
13 years ago |
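Commit 757922e4ce above introduces a locked helper for decrementing the total-queued counter, with the clone flag deciding whether a given work item is counted. A minimal sketch, with illustrative names; whether clones or originals are the ones subtracted is an assumption here:

```c
#include <pthread.h>
#include <stdbool.h>

struct work {
	bool clone;
	/* ... other fields ... */
};

static pthread_mutex_t qd_lock = PTHREAD_MUTEX_INITIALIZER;
static int total_queued;

/* Locked decrement kept in one place so callers don't repeat the
 * lock/unlock dance around the shared counter. */
static void subtract_queued(void)
{
	pthread_mutex_lock(&qd_lock);
	if (total_queued > 0)
		total_queued--;
	pthread_mutex_unlock(&qd_lock);
}

/* Assumption: clones were never counted as queued, so only subtract
 * non-cloned work items. */
static void dec_queued(const struct work *work)
{
	if (!work->clone)
		subtract_queued();
}
```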
Con Kolivas
|
49dd8fb548
|
Don't decrement staged extras count from longpoll work.
|
13 years ago |
Con Kolivas
|
d93e5f710d
|
Count longpoll's contribution to the queue.
|
13 years ago |
Con Kolivas
|
05bc638d97
|
Increase queued count before pushing message.
|
13 years ago |
Con Kolivas
|
32f5272123
|
Revert "With better bounds on the amount of work cloned, there is no need to age work and ageing it was picking off master work items that could be further rolled."
This reverts commit 5d90c50fc0.
|
13 years ago |
Con Kolivas
|
5d90c50fc0
|
With better bounds on the amount of work cloned, there is no need to age work and ageing it was picking off master work items that could be further rolled.
|
13 years ago |
Con Kolivas
|
47f66405c0
|
Alternatively check staged work count for rolltime capable pools when deciding to queue requests.
|
13 years ago |
Con Kolivas
|
efa9569b66
|
Test we have enough work queued for pools with and without rolltime capability.
|
13 years ago |
Con Kolivas
|
1bbc860a15
|
Don't count longpoll work as a staged extra work.
|
13 years ago |
Con Kolivas
|
ebaa615f6d
|
Count extra cloned work in the total queued count.
|
13 years ago |
Con Kolivas
|
74cd6548a9
|
Use a static base measurement difference of how many items to clone since requests_staged may not climb while rolling.
|
13 years ago |
Con Kolivas
|
7b57df1171
|
Allow 1/3 extra buffer of staged work when ageing it.
|
13 years ago |
Con Kolivas
|
53269a97f3
|
Revert "Simplify the total_queued count to those staged not cloned and remove the locking since it's no longer a critical value."
This reverts commit 9f811c528f.
|
13 years ago |