Containerized Zcash ZEC Mining with nvidia-docker


rniet

New Member
Dec 31, 2017
I tried to understand this but got confused. Where can I find the docker images you're talking about, or do I have to make a Windows image for nvidia-docker?
I have Docker installed on my Windows 10 Pro machine and would like to see a guide on how to get this started.
I installed Hyper-V as instructed by Docker and now I'm not sure how to get an image running in Docker.
I tried the command I found to load the dwarfpool ZEC image, but that ended in an error saying there is no "latest" version available.

PS: I have a Gainward GTX 1070 Phoenix GS which runs pretty cool. The original OC tool has an option to crank the fan up a bit more, which keeps it at a constant 57C under full load.
 

funkywizard

mmm.... bandwidth.
Jan 15, 2017
USA
ioflood.com
I think I might go with a 1070 because when this crypto goes to crap I'll have something to game on. Then again, I don't game much.


Sent from my iPhone using Tapatalk
Yeah, I would say a 1070 is a good gaming card; a 1080 Ti is overkill. I tried Fallout 4 at 4K resolution with all settings maxed, and it stayed at 60fps nearly all the time, with the GPU load far lower than when mining. I would guess around half utilized.

So depending on budget, screen resolution, and so forth, either a 1070 or a 1080 is a good option. Both are good for mining ZEC (similar performance per dollar and per watt), while the 1070 is the better option for mining other altcoins.

Meanwhile, the 1060 is not going to be high end for gaming or mining, but it is good for driving high-resolution displays for desktop use.
 

hifijames

Member
Dec 26, 2017
Knocked this one out this evening since I know we now need crypto mining to go along with our deep learning nvidia cards.

Available Images

zcash.flypool.org
Code:
nvidia-docker run -itd -e username=<insert_t_address> servethehome/zec_flypool_ewbf:cuda
dwarfpool
Code:
nvidia-docker run -itd -e username=<insert_t_address> servethehome/zec_dwarfpool_ewbf:cuda
You can generate a t-address using Kraken or a similar exchange and use that as the username. Note these images use the EWBF 0.3.4b3 miner, which is much faster than nheqminer from what I have seen; EWBF is supposedly optimized for Pascal GPUs.
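For example, here is a quick sketch of launching against flypool and then tailing the miner log to confirm it is finding solutions (the t-address below is only a placeholder; substitute your own):
Code:
# start the flypool EWBF container in the background (placeholder t-address)
nvidia-docker run -itd -e username=t1ExampleAddressReplaceMe servethehome/zec_flypool_ewbf:cuda
# follow the output of the most recently started container
docker logs -f $(docker ps -lq)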

Additional nvidia-docker Launch Options
  • -e templimit=t: replace t with the GPU temperature limit in degrees C. Default is 90C.
  • -e devfee=d: replace d with the devfee % you want to give. Default is 2%.
Both can be combined in one command, as shown below.
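A hypothetical example that caps the GPU at 80C and keeps the default 2% devfee (the values and the t-address are placeholders, not recommendations):
Code:
# placeholder t-address; 80C temperature limit, default 2% devfee
nvidia-docker run -itd -e username=t1ExampleAddressReplaceMe -e templimit=80 -e devfee=2 servethehome/zec_flypool_ewbf:cuda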

ZEC Reference NVIDIA GPU Zcash Benchmarks
A key point here is that ZEC mining seems to earn much more per day than Monero on NVIDIA GPUs.
ZEC Mining Payback and Profitability
Here is a calculator: Mining Calculator Bitcoin, Ethereum, Litecoin, Dash and Monero

One quick observation, on payback periods excluding power consumption:
  • The NVIDIA GTX 1080 8GB reference is about 4.5 months payback.
  • The NVIDIA GTX 1070 is around 3.75 months.
  • The NVIDIA GTX 1060 6GB payback is about 3.5 months given current pricing.
  • The NVIDIA GTX 1050 Ti 4GB has a payback of about 3 months.
If you compare Monero mining on the GTX 1080 and 1070 vs. Zcash at current prices (May 12, 2017), Zcash earns about 3x as much per day. On the GTX 1060 6GB it is about 2x per day. The GTX 1070 looks awesome if you can get a low-cost card.
Thanks @Patrick for making it so easy for us noobs to start mining:)
The container has been running stably for the past 3 days on my 1080 Ti, delivering Sol/s as promised:)
Will this also work on the old GRID M40, or would I be better off getting more modern cards?
Thank you again!
 

Joel

Active Member
Jan 30, 2015
Out of curiosity, what are the dependencies on the OS level? Can this run on Kubernetes/CoreOS/RancherOS?
 

Joel

Active Member
Jan 30, 2015
In case anyone else encounters this issue, I have found that if I use nvidia-docker to start a container it will not show up in Portainer.

If I call it with "docker run --runtime=nvidia" instead, it will.
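For anyone comparing, here are the two launches side by side for the flypool image (same container either way; only the invocation differs):
Code:
# older nvidia-docker wrapper - did not show up in Portainer for me
nvidia-docker run -itd -e username=<insert_t_address> servethehome/zec_flypool_ewbf:cuda
# nvidia-docker2 style - shows up in Portainer
docker run -itd --runtime=nvidia -e username=<insert_t_address> servethehome/zec_flypool_ewbf:cuda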
 

Patrick

Administrator
Staff member
Dec 21, 2010
@hifijames It will.

@Joel - that is an issue because NVIDIA changed nvidia-docker to nvidia-docker2.

Total pain! Part of that change is moving to docker run --runtime=nvidia instead of calling nvidia-docker run.
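For reference, nvidia-docker2 works by registering the NVIDIA runtime with the Docker daemon in /etc/docker/daemon.json, roughly like this (a sketch; exact contents can vary by package version):
Code:
# what the nvidia-docker2 package typically drops into /etc/docker/daemon.json
cat /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}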
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
Code:
CUDA: Device: 0 GeForce GTX 1050 Ti, 4038 MB i:64
CUDA: Device: 1 GeForce GTX 750 Ti, 2000 MB i:64
CUDA: Device: 0 User selected solver: 0
CUDA: Device: 1 User selected solver: 0


Temp: GPU0: 76C GPU1: 70C
GPU0: 169 Sol/s GPU1: 68 Sol/s
Total speed: 237 Sol/s
+-----+-------------+--------------+
| GPU | Power usage |  Efficiency  |
+-----+-------------+--------------+
|  0  |      0W     |  0.00 Sol/W  |
|  1  |     31W     |  2.19 Sol/W  |
+-----+-------------+--------------+


+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 750 Ti  Off  | 00000000:08:00.0 Off |                  N/A |
| 57%   73C    P0    31W /  38W |    543MiB /  2000MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 105...  Off  | 00000000:83:00.0 Off |                  N/A |
| 60%   79C    P0   ERR! /  75W |    561MiB /  4038MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      2480      C   ./miner                                      532MiB |
|    1      2480      C   ./miner                                      551MiB |
+-----------------------------------------------------------------------------+
I just added an NVIDIA 1050 Ti 4GB card to my server and started up the Docker container. Any idea why the power is showing zero for it?
 

poutnik

Member
Apr 3, 2013
My 1050 Ti is also showing zero power, all the time and regardless of the driver version. I suspect it's because the card is powered from the PCIe slot only, with no dedicated power cable. It may lack some sensing hardware or whatever, but I assume the slot-only power is the main reason (otherwise, all the other cards I have seen report some power usage).
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
My 1050 Ti is also showing zero power, all the time and regardless of the driver version. I suspect it's because the card is powered from the PCIe slot only, with no dedicated power cable. It may lack some sensing hardware or whatever, but I assume the slot-only power is the main reason (otherwise, all the other cards I have seen report some power usage).
Funny, my 750 Ti shows power usage and it's PCIe-slot powered only as well.
Oh well, at least I know it's not just me.
 

Joel

Active Member
Jan 30, 2015
Same issue. When I first fired up the image last night I thought the 1050 wasn't working at all, but now I see the hash rates.

147 Sol/s @ a 55W power setting (no idea if this actually does anything). The lowest possible setting is 52.5W for me.
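If it helps, the allowed range can be read straight from nvidia-smi (a sketch; the exact field names differ slightly between driver versions):
Code:
# show min/max/default power limits for GPU 0
nvidia-smi -q -d POWER -i 0 | grep -i "power limit"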
 

nthu9280

Well-Known Member
Feb 3, 2016
San Antonio, TX
Just a power limit, no other tweaks on the clocks. ASUS GTX 1060 6GB OC ROG:
Code:
nvidia-smi -pl 78   # 78W is the minimum allowed on this card
docker run -itd --name zdwfcuda1 --runtime=nvidia -e username=$wlt.gtx1050 servethehome/zec_dwarfpool_ewbf:cuda

Getting about 275 Sol/s @ 3.6+ Sol/W now.
At the default power limit of about 125W it was 315 Sol/s @ 2.6 Sol/W.
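For anyone with more than one card, a small sketch that applies the same cap to every GPU (78W is just this card's minimum; pick per-card values for mixed rigs, and changing limits generally needs root):
Code:
#!/bin/bash
# enable persistence mode so the limit sticks, then cap each detected GPU at 78W (example value)
sudo nvidia-smi -pm 1
for gpu in $(nvidia-smi --query-gpu=index --format=csv,noheader); do
    sudo nvidia-smi -i "$gpu" -pl 78
done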


 

Wader

New Member
Dec 16, 2015
Hi Patrick,
On two computers I start Docker GPU mining with "nvidia-docker run -itd -e username=USERNAME servethehome/zec_flypool_ewbf:cuda", and when I go to zcash.flypool I see two workers identified by their two container names.
I also have three CPU instances running, one on each of the GPU-connected CPUs and one on a separate CPU. I start these with "docker run -itd -e username=USERNAME servethehome/zec_cpu_nheq_flypool". The mining from all three appears to be reflected in a single worker at zcash.flypool called "default", and the hash rate shown for it seems to be the sum of the three CPUs.
Can something be done differently with the CPU containers so they each show up as an individual worker, identified by container ID?
 

Joel

Active Member
Jan 30, 2015
Hi Patrick,
On two computers I start Docker GPU mining with "nvidia-docker run -itd -e username=USERNAME servethehome/zec_flypool_ewbf:cuda", and when I go to zcash.flypool I see two workers identified by their two container names.
I also have three CPU instances running, one on each of the GPU-connected CPUs and one on a separate CPU. I start these with "docker run -itd -e username=USERNAME servethehome/zec_cpu_nheq_flypool". The mining from all three appears to be reflected in a single worker at zcash.flypool called "default", and the hash rate shown for it seems to be the sum of the three CPUs.
Can something be done differently with the CPU containers so they each show up as an individual worker, identified by container ID?
docker run -itd -e username=zcashwallet.workerid
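i.e. give each CPU container its own worker name appended to the wallet address. A sketch, assuming the nheqminer image passes the username string straight through to flypool (the address below is a placeholder):
Code:
# one worker suffix per CPU container so flypool lists them separately
docker run -itd -e username=t1ExampleAddressReplaceMe.cpu1 servethehome/zec_cpu_nheq_flypool
docker run -itd -e username=t1ExampleAddressReplaceMe.cpu2 servethehome/zec_cpu_nheq_flypool
docker run -itd -e username=t1ExampleAddressReplaceMe.cpu3 servethehome/zec_cpu_nheq_flypool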
 

gigatexal

I'm here to learn
Nov 25, 2012
Portland, Oregon
alexandarnarayan.com
NVIDIA GTX Titan fail

Code:
Temp: GPU0: 77C
GPU0: 97 Sol/s
Total speed: 97 Sol/s
+-----+-------------+--------------+
| GPU | Power usage |  Efficiency  |
+-----+-------------+--------------+
|  0  |    197W     |  0.49 Sol/W  |
+-----+-------------+--------------+
 

Joel

Active Member
Jan 30, 2015
Geez, my $120 960s do better than that. Still not as good as a 1050 though.

Which version of Titan is that? Original?
 

Joel

Active Member
Jan 30, 2015
I had a similar experience with a GTX 460 (I was thinking of an RX 460, but I paid $40 for it here on the forums, so no huge loss). I think it did 20 Sol/s @ 100W. It got taken out real quick. :)