Mining Burst coins?


funkywizard

mmm.... bandwidth.
Jan 15, 2017
848
402
63
USA
ioflood.com
GTX 1070 devices.txt

These settings were the best I found running 2 copies of gpuplotter simultaneously:

1 0 960 8 8192

Change "1 0" to match your platform and device ID. For multiple GPUs, use the same settings on multiple lines, with a different device ID on each line.
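For example, a devices.txt covering two GPUs on the same platform might look like this (platform 1 with device IDs 0 and 1 is an assumption for illustration; check your actual platform and device IDs):

```
1 0 960 8 8192
1 1 960 8 8192
```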

With 2 gpus, and 2 instances running, each instance could do 45k nonce / minute (90k total on 2 gpus).

With 2 gpus with one instance running on the above settings, I got 60k nonce / minute.

If you intend to use 1 plotter instance, these are not ideal settings. You should shoot for something like this for a single instance:

1 0 1024 16 8192

Running 4 instances at once (again, gtx 1070) I had good results with these devices.txt settings:

1 0 768 6 8192

I couldn't find any combination of settings that worked well for running 3 simultaneous processes with one or more GTX 1070s.

The 1080 Ti can use larger values for the second-to-last field because it has 28 compute units versus 15 in the GTX 1070.

For two copies of the program running at once, the following worked well for a 1080ti:

1 0 1024 12 8192

For three copies of the program running at once on a 1080ti, this worked well:

1 0 1024 8 8192

All 1070 results were with core clock +100 MHz, memory +400 MHz, fan speed at maximum, and power limit 100%. I believe I used the same settings for the 1080 Ti, but I don't remember.

All of the above was tested on a dual E5-2660 v1 system, so the GPUs were limited to PCIe 2.0. You might see better results with v2 CPUs and PCIe 3.0.
 

modder man

Active Member
Jan 19, 2015
657
84
28
33
funkywizard said: (GTX 1070 devices.txt settings, quoted above)


I changed to your config here. I have 4 8TB drives plotting simultaneously on a 1070 Ti. All drives have been at 100% load, and the GPU is hardly seeing any usage at all. So either it's broken and I don't know it yet, or it's working very well. The miner clients all seem to be hung, though this wouldn't be the first time I've seen that. The disks are all writing away, so I'll give it time and see what happens.
 

funkywizard

modder man said: (quoted above)
Running 4 instances at once, I use this for devices.txt:

1 0 768 6 8192

But, keep in mind I have two 1070 cards. A single process has difficulty maxing out two cards, but it is easier to see maximum performance of a single card with a single process. Therefore, increasing beyond 2 processes for 1 card is not a good idea unless your disk i/o is very low compared to your compute.

Also, keep in mind that "direct" mode is very slow (due to i/o) unless you set stagger = nonces. I noticed a large negative performance impact even when using a very fast nvme ssd. If you are writing direct, with nonces > stagger, to regular hard drives, I would expect very poor performance.

If you want stagger < nonces, use buffer mode and optimize the files afterwards. This should make better use of your gpu. Set your stagger to as large as your ram will allow.
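As a rough sketch of sizing stagger from available RAM (the 256 KiB-per-nonce figure is the standard Burst plot nonce size; the exact buffer overhead of your plotter may differ):

```python
# Each Burst nonce occupies 256 KiB (262,144 bytes) in a plot file.
NONCE_SIZE = 256 * 1024

def max_stagger(ram_bytes):
    """Largest stagger (in nonces) that fits in the given RAM budget."""
    return ram_bytes // NONCE_SIZE

# e.g. with 16 GiB set aside for the plot buffer:
print(max_stagger(16 * 1024**3))  # 65536 nonces
```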

In fact, with a single gpu and 4 drives, I would recommend the following:

Create a batch file that does the following:

* run one process at a time
* use a devices.txt with 1 0 1024 16 8192
* write a single large file (100gb - 500gb) to a single drive, use buffer mode, make stagger as big as your ram will allow
* after this one file is written, use an optimizer to take the source file from drive 1, and write the optimized version to drive 2. run this process in the background
* while the optimize task is processing, plot another single large file to drive 3
* when done, optimize that file with a target disk of drive 4
* repeat the above, but write unoptimized files to drives 2 and 4 this time, and write the optimized versions to drives 1 and 3 (so that used disk space is balanced out)
* repeat until you don't have enough free disk space to do this, and finish off the last plot per drive using cpu plotting, or use a smaller spare drive to temporarily store the unoptimized files.
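The drive rotation in the steps above can be sketched as follows. Drive names and the round count are placeholders, and the actual plot/optimize commands are left out since exact CLI syntax varies; this only shows the alternation that keeps used space balanced:

```python
# Sketch of the rotation described above: plot to one drive, optimize
# onto its partner, then swap roles next round so the unoptimized file
# lands on the drive that received the optimized copy last time.
def plot_optimize_schedule(rounds):
    """Yield (plot_drive, optimize_target) pairs for each pass."""
    pairs = [("D1", "D2"), ("D3", "D4")]   # placeholder drive names
    for r in range(rounds):
        for src, dst in pairs:
            if r % 2 == 0:
                yield (src, dst)
            else:
                yield (dst, src)

print(list(plot_optimize_schedule(2)))
# round 0: plot D1 -> optimize to D2, plot D3 -> optimize to D4
# round 1: plot D2 -> optimize to D1, plot D4 -> optimize to D3
```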
 

Joel

Active Member
Jan 30, 2015
856
199
43
42
I think we need a best-known-methods post for Windows / Linux so we can get more people iterating.

I think what we need is...


A Docker image! :)

I'm not quite sure how to provision storage though.
 

funkywizard

I gave CPU plotting a try with xplotter, and the results were pretty good and a little surprising. Xplotter was able to do 30k nonces / minute on a dual e5-2660v1. Also, it is pretty optimal on disk i/o as far as I can tell. The maximum performance I could consistently get out of 2x GTX 1070's was 90k nonces / minute, and it's not easy to keep the gpus fed with data to maintain those speeds.

Setting xplotter to use 32 GB of RAM and telling it to write a 174 GB plot, it computes 16 GB of nonces at a time and writes them to disk each time a 16 GB batch is done. While it is writing this interim data, it continues computing more nonces. To get it to compute all nonces in one run for a 174 GB plot, I had to assign it 360 GB of RAM. However, in that case it didn't write any data until it had finished computing the entire 174 GB plot, so assigning 360 GB of RAM was actually slower than assigning 32 GB. I don't know exactly how far this scales, i.e. at what ratio of RAM to plot size things start to run poorly.

Interestingly, you must set xplotter to allocate twice as much RAM as the size of the plot. However, when it has enough RAM to compute an entire plot in one go, it never uses more RAM than the size of the plot (half of what you assigned it).
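To put rough numbers on the chunking above (256 KiB per nonce is the standard plot nonce size; the 16 GiB batch behavior is as observed, not documented):

```python
NONCE_SIZE = 256 * 1024          # bytes per nonce in a plot file

plot_bytes = 174 * 10**9         # a ~174 GB plot, as above
batch_bytes = 16 * 1024**3       # observed ~16 GiB write batches

nonces = plot_bytes // NONCE_SIZE
batches = -(-plot_bytes // batch_bytes)   # ceiling division
print(nonces)    # 663757 nonces in the plot
print(batches)   # 11 interim write batches
```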

For most people, I think the cpu plotting is going to be more appropriate than gpu plotting. The hassle and cost of setting up multiple-gpu systems, and keeping them well tuned to avoid i/o bottlenecks, is not going to make sense for most people. A small cluster of CPU plotters will perform quite well and be far more flexible and forgiving. As well, with GPU plotting, your choices are to create small optimized plots, create unoptimized plots (with or without optimizing them later), or plot using a server with a massive amount of ram, to create medium-sized optimized plots. It's not reasonable to install 380GB of ram, 2 GPUs and an NVMe SSD in a server just to create 174GB plot files 2-3 times as fast as you could without all those expensive upgrades.

Does anyone know of a linux plotter? That would make it a lot easier for me to repurpose a lot of servers temporarily for plotting. I don't really want to manage a large number of windows servers just to temporarily plot drives.
 

niekbergboer

Active Member
Jun 21, 2016
158
64
28
46
Switzerland
funkywizard said: (quoted above)
The standard plotter is mdcct, and if you're willing to trade IOPS for RAM while plotting, you can also plot at maximum stagger using omdcct. I've looked into gpuPlotGenerator, but you'll likely be IOPS-limited anyway, so GPU doesn't make that much of a difference.
 

funkywizard

niekbergboer said: (quoted above)
Thanks! I've been using gpuPlotGenerator in windows. Somehow overlooked it having a linux version.

I did some tests today in Windows and found that Vega cards are darn fast for plotting, but there seems to be a ceiling on gpuplotgenerator performance tied to single-threaded CPU speed. Unless you run two instances at once, you can't max out even a single Vega (or 2x GTX 1070), even with sufficient disk I/O (the system has dual E5-2660 v1). Using more than one Vega GPU is a waste; even running two instances, the GPUs will sit fairly idle.

That said, in today's test I was I/O-limited as well. A single Intel DC S3700 400GB drive sat at 75% utilization consistently throughout the plotting process, with two instances of gpuplotter and 1 or 2 Vega GPUs assigned. The drive can write at 400 MB/s, so it's no slouch.

I'm looking forward to trying Vega GPU plotting on a system with faster single-threaded CPUs (dual E5-2680 v2), PCIe 3.0, and NVMe storage, to see if that makes any difference.

From my limited testing, ideal vega "devices.txt" settings seemed to be:

1 gpu plotter instance:
2048 64 8192

2 gpu plotter instances:
2048 32 8192

Should be able to do 90k nonce / minute with 1 vega gpu if you can keep the gpu fed with data.
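At that rate, a back-of-the-envelope plot time for a large drive looks like this (the 8 TB drive size and the 256 KiB nonce size are assumptions for illustration):

```python
NONCE_SIZE = 256 * 1024       # bytes per nonce in a plot file
RATE = 90_000                 # nonces per minute (single Vega, as above)

drive_bytes = 8 * 10**12      # an 8 TB drive, for example
nonces = drive_bytes // NONCE_SIZE
minutes = nonces / RATE
print(round(minutes / 60, 1))   # ~5.7 hours to plot the full drive
```

In practice you'd only hit that if the disk and CPU can keep the GPU fed, which is exactly the bottleneck discussed above.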

I'll take a look at omdcct and mdcct, thanks for the tip there.