Anyone ever thought about using a mining motherboard as a poor man's NVMe storage array?


nutsnax

Active Member
Nov 6, 2014
260
98
28
113
So there are tons of mining motherboards with 10+ PCIe x1 slots out there on the super-cheap, and there are also loads of enterprise-grade 22110 SSDs going just as cheap. Most, if not all, of both are PCIe 3.0 as I recall.

Has anyone experimented with using, say, an Asus B250 Mining Expert with a bunch of slower PCIe 3.0 SSDs (e.g. Samsung PM963 or PM953), a swarm of M.2-to-PCIe x1 adapters, and a 25GbE NIC in the x16 slot? Most of these boards have onboard graphics anyway.

12G SAS drives can get crazy expensive whereas the controllers are cheap; NVMe drives can be cheap, but NVMe expanders/switches would probably be crazy expensive... so an Asus B250 Mining Expert (or similar) with a ton of PM963s might actually be interesting. I think they sell retaining clips that should keep the M.2 sticks secured in their adapter slots (maybe).
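
Quick napkin math on where the bottleneck would sit (all assumed numbers, not measurements: ~985 MB/s usable per PCIe 3.0 x1 link, ~3100 MB/s for a 25GbE NIC, 12 x1 slots populated; adjust for whatever board you pick):

```python
# Napkin math for the mining-board idea. Assumed numbers, not measurements.
PCIE3_X1_MBPS = 985      # usable bandwidth per PCIe 3.0 x1 link
NIC_25GBE_MBPS = 3100    # ~25 Gb/s expressed in MB/s
DRIVES = 12              # assumed number of populated x1 slots

aggregate = PCIE3_X1_MBPS * DRIVES
print(f"Aggregate drive bandwidth: {aggregate / 1000:.1f} GB/s")
print(f"25GbE NIC ceiling:         {NIC_25GBE_MBPS / 1000:.1f} GB/s")
print(f"NIC is the bottleneck: {NIC_25GBE_MBPS < aggregate}")
```

Even at x1 per drive, the NIC would saturate long before the drives do on sequential work, so the narrow links shouldn't hurt much on network-bound workloads.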

Thoughts? Or is this completely stupid?
 

oneplane

Well-Known Member
Jul 23, 2021
845
484
63
I did, but I ended up using SATA SSDs instead, either in a bunch of mirrors or in raidz3 with a wide enough spread that wear isn't much of an issue. That's even cheaper, and since you end up with aggregate performance anyway, the difference between x1 PCIe NVMe and 16 SATA drives hanging off a x4 link is not all that exciting. I've even played around with consumer SSDs, and even those work perfectly fine (with PLP caps) and don't degrade nearly as fast as everyone always seems to think. Some of the redundant arrays are at 80%+ health after multiple years of operation (bulk MinIO-based usage patterns, and some VMs).
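
To put numbers on the "not that exciting" part (all ballpark assumptions: ~550 MB/s per SATA SSD, ~985 MB/s per x1 NVMe, the SATA drives behind an HBA in a PCIe 3.0 x4 slot, a 25GbE front end):

```python
# Why the SATA pool and the x1-NVMe pool look similar over the network:
# the HBA uplink and the NIC cap you first. Ballpark assumptions only.
SATA_MBPS, X1_NVME_MBPS = 550, 985
HBA_X4_UPLINK_MBPS, NIC_MBPS = 3940, 3100
drives = 16

sata_agg = min(drives * SATA_MBPS, HBA_X4_UPLINK_MBPS)  # capped by the x4 HBA slot
nvme_agg = drives * X1_NVME_MBPS                        # each drive on its own x1
print(f"SATA pool over the network:    ~{min(sata_agg, NIC_MBPS) / 1000:.1f} GB/s")
print(f"x1 NVMe pool over the network: ~{min(nvme_agg, NIC_MBPS) / 1000:.1f} GB/s")
```

Either way the network link is what clients actually see.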
 
  • Like
Reactions: mach3.2 and gb00s

CyklonDX

Well-Known Member
Nov 8, 2022
848
279
63
While I haven't tried it, those mobos usually aren't meant for high throughput, so with NVMe drives you'd likely run into signal-integrity issues if you populated them all and then pushed data through them. (Maybe with PLX PCIe cards it would be better.)

A much better option would be a server board (ideally in an actual server). In a typical 1U box you could fit 4-6 quad-NVMe PCIe cards (so either 16 or 24 NVMe drives).

Example (just an example): [photo attachment]
(A 2U box would have even more room to mount devices; you would just have to buy risers that give you the slots you need.) [photo attachment]
If you use x16 PLX cards carrying 4 NVMe drives each, this box would let you fit around 8 of those cards plus your 25GbE NIC.
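
Rough lane budget to see where PLX starts to matter (assuming a dual E5 v3/v4 platform with 40 CPU lanes per socket; your actual board will differ):

```python
# Rough PCIe lane budget for the server-board route. Assumed platform:
# dual Xeon E5 v3/v4, 40 CPU lanes per socket.
LANES_PER_CPU, SOCKETS = 40, 2
total_lanes = LANES_PER_CPU * SOCKETS

quad_cards = 6            # plain bifurcated x16 cards, 4 NVMe each
nic_lanes = 8             # 25GbE NIC in an x8 slot
needed = quad_cards * 16 + nic_lanes
print(f"{quad_cards} quad cards + NIC want {needed} of {total_lanes} CPU lanes")
# 6*16 + 8 = 104 > 80, so without a PLX switch you top out around 4 full-width
# quad cards (4*16 + 8 = 72); PLX cards let you oversubscribe past that.
```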
 

nutsnax

Active Member
Nov 6, 2014
260
98
28
113
While I haven't tried it, those mobos usually aren't meant for high throughput, so with NVMe drives you'd likely run into signal-integrity issues if you populated them all and then pushed data through them. (Maybe with PLX PCIe cards it would be better.)

A much better option would be a server board (ideally in an actual server). In a typical 1U box you could fit 4-6 quad-NVMe PCIe cards (so either 16 or 24 NVMe drives).

Example (just an example): [photo attachment]
(A 2U box would have even more room to mount devices; you would just have to buy risers that give you the slots you need.) [photo attachment]
If you use x16 PLX cards carrying 4 NVMe drives each, this box would let you fit around 8 of those cards plus your 25GbE NIC.
While that looks awesome and I like it a lot, it also looks expensive :) Also PLX probably means lots of money.

The idea here would be to spend as little as possible on the platform and more on the drives.

Though thinking about it, I think an old LGA 2011 Xeon board might actually be better....
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
X10DRX, or if you want to go older, the X9DRX.
An STH member has an X10DRX for sale in the for-sale section right now, and I'm tempted to sell my last 2 (I have 4U and 3U chassis for them specifically too)... but like this use case, there are so many cool uses for lots of PCIe slots ;)

SATA SSD isn't really that exciting; any board can do that with an HBA + expander backplane, or a SAS expander + fanouts, etc...
 

unwind-protect

Active Member
Mar 7, 2016
418
156
43
Boston
I find the proposal in the OP attractive, although I do share the stability concerns if you actually hammer all these slots at the same time.
 

CyklonDX

Well-Known Member
Nov 8, 2022
848
279
63
While that looks awesome and I like it a lot, it also looks expensive :) Also PLX probably means lots of money.

The idea here would be to spend as little as possible on the platform and more on the drives.

Though thinking about it, I think an old LGA 2011 Xeon board might actually be better....
Well, you can go as far as x4x4 without PLX on those boards for a single x16/x8 port, and the x16 slots can likely go x4x4x4x4.
Even without PLX, if you do that you'll still come close to $1k USD just for the NVMe side (if the NVMe drives are ~$100 each, and the quad-NVMe PCIe cards are around $80).
 

nutsnax

Active Member
Nov 6, 2014
260
98
28
113
Well, you can go as far as x4x4 without PLX on those boards for a single x16/x8 port, and the x16 slots can likely go x4x4x4x4.
Even without PLX, if you do that you'll still come close to $1k USD just for the NVMe side (if the NVMe drives are ~$100 each, and the quad-NVMe PCIe cards are around $80).
If the board can bifurcate, I'm using cards that cost me $20 each, and they're rock solid.

I think the LGA 2011 motherboard might be somewhat more appealing, except probably in the power consumption department. I imagine an LGA 2011 Xeon v2 board with the lowest-power Ivy Bridge you can find would still use a good deal more power than one of those mining boards. But then you also get IPMI with the Ivy Bridge board.

But an Ivy Bridge LGA 2011 build filled with cheap PM983s or PM963s and a 25GbE+ NIC would be really cool.
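
For a rough sense of what that power difference costs over a year (idle wattages and electricity price here are pure assumptions):

```python
# Back-of-the-envelope on the idle-power penalty of an LGA 2011 Xeon build
# vs a mining board. All numbers are assumptions, not measurements.
mining_idle_w, xeon_idle_w = 35, 80
kwh_price = 0.15                               # $/kWh, assumed
extra_kwh = (xeon_idle_w - mining_idle_w) * 24 * 365 / 1000
print(f"Extra energy: ~{extra_kwh:.0f} kWh/yr "
      f"(~${extra_kwh * kwh_price:.0f}/yr at ${kwh_price}/kWh)")
```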
 

CyklonDX

Well-Known Member
Nov 8, 2022
848
279
63
Well, the thing is, you need threads and CPU power to drive writes to all those NVMe drives.

The slower the CPU and memory, the less you'll get even in the networking department. You want as many cores as you can get in this case.
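
A crude way to size that (every number below is an assumption, so treat it as an order-of-magnitude check, not a benchmark):

```python
# Very rough CPU sizing for driving many NVMe drives at once. Per-core and
# per-drive IOPS figures are assumptions and vary wildly with filesystem,
# checksumming, sync writes, and queue depth.
iops_per_core = 200_000                 # assumed 4k IOPS one core can push
drives, iops_per_drive = 10, 400_000    # assumed drive count and per-drive IOPS
cores_needed = drives * iops_per_drive / iops_per_core
print(f"~{cores_needed:.0f} cores to keep {drives} drives busy at 4k random I/O")
```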
 
  • Like
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
Well, the thing is, you need threads and CPU power to drive writes to all those NVMe drives.

The slower the CPU and memory, the less you'll get even in the networking department. You want as many cores as you can get in this case.
Ya, back when we were all testing NVMe 5+ years ago, a dual E5 could hit HIGH CPU load with numerous NVMe drives, let alone 10+ running at once ;)
The file system plays a big part too.
 

Markess

Well-Known Member
May 19, 2018
1,162
780
113
Northern California
If the board can bifurcate, I'm using cards that cost me $20 each, and they're rock solid.

I think the LGA 2011 motherboard might be somewhat more appealing, except probably in the power consumption department. I imagine an LGA 2011 Xeon v2 board with the lowest-power Ivy Bridge you can find would still use a good deal more power than one of those mining boards. But then you also get IPMI with the Ivy Bridge board.

But an Ivy Bridge LGA 2011 build filled with cheap PM983s or PM963s and a 25GbE+ NIC would be really cool.
For LGA 2011 with E5-26xx, when it comes to power consumption the generation matters more than the CPU model. For most people, a home system is going to be idle most of the time, and at idle pretty much all E5s of a given generation have very similar consumption. Within a generation there's a small difference between the lower-core-count processors, which use a smaller die, and the higher-core-count processors, which use a larger die, but it's minimal.

The more noticeable difference is between the older generations (C60x + E5-26xx v1/v2) and the later ones (C61x + E5-26xx v3/v4). So, IMHO, using "the lowest power Ivy Bridge you can find" will still take a lot more power than a 10c/20t "small die" Broadwell chip, which will also have at least twice the performance. Broadwell is old enough now that "low end" 10-core chips aren't super expensive anymore.

An example:
My current NAS is LGA 2011-3 (Fujitsu D3348-B23v2) with an E5-2630v4. It's a workstation-oriented motherboard, so no IPMI and only one slot supporting bifurcation. But with an Asus Hyper M.2 x16 (or similar) card in that slot, it can still take 7 NVMe drives on PCIe 3.0 x4 connections (or 6 drives if an x16 slot is left open for a high-speed NIC). There are also some PCIe 2.0 slots via the PCH that might work for a 10G NIC etc., plus the 10 SATA ports typical of LGA 2011-3. So, not quite as much capacity as an X10DRX, but with the advantage of costing less and being a standard ATX form factor.

My current config has 6 NVMe and 6 SATA drives on TrueNAS Core, 64GB of RAM, and 1G networking. Idle is ~30 watts, which isn't outrageous at all.
 

Squash_Bugs

New Member
Dec 8, 2023
1
0
1
Depends on what you need, but Lenovo P520C MicroATX motherboards are getting inexpensive. The socket is LGA 2066 (Skylake Xeon), with 48 lanes from the processor: 2 NVMe sockets, two x16 slots with bifurcation (cheap 4x NVMe adapters: https://www.aliexpress.us/item/3256805807880096.html), and an x8; not sure about bifurcation on the x8. The processor I used is a W-2135 (6 cores / 12 threads), but there are Mac Pro 8-core chips available at lower clocks. I used an adapter from a standard 24-pin power supply to the connector Lenovo uses, $1.82.

So that's 10 NVMe drives for ~$30 in adapters, with an x8 still free (and an x4 from the chipset). Or you can sometimes get the Dell 4x NVMe adapters. Too bad the fire-sale prices on 1TB and 2TB NVMe drives are winding down.
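
The lane math checks out; this is just a re-tally of the numbers above:

```python
# Lane accounting for the 48-lane LGA 2066 setup described above.
CPU_LANES = 48
onboard_m2 = 2 * 4           # two onboard M.2 sockets, x4 each
bifurcated = 2 * 16          # two x16 slots split x4x4x4x4 -> 8 more drives
drives = 2 + 8
used = onboard_m2 + bifurcated
print(f"{drives} NVMe drives on {used} lanes, {CPU_LANES - used} left for the x8 slot")
```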
 

zachj

Active Member
Apr 17, 2019
159
104
43
Is there a Sandy Bridge/Ivy Bridge motherboard that supports bifurcation? I wouldn't be surprised if there is one, but in general I don't think bifurcation became ubiquitous prior to the v3/v4 generation.

For the price difference, I think Haswell or Broadwell is a much better buy unless you're specifically interested in 64GB DDR3 LRDIMMs (which are cheaper than 64GB DDR4 DIMMs).
 

Markess

Well-Known Member
May 19, 2018
1,162
780
113
Northern California
Is there a Sandy Bridge/Ivy Bridge motherboard that supports bifurcation? I wouldn't be surprised if there is one, but in general I don't think bifurcation became ubiquitous prior to the v3/v4 generation.

For the price difference, I think Haswell or Broadwell is a much better buy unless you're specifically interested in 64GB DDR3 LRDIMMs (which are cheaper than 64GB DDR4 DIMMs).
There are a few that do, probably needing a newer/newest BIOS version in some cases. For Supermicro, the X9DRD-iF supports bifurcation with the latest BIOS, for example, but some other Supermicro X9 boards won't. For anyone wanting bifurcation on a Sandy/Ivy Bridge platform (regardless of brand), it's best to google a bit before buying to make sure the board they're interested in will work.

But I have to agree with @zachj: in most cases, Haswell/Broadwell is probably a better choice.
 
  • Like
Reactions: T_Minus