FreeNAS build with a rack chassis


wajces

New Member
Dec 10, 2019
I only recently turned to FreeNAS for an offline backup machine, and I'm very happy with how it is working. This PC's sole purpose is to periodically turn on and back up my primary NAS. It is currently built from desktop components with 14 HDDs, and to expand on this I'll need to look into larger cases such as a Supermicro chassis.
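Conceptually the job is just pulling the primary pool's data across periodically. If you scripted a ZFS snapshot pull by hand it would look roughly like the sketch below (the host, pool and dataset names are placeholders, not my real ones, and the built-in replication tasks do the equivalent):

import subprocess

# Hypothetical names - substitute your own primary NAS host and datasets.
PRIMARY = "root@primary-nas"
SNAPSHOT = "tank/data@auto-latest"   # snapshot on the primary to replicate
DEST = "backup/data"                 # local dataset on this backup box

# zfs send on the primary, piped over SSH into zfs recv locally.
send = subprocess.Popen(
    ["ssh", PRIMARY, "zfs", "send", "-R", SNAPSHOT],
    stdout=subprocess.PIPE,
)
subprocess.run(["zfs", "recv", "-F", DEST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()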


I'm looking to rebuild this PC, starting off with a more suitable chassis. Current specs are below; it's largely thrown together from old components:
*FreeNAS
*Dual-core Celeron G3930, 24GB DDR4, ASUS Z270
*8x 8TB in RAIDZ2
*6x 6TB in RAIDZ1
*Adaptec 16-port 71605 (was running hardware RAID earlier; drives are now passed through)
*10Gbit QSFP ConnectX-2 card (40Gbit with InfiniBand, though I'm stuck with Ethernet inside FreeNAS)

The dual core is nearly maxed out writing to the 8x 8TB RAIDZ2, and that array gets around 800MB/s.
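That's roughly what napkin math predicts (assuming ~130-140MB/s sequential per 8TB drive): 8 drives minus 2 parity leaves 6 data drives, and 6 x ~135MB/s ≈ 800MB/s, so the pool is performing about as well as the spindles allow.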


I'm not sure where to start on a larger, more appropriate build. It will need to back up my primary NAS, which is currently good for a realized ~125TiB, and I'm looking for the best value-per-dollar approach.

I was looking at a chassis like this 36-bay CSE-847E16-R1400UB. Is this the largest (affordable) chassis that can hold a motherboard? It carries more bays than the 24-bay version, and the 45-bay version looks like it is for HDDs only. I live in Australia, and shipping is looking pricey at $400 USD, so I'm not sure what my alternatives are.
Supermicro 847E16-R1400UB BAREBONE 4U Server SAS2 Exp 6Gps 36x TRAYS Rail Kit | eBay

Will desktop ATX parts work in the Supermicro chassis? I have an old X99/5820K CPU that I could retire into this proposed PC, unless I would be better off selling it and purchasing a used Xeon and board. AM4 desktop gear also makes for a cheap alternative, with the bonus that the components are swappable with the other PCs I have running.

Ideally, I would like to swap to a 40Gbit ConnectX-3 Ethernet card and have a CPU capable of maxing out that ~5GB/s of bandwidth.

The Adaptec RAID card runs well, though getting an HBA with no BIOS could make the system simpler. Do the common LSI cards have a BIOS they have to POST through?


To summarize the above, I'm looking for recommendations on:
*a large case in the 24-bay+ range
*a motherboard and CPU(s) capable of pushing 5GB/s of throughput (would my 5820K be viable?)
*space/slots for a 40Gbit Ethernet card
*a more appropriate HBA for the drives

Sorry if this post was better suited to the chassis-related forum. Thanks!
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
comments:

Front Panel connection:

Using a non-Supermicro motherboard in an SM case is certainly doable. Note the front panel connection on the SM case is a ribbon cable with very specific pin-outs. You'd either have to make your own breakout or purchase a pre-made breakout cable. The link is for reference so you know what I am talking about; I'm not endorsing this specific breakout cable.

2U motherboard tray vs. 4U case.

The 847 has a 2U motherboard tray; the lower half of the 4U case is used for the drives on the backside.

PCIe riser

With that specific case you'll need to figure out the PCIe riser situation depending on the motherboard you have. Here's one (US, but the part number may be handy) that has vertical slots but all HH (half-height). The slot configuration in the chassis is going to dictate what cards, HBAs, etc. will fit.

CPU heatsink: needs to fit in a 2U chassis. The 847 can take (or may come with) an SM-manufactured air duct, though in a 2U you may not need it, and with the fans in there it is possible to use a passive heatsink. YMMV.

CPU performance
I think the 5820K would be plenty.

40GbE

I suspect you will not be able to push 40Gbps with writes; think 100MB/s per drive with spinners. But the NICs are cheap, as are the QSFP cables. Are you looking at a private (p2p) connection between your primary NAS and this backup system? Also, nothing wrong with playing with 40GbE - just observing that you may not get that level of performance; you may actually have a tough time pushing 10GbE with primarily writes. Depending on what you want, you can get 40GbE ConnectX-3 cards (may have to reflash them though) and a QSFP cable for < $80 US.
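Rough numbers: the ~800MB/s you're already seeing on the 8x 8TB RAIDZ2 works out to about 6.4Gbps, so sustained writes at that rate don't even fill a 10GbE link, and 40Gbps would need over six times that much disk throughput.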

additional:

HBA and SAS Expanders

That linked 847 will come with expanders for the drive bays. You can actually chain them all together and go with a single HBA. I'm pretty sure that with spinners you won't see a significant performance improvement from connecting each expander to its own HBA. Mixing SSDs in there, though, may be a different story.
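Rough bandwidth check: a single x4 SAS2 uplink from the HBA to an expander is 4 x 6Gbps = 24Gbps raw, roughly 2.2GB/s usable after encoding overhead - comfortably above the ~1.4GB/s that 14 spinners at ~100MB/s each would push, so one HBA is not the bottleneck for a write-mostly backup load.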

Motherboard
I think this is going to depend greatly on the slot/bracket orientation of the version of the case you purchase. Also, depending on the specific chassis, you may find it easier to just get an SM Xeon board like an X9 E3-12xx board - in the US probably < $100 ... though ECC UDIMM memory is still a bit pricey and can be hard to find. You can always start with what you have and move from there, assuming it works in the chassis?

Noise
The 847 is going to be loud even with SQ power supplies. Just making sure you realize this.
 

wajces

New Member
Dec 10, 2019
You've covered all my queries, thanks!

40GbE

I suspect you will not be able to push 40Gbps with writes; think 100MB/s per drive with spinners. But the NICs are cheap, as are the QSFP cables. Are you looking at a private (p2p) connection between your primary NAS and this backup system? Also, nothing wrong with playing with 40GbE - just observing that you may not get that level of performance; you may actually have a tough time pushing 10GbE with primarily writes. Depending on what you want, you can get 40GbE ConnectX-3 cards (may have to reflash them though) and a QSFP cable for < $80 US.
Understood. I picked up a pair of ConnectX-2 QSFP cards to play with, as this was my first experience with aftermarket NICs. Now they are sitting in my primary and backup NAS with a direct connection. Since FreeNAS only 'supports' the 10GbE Ethernet mode, I may look into replacing them with the ConnectX-3 versions so I can jump back to 40GbE.

Planning for 40GbE worth of bandwidth is just thinking ahead to future disk expansion and the life cycle of this system. The two NAS will be bouncing the same data back and forth a lot: updating the backup, then running a hash or bit comparison to confirm the sync. And when I need to expand an array on either NAS, the expanded array will be fresh and I'll need to sync a full array's worth of data back (e.g. 100TiB of sustained reads/writes). If I have a 24+ disk spinner array, hopefully I'll be getting near the 40GbE mark.
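Napkin math on that (assuming ~180-200MB/s sequential per modern large spinner and ignoring parity/CPU overhead): 24 drives x ~200MB/s ≈ 4.8GB/s ≈ 38Gbps, so a pool that wide could in theory approach 40GbE under ideal sequential conditions. It also shows why the link matters for a full resync: ~100TiB moves in roughly 6-7 hours at ~4.8GB/s, versus a day or more at 10GbE line rate (~1.25GB/s).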

Noise
The 847 is going to be loud even with SQ power supplies. Just making sure you realize this.
I've seen YouTube videos of these cases being loud in general, but hadn't thought about how to approach this, nor about airflow/temps. I have run HP server power supplies in the past and seen how loud the 40mm fans get after hitting about half their rated power draw. I won't need the redundant power supply. Can any of the SM cases, even the 24-bay (846?), be retrofitted with a more typical ATX PSU?
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
You've covered all my queries, thanks!



Understood. I picked up a pair of ConnectX-2 QSFP cards to play with, as this was my first experience with aftermarket NICs. Now they are sitting in my primary and backup NAS with a direct connection. Since FreeNAS only 'supports' the 10GbE Ethernet mode, I may look into replacing them with the ConnectX-3 versions so I can jump back to 40GbE.

...

I've seen YouTube videos of these cases being loud in general, but hadn't thought about how to approach this, nor about airflow/temps. I have run HP server power supplies in the past and seen how loud the 40mm fans get after hitting about half their rated power draw. I won't need the redundant power supply. Can any of the SM cases, even the 24-bay (846?), be retrofitted with a more typical ATX PSU?
On your 40GbE - have you seen this thread?

You can run one SM PSU in that chassis; it will cut noise a little, but there are a lot of internal fans (I think all 80mm) in that chassis, as it is similar to the CSE417 I have. Also, you are talking about a lot of LFF drives. You do want the headroom in the power supply in case a drive crowbars and temporarily spikes its draw.
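Rough spin-up math (assuming ~25-30W per 3.5" drive during spin-up): 36 drives starting together can pull close to 1kW before the board, CPU and fans draw anything, so the 1400W rating is less excessive than it looks. Staggered spin-up helps, but the headroom is still worth having.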

I'm sure someone has tried, and maybe succeeded, using a standard ATX power supply in an SM case, but I have no experience there.

For large-capacity LFF drives in 4U you may also want to search the forums here and eBay for Chenbro alongside Supermicro. I have seen good things about that chassis over in the FreeNAS forums @ iXSystems regarding noise, and it has redundant power supplies that are not loud. I think Chris Moore is running one. That said, I have no idea how available they are in Oz.
 

wajces

New Member
Dec 10, 2019
Cheers! I'll do some searching on the Chenbro cases.

On your 40GbE - have you seen this thread?
From my understanding, the ConnectX-3 comes in a 10GbE Ethernet model such as the QCBT and a 40GbE model such as the FCBT, which can be cross-flashed as per that thread. My cards are the previous-generation ConnectX-2 model (10Gbit Ethernet / 40Gbit InfiniBand), so I don't think they can be flashed to the newer 3rd generation? (40Gbit Ethernet / 56Gbit InfiniBand)
 

BeTeP

Well-Known Member
Mar 23, 2019
The Q in QCBT stands for QDR (40Gbps), and the F in FCBT stands for FDR (56Gbps).
 