Mini/Mid Tower Chassis Recommendation: 16 x 2.5" SATA/SAS2


ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
This will be for a virtualization build sitting next to my workstation in my office, so rackmount is not ideal.

Needs:
  • Minimum mATX support, ATX is fine
  • Somewhat compact volume
  • 16 x 2.5" SATA/SAS2 backplane (I'm fine with 5.25" bay adapters)
I'm likely going to be running SATA SSDs here, with an aggregate capacity of about 12 TB, using Samsung SM863 960GB drives or similar. SATA SSDs, even in a mirrored pool, won't hit 10 Gbps (though close).

16 x SM 863 is around $2,400, or would it make more sense to go with consumer 2TB NVMe for around 8 x 2TB for $2,000?
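
A rough way to compare the two options on cost per usable TB, using the figures above (the pool layouts in this sketch, striped mirrors vs. a single RAID-Z2, are illustrative assumptions, not a decision):

```python
# Rough cost-per-usable-TB comparison using the figures quoted above.
# Layouts are illustrative; ZFS overhead and TB/TiB are ignored.

def usable_tb(drive_tb, n_drives, layout):
    if layout == "mirror":       # striped 2-way mirrors
        return drive_tb * n_drives / 2
    if layout == "raidz2":       # one RAID-Z2 vdev, two parity drives
        return drive_tb * (n_drives - 2)
    raise ValueError(layout)

options = [
    # (label, drive TB, count, rough total cost in USD)
    ("16 x SM863 960GB (SATA)", 0.96, 16, 2400),
    ("8 x 2TB consumer NVMe",   2.0,   8, 2000),
]

for label, tb, n, cost in options:
    for layout in ("mirror", "raidz2"):
        u = usable_tb(tb, n, layout)
        print(f"{label:26s} {layout:7s} {u:5.1f} TB usable, ${cost / u:,.0f}/TB")
```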

My ideal chassis is the Dell T320 with the Dell 2.5" brackets instead of the standard 3.5" brackets. There's also the Dell T340 now, which doesn't support the SFF brackets AFAIK. There's also the issue of not being able to find definitive information on whether a Dell T320 chassis supports standard mATX mounting positions; it likely doesn't.
 

ullbeking

Active Member
Jul 28, 2017
506
70
28
45
London
16 x SM 863 is around $2,400, or would it make more sense to go with consumer 2TB NVMe for around 8 x 2TB for $2,000?
It is very interesting how people seem to have forgotten what RAID stands for.

The "I" is for "inexpensive".

Unless you have enterprise requirements, I would purchase decent, new SSDs, which will also come with a warranty.

If the data is very important then consider 3x redundancy in mirroring. I have done this in particular situations.

Please keep us updated!

Edit: I just realized that you want to install 8x NVMe drives. It's a reasonable thing to want to do, but in reality it's usually a lot more complicated than you expect. It's also very expensive. On most systems the most U.2 NVMe drives you'll be able to fit via the native hardware is 2, or maybe 4.
 

croakz

Active Member
Nov 8, 2015
176
59
28
Seattle, WA

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
@ullbeking Well, the "I" could also mean "independent," but that's an argument about semantics :D

I even have a couple of spare Fractal Design R5s, NIB, but they're a bit big for what I want to do. It's a shame Fractal never made a mini version of the R5/R6 like they did with the R4 and below.

@croakz If I went the NVMe route, I'd probably use two quad carriers with consumer M.2 drives, each in an x16 slot. I'd like to use a Ryzen 3900X or 3950X, so then we run into the issue of PCIe lane configuration.

For SATA, in my workstation, I agree that I'd rather go with brand new prosumer SATA SSDs. For a server though, since the bottleneck is at the interface, I'd rather pick up server pull drives that have a high endurance rating.

For NVMe U.2, a controller isn't needed as long as the motherboard supports bifurcation on the slot. If bifurcation is supported, a simple x16 to quad U.2 adapter is all that's needed.
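
As a sketch of where the lanes go, assuming a mainstream AM4 Ryzen 3900X/3950X (24 usable PCIe 4.0 lanes from the CPU: x16 for the graphics slots, x4 for one M.2 socket, x4 to the chipset) and passive quad M.2/U.2 carriers; the slot splits and onboard M.2 count below are board-specific assumptions:

```python
# How many CPU-attached NVMe drives fit, given slot width and bifurcation.
# A passive quad carrier needs the slot split into x4 groups; without
# bifurcation (and without a PLX switch) only the first drive is seen.

def drives_on_passive_carrier(slot_lanes, bifurcation_ok):
    if not bifurcation_ok:
        return 1
    return min(4, slot_lanes // 4)

configs = {
    "one x16 slot as x4/x4/x4/x4": [(16, True)],
    "two x8 slots, x4/x4 each":    [(8, True), (8, True)],
    "x16 slot, no bifurcation":    [(16, False)],
}
onboard_m2 = 1   # the CPU-attached M.2 socket (assumption)

for name, slots in configs.items():
    on_carriers = sum(drives_on_passive_carrier(l, b) for l, b in slots)
    print(f"{name:28s} -> {on_carriers + onboard_m2} CPU-attached NVMe drives")
```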

I've looked into the Icy Dock brackets, and while I own some of their products and like them, the 16x bracket costs $300-350 which is getting into silly territory.

I really want to use a mini/mid tower chassis, but increasingly it seems that a SM CSE-216 may make more sense unless there are any suggestions. My Google-fu is exhausted for today! :p
 

ullbeking

Active Member
Jul 28, 2017
506
70
28
45
London
increasingly it seems that a SM CSE-216 may make more sense unless there are any suggestions. My Google-fu is exhausted for today! :p
I would LOVE to find a use for a chassis in the SM CSE-216 series because I think they are super cool in theory, but I simply can't see how I could feasibly populate it with 24x 2.5" drives. Ideally I'd like to use inexpensive, high-capacity SSDs and RAID1 or RAID10. The redundancy factor would be 2x or 3x (I haven't decided yet).

Does anybody have any suggestions?
 

ullbeking

Active Member
Jul 28, 2017
506
70
28
45
London
For NVMe U.2, a controller isn't needed as long as the motherboard supports bifurcation on the slot. If bifurcation is supported, a simple x16 to quad U.2 adapter is all that's needed.
All-NVMe U.2 is going to be very expensive.

If bifurcation is not supported then you will need a quad adapter with a PLX switch, i.e. something like this: AOC-SLG3-4E2P | Add-on Cards | Accessories | Products - Super Micro Computer, Inc. This also has the advantage of only requiring an x8 PCIe slot.

Edit: You might also like to research this: Stornado - All SSD Storage Server
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
I am not sure about your data security, sequential, and random performance demands. From a data security point of view, you should use SSDs (or NVMe) with power-loss protection, especially in a RAID. The Samsung SM863 is OK; consumer SSDs or NVMe are not.

Next is sequential performance. A 6G SATA SSD is at around 500 MB/s, a 12G SAS SSD is at 1 GB/s, and an NVMe drive can go up to 2 GB/s and more. If you build a RAID, performance scales with the number of data disks. So a RAID-0 of two SSDs (or a RAID-10 of four SSDs) is enough to saturate 10G sequentially. A massive RAID of many disks outperforms 10G sequentially in any case.
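
A back-of-the-envelope check of those sequential numbers, taking 10GbE as roughly 1 GB/s of real-world payload (a rough rule of thumb, not a benchmark):

```python
# How many data disks of each type it takes to saturate ~1 GB/s (10GbE),
# using the rough per-disk figures quoted in this post.
import math

ten_gbe_payload_gbs = 1.0   # GB/s, rough rule of thumb

per_disk_gbs = {"SATA 6G SSD": 0.5, "SAS 12G SSD": 1.0, "NVMe": 2.0}

for kind, gbs in per_disk_gbs.items():
    need = math.ceil(ten_gbe_payload_gbs / gbs)
    print(f"{kind:12s}: ~{need} data disk(s) to saturate 10GbE sequentially")
```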

If you look at random performance, desktop SSDs under a steady write load can drop to a few thousand 4k IOPS per SSD. Enterprise SSDs like the SM863 are maybe at a 30-40k IOPS level. NVMe like an Intel Optane (simply the best) can go up to 500k IOPS. In a RAID, IOPS equal those of a single disk for RAID 5/6/Z1-3. With mirrors or multiple vdevs in ZFS, IOPS scale with the number of vdevs. So depending on your needs, you may be OK with SSDs.
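
And a sketch of how the random IOPS scale with pool layout, using the rough per-device figures above (the 35k number for an SM863-class SSD is an assumed midpoint):

```python
# RAID-5/6/Z behaves like a single device per vdev; mirrors scale with
# the number of vdevs (one vdev per mirror pair).

def pool_iops(per_device_iops, n_devices, layout, n_vdevs=1):
    if layout == "raidz":
        return per_device_iops * n_vdevs
    if layout == "mirrors":
        return per_device_iops * (n_devices // 2)
    raise ValueError(layout)

sm863_iops = 35_000   # assumed steady-state 4k IOPS for an SM863-class SSD
print("16 x SM863, 2 x RAID-Z2:", pool_iops(sm863_iops, 16, "raidz", n_vdevs=2))
print("16 x SM863, 8 x mirrors:", pool_iops(sm863_iops, 16, "mirrors"))
```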

A special case is secure write (no data loss on a crash during a write), where high IOPS at QD1 and low latency are needed for good performance. This is where SSDs are mostly weak. Besides power-loss protection on the SSD/NVMe, you need to protect the OS or RAID adapter write cache. With hardware RAID you need BBU/flash protection; with ZFS you can simply enable sync write. Be aware that a single SSD doing 500 MB/s sequentially can drop to 50-150 MB/s on sync writes. Only Optane can hold near-10G performance from a single NVMe.

Last, a disk may fail. Only SATA and SAS are hot-plug capable (there are efforts toward U.2 NVMe hot plug, but this is more or less the future).

From your initial post I suppose you are OK with SSDs. I would not use a multi RAID-10 setup with 16 SSDs. With ZFS, I would use a 2 x Z2 setup of 8 disks each.

For SSD only, use 12G SAS HBAs like a Broadcom/LSI 9300, either 2 x 8-port or 1 x 16-port.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
A CSE-216 can be had for the same price as, or cheaper than, an Icy Dock 16x hot-swap bracket. The 24 hot-swap bays need not be filled immediately. My issue with the CSE-216 is that it's a rackmount, and for this server I'd rather have it next to my workstation if possible.

Hardware RAID is also out of the question. It's already established that software RAID on an HBA is generally better, since one doesn't need to deal with situations like a BBU failing or sourcing an identical RAID controller if the existing one fails.

@gea Thank you very much for your insight!

This will be for my homelab, so U.2 drives probably would be overkill. I only need to approximate 10 Gbps to be happy; 40 Gbps would be a "nice to have" at this point. For consumer SSDs, AFAIK once the SLC cache is exhausted the performance will drop quite a bit. I trust your experience on this.

For 16x SSD, I was thinking I would configure them as a pool of 2 x RAID6/RAID-Z2, 8 disks each. The LSI 9300 is nice and shiny, but wouldn't a SAS2 HBA be fine? An SM863 is only capable of 6 Gbps, the same as SAS2. I see recommendations for RAID10 or mirror vdevs all the time for IOPS, but my conclusion is that's just not that safe. The data stored can be replaced, but the hassle and effort of dealing with a failed mirror taking out the entire pool costs time too.
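
For what it's worth, the raw usable capacity of the two layouts being weighed here, ignoring ZFS metadata/slop and TB vs. TiB:

```python
# 16 x 960GB: two 8-disk RAID-Z2 vdevs vs. eight 2-way mirror vdevs.
drive_tb, n = 0.96, 16

two_raidz2 = 2 * (8 - 2) * drive_tb   # 2 vdevs, 6 data disks each
mirrors    = (n // 2) * drive_tb      # 8 mirror pairs

print(f"2 x RAID-Z2 (8 disks each): {two_raidz2:.2f} TB usable")
print(f"8 x 2-way mirrors:          {mirrors:.2f} TB usable")
```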

I'm hoping to avoid backing the pool with Optane, since this will add to the cost quite a bit, but if it can't be avoided I'm not opposed to it.

Now, if I were to use the disk array as a local VM store, I think that would preclude me from using ZFS, since ESXi doesn't support ZFS AFAIK. I'd like to keep ESXi and the VM store on the local box, since that means I don't have to store VMs on another box (FreeNAS) and mount the VM store over the network with iSCSI.
 

Aluminum

Active Member
Sep 7, 2012
431
46
28
Consider skipping all the drive bay stuff and getting a much neater system without a ton of wires - you can go full M.2 and have an excellent upgrade path for the future. There are also some pretty compact cases that skip drive bays.

Edit: I just realized that you want to install 8x NVMe drives. It's a reasonable thing to want to do, but in reality it's usually a lot more complicated than you expect. It's also very expensive. On most systems the most U.2 NVMe drives you'll be able to fit via the native hardware is 2, or maybe 4.
Threadripper boards can handle 11 NVMe drives natively with a pair of ~$60 quad-M.2 x16 cards. If you go headless (or use an x1 GPU for basic video), buy two more cards (half populated in the two x8 slots) and that goes up to 15 NVMe drives directly attached to CPU lanes for an array (14 if you don't want to boot from SATA).
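
Counting that up, assuming a typical X399 layout of three onboard M.2 sockets plus x16/x8/x16/x8 slots (the exact board layout is an assumption here):

```python
# Tally of CPU-attached NVMe drives with passive quad-M.2 cards.
onboard_m2 = 3            # CPU-attached M.2 sockets on the board
full_x16   = 2 * 4        # quad card fully populated in each x16 slot
half_x8    = 2 * 2        # quad card half populated in each x8 slot

print("two quad cards in the x16 slots:      ", onboard_m2 + full_x16)            # 11
print("plus two half-filled cards in the x8s:", onboard_m2 + full_x16 + half_x8)  # 15
```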

10GbE onboard option available too.

There is also a mATX board that can take the same 15 NVMe drives (three x16 slots) if you go headless. No x1 slot and no onboard 10GbE option, though.

If you insist on U.2 drives, the cables and drive bay adapters alone will cost $1,000+, but honestly, for a DIY storage server the consumer M.2 drives are incredible bang/$.

2TB Phison E12-based drives are $230 at Micro Center and perform similarly to a 970.

Now, if you really want a lot of NVMe drives, let's talk Epyc :) also available on a 1P ATX board.
 

Netwerkz101

Active Member
Dec 27, 2015
308
90
28
I had a long drawn-out reply, but after reading it, I thought I saw my inner a-hole coming out.
That is never my intent, so I'll just say:

So confused!!!

Save your money and use the existing (BNIB) Fractal Design R5 case
(similar size to T320 - your "ideal chassis").

Add two 1 x 5.25" to 8 x 2.5" drive cages
(Icy Dock is great, but not the only option)

Follow the smart people's recommendations on storage 'cause I'm really lost there!
Post up your build when done - with pics.
 

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
All replies are appreciated :)

However, this thread is going off the rails a bit. To refocus: this is mainly a discussion about getting 16x SATA/SAS2 hot-swap in a reasonably sized case, for a reasonable price (e.g. an Icy Dock 2.5" 16x cage costs $300, more than a slightly used CSE-216). I will not be using consumer SSDs, nor will I be using a bunch of Optane; Optane DC for caching is fine if it can't be helped. Currently my drive choice is 16x SM863. I already have a separate NAS on the network, so using bulk 3.5" drives is not necessary or desired for this build, which is why I'm likely not to use the Fractal R5. My second choice, if a smallish chassis can't be identified, is to go with a CSE-216 and just deal with it being in the rack, further away.
 

TLN

Active Member
Feb 26, 2016
523
84
28
34
I was able to score a few PM1725 drives for approx. $100/TB recently. 5,000 MB/s write speed! You can get five of those and run them in RAID-Z2, giving approx. 10 TB with two parity drives for under $2k. Get an ATX mobo with 6 PCIe slots, because you'd want a 40 Gbps NIC for this.
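
A quick sanity check of that math, assuming the 3.2TB PM1725 model at the quoted ~$100/TB (the per-drive capacity is an assumption):

```python
# 5 drives in RAID-Z2: two parity, three data.
drive_tb, price_per_tb, n = 3.2, 100, 5

usable_tb  = (n - 2) * drive_tb
total_cost = n * drive_tb * price_per_tb

print(f"{n} x {drive_tb}TB PM1725 in RAID-Z2: ~{usable_tb:.1f} TB usable "
      f"for ~${total_cost:,.0f}")
```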
 

ullbeking

Active Member
Jul 28, 2017
506
70
28
45
London
I was able to score a few PM1725 drives for approx. $100/TB recently. 5,000 MB/s write speed! You can get five of those and run them in RAID-Z2, giving approx. 10 TB with two parity drives for under $2k. Get an ATX mobo with 6 PCIe slots, because you'd want a 40 Gbps NIC for this.
TLN has kicked a goal!!
 

ullbeking

Active Member
Jul 28, 2017
506
70
28
45
London
Toshiba PM5-R 15.36TB SAS3 SSDs (Read Intensive, 1DWPD, 27.37 PB) $1400
https://forums.servethehome.com/ind...-36tb-sas-ssds-16-32-64-128gb-ddr4-ram.25556/

Buy Two SSD as mirror pair.
This actually seems like a good idea, and I never would have considered it for more than 0.1 s due to the price. But the total price is OK.

The things I don't understand... @Marsh, what exactly are you suggesting that the person you are addressing should use this mirrored pair for, and are you suggesting that these two huge, mirrored SSDs are the ONLY drives installed in the system? In other words, it's such a left-of-field -- but super cool -- solution that I'm left wondering what the workflow or use case for the mirrored pair is...?
 

Marsh

Moderator
May 12, 2013
2,643
1,496
113
Reading the first post, there was no mention of the workload, only "with an aggregate capacity of about 12 TB".

No idea if it is sequential read/write, random r/w, a database workload, or an IOPS workload.

I just "OMA" guess. If the data is critical, you could not go wrong with a mirror pair.

I do not run mirror or RAID for my main home server; I do back up all my files nightly.
I have 4 backup servers using RAID 6 (16 x 4TB).

Since the OP suggested "16 x SM 863 is around $2,400", plus the total hardware price to host 16 SSDs (chassis, HBA, cables), I think $1400 x 2 = $2800 is not a bad deal.

It would be dead simple to use a single 15TB SSD, or two of them.
 

TLN

Active Member
Feb 26, 2016
523
84
28
34
Reading the first post, there was no mention of the workload, only "with an aggregate capacity of about 12 TB".

No idea if it is sequential read/write, random r/w, a database workload, or an IOPS workload.

I just "OMA" guess. If the data is critical, you could not go wrong with a mirror pair.

I do not run mirror or RAID for my main home server; I do back up all my files nightly.
I have 4 backup servers using RAID 6 (16 x 4TB).

Since the OP suggested "16 x SM 863 is around $2,400", plus the total hardware price to host 16 SSDs (chassis, HBA, cables), I think $1400 x 2 = $2800 is not a bad deal.

It would be dead simple to use a single 15TB SSD, or two of them.
I mean, if the data is critical you'd be better off having another copy somewhere, i.e. you're getting a huge SSD specifically for speed/latency. In that case a single Toshiba will be good enough.
A single PM1725 gets 5,000+ MB/s; that's 40 Gbps. If you want to access this over the network you need the infrastructure. If you don't need that - well, here you go. Two PM1725s might get 10,000 MB/s. In fact, I might test it later, just for fun, when I get a second drive.
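
For reference, the MB/s-to-Gbps conversion used here is just a factor of 8 (decimal units, ignoring protocol overhead):

```python
for mb_s in (5_000, 10_000):
    print(f"{mb_s:,} MB/s ~= {mb_s * 8 / 1000:.0f} Gbps")
```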
 

zer0sum

Well-Known Member
Mar 8, 2013
849
473
63
Silverstone CS380 is a pretty small case that might work for you :)
SilverStone INTRODUCTION:CS380

I just bought one recently and it has the following:
ATX motherboard
Full-size PSU support
USB3 front ports
8 x 3.5"/2.5" hot-swap backplane with 2 dedicated fans
2 x 5.25" bays = 16 x SSDs (with 8-bay cages)
Room for a few more SSDs here and there :)

It benefits from some trivial cooling upgrades, like sealing the bays with duct tape or similar, but it's been awesome so far for me.