Mini/Mid Tower Chassis Recommendation: 16 x 2.5" SATA/SAS2

Discussion in 'Chassis and Enclosures' started by ReturnedSword, Aug 19, 2019.

  1. ReturnedSword

    ReturnedSword Active Member

    Joined:
    Jun 15, 2018
    Messages:
    125
    Likes Received:
    25
    This will be for a virtualization build sitting next to my workstation in my office, so rackmount is not ideal.

    Needs:
    • Minimum mATX support, ATX is fine
    • Somewhat compact volume
    • 16 x 2.5" SATA/SAS2 backplane (I'm fine with 5.25" bay adapters)
    I'm likely going to be running SATA SSDs here, with an aggregate capacity of about 12 TB, using Samsung SM863 960GB or similar. SATA SSDs, even in a mirrored pool, won't hit 10 Gbps (though they'll come close).

    16 x SM 863 is around $2,400. Or would it make more sense to go with consumer NVMe instead, around 8 x 2TB for $2,000?
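
    A quick back-of-the-envelope comparison (a rough Python sketch; the prices are just the figures quoted above, and "raw" ignores any RAID/pool overhead):

        # Rough $/TB comparison using the numbers quoted in this post
        # (assumed prices, not current market data).
        options = {
            "16 x SM863 960GB (SATA)": {"drives": 16, "tb_each": 0.96, "usd": 2400},
            "8 x 2TB consumer NVMe":   {"drives": 8,  "tb_each": 2.0,  "usd": 2000},
        }

        for name, o in options.items():
            raw_tb = o["drives"] * o["tb_each"]
            print(f"{name}: {raw_tb:.1f} TB raw, ${o['usd'] / raw_tb:.0f}/TB raw")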

    My ideal chassis is the Dell T320 chassis with the Dell 2.5" bracket instead of the standard 3.5" brackets. There's the Dell T340 now also, which doesn't support the SFF brackets AFAIK. There's also the issue of not being able to find definitive information on whether or not a Dell T320 chassis supports standard mATX mounting positions; it's likely that it doesn't.
     
    #1
    ullbeking likes this.
  2. ullbeking

    ullbeking Active Member

    Joined:
    Jul 28, 2017
    Messages:
    368
    Likes Received:
    29
    Check out Fractal Design's Define series. They are awesome for this kind of thing.
     
    #2
  3. croakz

    croakz Active Member

    Joined:
    Nov 8, 2015
    Messages:
    154
    Likes Received:
    31
    #3
  4. ullbeking

    ullbeking Active Member

    Joined:
    Jul 28, 2017
    Messages:
    368
    Likes Received:
    29
    It is very interesting how people seem to have forgotten what RAID stands for.

    The "I" is for "inexpensive".

    Unless you have enterprise requirements, I would purchase decent new SSDs, which will also come with a warranty.

    If the data is very important then consider 3x redundancy in mirroring. I have done this in particular situations.

    Please keep us updated!

    Edit: I just realized that you want to install 8x NVMe drives. It's a reasonable thing to want to do, but in reality it's usually a lot more complicated than you'd expect. It's also very expensive. On most systems, the most U.2 NVMe drives you'll be able to fit with the native hardware is 2, or maybe 4.
     
    #4
    Last edited: Aug 20, 2019
  5. croakz

    croakz Active Member

    Joined:
    Nov 8, 2015
    Messages:
    154
    Likes Received:
    31
    @ReturnedSword

    You mentioned NVMe; would you be doing drive cages for those too? I've only found these:

    Icy Dock ToughArmor MB699VP-B - U.2/M.2 (SATA/PCIe NVMe) enclosure

    And then you have to factor in the NVMe controller, etc. But if you did go that route, you could probably get 4 x 3.8TB Micron 9200 Pros for cheap:

    Micron 9200 PRO 3.8TB 2.5" U.2 PCIe NVMe Enterprise SSD Solid State Drive | eBay
     
    #5
  6. ReturnedSword

    ReturnedSword Active Member

    Joined:
    Jun 15, 2018
    Messages:
    125
    Likes Received:
    25
    @ullbeking Well, the "I" could also mean "independent," but that's an argument about semantics :D

    I even have a couple of spare Fractal Design R5s, NIB, but they're a bit big for what I want to do. It's a shame Fractal never made a mini version of the R5/R6 like they did with the R4 and below.

    @croakz If I went the NVMe route, I'd probably use two quad carriers with consumer M.2 drives in x16 slots. I'd like to use a Ryzen 3900X or 3950X, so then we run into the issue of PCIe lane configuration.

    For SATA, in my workstation, I agree that I'd rather go with brand new prosumer SATA SSDs. For a server though, since the bottleneck is at the interface, I'd rather pick up server pull drives that have a high endurance rating.

    For U.2 NVMe, a controller isn't needed as long as the motherboard supports bifurcation on the slot; if it does, a simple x16-to-quad-U.2 adapter is all that's needed.

    I've looked into the Icy Dock brackets, and while I own some of their products and like them, the 16x bracket costs $300-350, which is getting into silly territory.

    I really want to use a mini/mid tower chassis, but increasingly it seems that an SM CSE-216 may make more sense, unless there are other suggestions. My Google-fu is exhausted for today! :p
     
    #6
  7. ullbeking

    ullbeking Active Member

    Joined:
    Jul 28, 2017
    Messages:
    368
    Likes Received:
    29
    I would LOVE to find a use for a chassis in the SM CSE-216 series because I think they are super cool in theory, but I simply can't see how I could feasibly populate it with 24x 2.5" drives. Ideally I'd like to use inexpensive, high-capacity SSDs and RAID1 or RAID10. The redundancy factor would be 2x or 3x (I haven't decided yet).

    Does anybody have any suggestions?
     
    #7
  8. ullbeking

    ullbeking Active Member

    Joined:
    Jul 28, 2017
    Messages:
    368
    Likes Received:
    29
    All-NVMe U.2 is going to be very expensive.

    If bifurcation is not supported, then you will need to use a quad adapter with a PLX switch, i.e., something like this: AOC-SLG3-4E2P | Add-on Cards | Accessories | Products - Super Micro Computer, Inc. This also has the advantage of only requiring an x8 PCIe slot.

    Edit: You might also like to research this: Stornado - All SSD Storage Server
     
    #8
  9. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,241
    Likes Received:
    742
    I am not sure about your data security, sequential, and random performance demands. From a data security point of view, you should use SSDs (or NVMe) with power-loss protection, especially in a RAID. The Samsung SM 863 is OK; consumer SSDs or NVMe are not.

    Next is sequential performance. A 6G SSD is at around 500 MB/s, a 12G SAS SSD is at around 1 GB/s, and an NVMe can go up to 2 GB/s and more. If you build a RAID, performance scales with the number of data disks. So a RAID-0 of two SSDs (or a RAID-10 of 4 SSDs) is enough to saturate 10G sequentially. A massive RAID of many disks outperforms 10G sequentially in any case.

    If you look at random performance, desktop SSDs under a steady write load can go down to a few thousand 4k IOPS per SSD. Enterprise SSDs like the SM 863 are maybe at a 30-40k IOPS level. NVMe like an Intel Optane (simply the best) can go up to 500k IOPS. In a RAID-5/6/Z1-3, the IOPS of the array equal those of a single disk. With mirrors or multiple vdevs in ZFS, IOPS scale with the number of vdevs. So depending on your needs, you may be OK with SSDs.

    A special case is secure write (no data loss on a crash during a write), where high IOPS at QD1 and low latency are demanded for good performance. This is where SSDs are mostly weak. Besides power-loss protection on the SSD/NVMe, you need to protect the OS or RAID adapter write cache. With hardware RAID you need BBU/flash protection; with ZFS you can simply enable sync write. Be aware that a single SSD doing 500 MB/s sequentially can go down to 50-150 MB/s on sync writes. Only Optane can hold near-10G performance from a single NVMe.

    Last, a disk may fail. Only SSD and SAS are hot-plug capable (there are efforts toward U.2 NVMe hot plug, but this is more or less the future).

    From your initial post I suppose you are OK with SSDs. I would not use a multi RAID-10 setup with 16 SSDs. I would (with ZFS) use a 2 x Z2 setup of 8 disks each.

    For SSD only, use 12G SAS HBAs like a Broadcom LSI 9300, either 2 x 8-port or 1 x 16-port.
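
    As a rough sketch of the scaling rules of thumb above (Python; the per-device figures are just the ballpark numbers from this post, and real results depend heavily on workload, sync settings, and fill level):

        # Sequential throughput scales with the number of data disks; random
        # IOPS scale with the number of vdevs (roughly one disk's worth per
        # RAID-Z or mirror vdev). Per-device numbers are assumed ballparks.
        SSD_SEQ_MB_S = 500       # ~6G SATA SSD, sequential
        SSD_RAND_IOPS = 35_000   # ~enterprise SATA SSD, steady-state 4k

        def estimate(vdevs, disks_per_vdev, redundancy_per_vdev):
            data_disks = vdevs * (disks_per_vdev - redundancy_per_vdev)
            return data_disks * SSD_SEQ_MB_S, vdevs * SSD_RAND_IOPS

        for label, layout in {
            "2 x RAID-Z2 of 8 disks": (2, 8, 2),
            "8 x 2-way mirrors":      (8, 2, 1),
        }.items():
            seq, iops = estimate(*layout)
            print(f"{label}: ~{seq:,} MB/s sequential, ~{iops:,} random IOPS")

    Either layout comfortably exceeds 10G (~1,200 MB/s) sequentially; the mirrors win on random IOPS, the RAID-Z2 layout on capacity.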
     
    #9
    fossxplorer, ReturnedSword and croakz like this.
  10. ReturnedSword

    ReturnedSword Active Member

    Joined:
    Jun 15, 2018
    Messages:
    125
    Likes Received:
    25
    A CSE-216 can be had for the same price as, or cheaper than, an Icy Dock 16x hot-swap bracket. The 24x hot-swap bays need not be filled immediately. My issue with the CSE-216 is that it's a rackmount, and for this server I'd rather have it next to my workstation if possible.

    Hardware RAID is also out of the question. It's already established that software RAID on an HBA is generally better, since one doesn't need to deal with situations like a BBU failing or sourcing an identical RAID controller if the existing one dies.

    @gea Thank you very much for your insight!

    This will be for my homelab, so U.2 drives would probably be overkill. I only need to approximate 10 Gbps to be happy; 40 Gbps would be a "nice to have" at this point. For consumer SSDs, AFAIK once the SLC cache is exhausted, performance will drop quite a bit. I trust your experience on this.

    For 16x SSD, I was thinking I would configure them as a pool of 2 x RAID6/RAID-Z2, 8 disks each. The LSI 9300 is nice and shiny, but wouldn't a SAS2 HBA be fine? An SM 863 is only capable of 6 Gbps, the same as SAS2. I've seen recommendations for RAID10 or mirror vdevs all the time for IOPS, but my conclusion is that's just not that safe. The data stored can be replaced, but the hassle and effort of dealing with a failed mirror taking out the entire pool costs time too.
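
    For reference, the usable-capacity math for those two layouts (a quick sketch; 960 GB per drive as above, ignoring ZFS metadata/slop overhead):

        # Usable capacity of 16 x 960GB in the two layouts being discussed.
        drive_tb, total_drives = 0.96, 16

        raidz2_usable = 2 * (8 - 2) * drive_tb          # 2 vdevs of 8, 2 parity each
        mirror_usable = (total_drives // 2) * drive_tb  # 8 two-way mirror vdevs

        print(f"2 x RAID-Z2 (8 disks each): ~{raidz2_usable:.1f} TB usable")
        print(f"8 x mirror pairs:           ~{mirror_usable:.1f} TB usable")

    That's roughly 11.5 TB vs 7.7 TB usable, which is why the 2 x RAID-Z2 layout fits the ~12 TB target while mirrors would not.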

    I'm hoping to avoid backing the pool with Optane, since this will add to the cost quite a bit, but if it can't be avoided I'm not opposed to it.

    Now, if I were to use the disk array as a local VM store, I think that would preclude me from using ZFS, since ESXi doesn't support ZFS AFAIK. I'd like to keep ESXi and the VM store on the local box, since that means I don't have to keep the VMs on another box (FreeNAS) and mount the VM store over the network via iSCSI.
     
    #10
  11. Aluminum

    Aluminum Active Member

    Joined:
    Sep 7, 2012
    Messages:
    431
    Likes Received:
    45
    Consider skipping all the drive bay stuff and getting a much neater system without a ton of wires - you can go full M.2 and have an excellent upgrade path for the future. There are also some pretty compact cases that skip drive bays.

    Threadripper boards can handle 11 NVMe drives natively with a pair of ~$60 quad-M.2 x16 cards. If you go headless (or use an x1 GPU for basic video), buy two more cards (half populated, in the two x8 slots) and that goes up to 15 NVMe drives directly attached to CPU lanes for an array (14 if you don't want to boot from SATA).
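
    Reading that drive count back as arithmetic (a sketch; the three onboard M.2 slots are an assumption inferred from the 11-drive figure, not a specific board):

        # NVMe drive count for the setup described above, assuming quad-M.2
        # carrier cards and x4 lanes per drive.
        onboard_m2 = 3          # assumed onboard M.2 slots
        quad_cards_in_x16 = 2   # fully populated: 4 drives each
        quad_cards_in_x8 = 2    # half populated: 2 drives each (x8 bifurcates to 2 x4)

        print("x16 cards only:   ", onboard_m2 + 4 * quad_cards_in_x16)                         # 11
        print("plus the x8 cards:", onboard_m2 + 4 * quad_cards_in_x16 + 2 * quad_cards_in_x8)  # 15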

    10GbE onboard option available too.

    There is also an mATX board that can take the same 15 NVMe drives (three x16 slots) if you go headless. No x1 slot and no onboard 10GbE option, though.

    If you insist on U.2 drives, the cables and drive bay adapters alone will cost $1,000+, but honestly, for a DIY storage server the consumer M.2 drives are incredible bang for the buck.

    2TB Phison E12-based drives are $230 at Micro Center and perform similarly to a 970.

    Now, if you really want a lot of NVMe drives, let's talk Epyc :) also available on a 1P ATX board.
     
    #11
  12. Netwerkz101

    Netwerkz101 Active Member

    Joined:
    Dec 27, 2015
    Messages:
    232
    Likes Received:
    54
    I had a long, drawn-out reply, but after reading it I thought I saw my inner a-hole coming out.
    That is never my intent, so I'll just say:

    So confused!!!

    Save your money and use the existing (BNIB) Fractal Design R5 case
    (similar size to T320 - your "ideal chassis").

    Add two 1 x 5.25" to 8 x 2.5" drive cages
    (Icy Dock is great, but not the only option)

    Follow the smart people's recommendations on storage 'cause I'm really lost there!
    Post up your build when done - with pics.
     
    #12
  13. ReturnedSword

    ReturnedSword Active Member

    Joined:
    Jun 15, 2018
    Messages:
    125
    Likes Received:
    25
    All replies are appreciated :)

    However, this thread is going off the rails a bit. To refocus, this is a discussion mainly about getting 16x SATA/SAS2 hot-swap in a reasonably sized case, for a reasonable price (e.g. an Icy Dock 2.5" 16x cage costs $300, more than a slightly used CSE-216). I will not be using consumer SSDs, nor will I be using a bunch of Optane; Optane DC for caching is fine if it can't be helped. Currently my drive choice is 16x SM 863. I already have a separate NAS on the network, so using bulk 3.5" drives is not necessary or desired for this build, which is why I'm likely not to use the Fractal R5. My second choice, if a smallish chassis can't be identified, is to go with a CSE-216 and just deal with it being in the rack, further away.
     
    #13
  14. TLN

    TLN Active Member

    Joined:
    Feb 26, 2016
    Messages:
    349
    Likes Received:
    36
    I was able to score a few PM1725 drives for approx $100/TB recently. 5,000 MB/s write speed! You can get five of those and run them in RAID-Z2, making approx 10 TB with two parity drives for under $2k. Get an ATX mobo with 6 PCIe slots, because you'd want a 40 Gbps NIC for this.
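
    Roughly (a quick sketch; 3.2 TB per drive is assumed here, since only the ~$100/TB figure is quoted):

        # RAID-Z2 of five PM1725 drives as suggested above.
        drives, parity = 5, 2
        tb_each, usd_per_tb = 3.2, 100   # assumed 3.2 TB model; quoted ~$100/TB

        usable_tb = (drives - parity) * tb_each
        cost_usd = drives * tb_each * usd_per_tb
        print(f"~{usable_tb:.1f} TB usable for about ${cost_usd:,.0f}")   # ~9.6 TB, ~$1,600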
     
    #14
    Tha_14 likes this.
  15. ullbeking

    ullbeking Active Member

    Joined:
    Jul 28, 2017
    Messages:
    368
    Likes Received:
    29
    TLN has kicked a goal!!
     
    #15
    Tha_14 likes this.
  16. Marsh

    Marsh Moderator

    Joined:
    May 12, 2013
    Messages:
    2,099
    Likes Received:
    950
    #16
  17. ullbeking

    ullbeking Active Member

    Joined:
    Jul 28, 2017
    Messages:
    368
    Likes Received:
    29
    This actually seems like a good idea, and ordinarily I wouldn't have considered it for more than 0.1 s because of the per-drive price. But the total price is OK.

    The thing I don't understand: @Marsh, what exactly are you suggesting the person you are addressing should use this mirrored pair for, and are you suggesting that these two huge mirrored SSDs are the ONLY drives installed in the system? In other words, it's such an out-of-left-field -- but super cool -- solution that I'm left wondering what the workflow or use case for the mirrored pair is.
     
    #17
  18. Marsh

    Marsh Moderator

    Joined:
    May 12, 2013
    Messages:
    2,099
    Likes Received:
    950
    Reading the first post, there was no mention of the workload, only "an aggregate capacity of about 12 TB".

    No idea if it is sequential read/write, random r/w, a database workload, or an IOPS workload.

    It's just an "OMA" guess. If the data is critical, you could not go wrong with a mirrored pair.

    I do not run mirrors or RAID on my main home server; I do back up all my files nightly.
    I have 4 backup servers using RAID 6 (16 x 4TB).

    Since the OP suggested "16 x SM 863 is around $2,400",
    plus the total hardware price to host 16 SSDs including chassis, HBA, and cables,

    I think $1,400 x 2 = $2,800 is not a bad deal.

    It would be dead simple to use a single 15TB SSD, or two of them.
     
    #18
  19. TLN

    TLN Active Member

    Joined:
    Feb 26, 2016
    Messages:
    349
    Likes Received:
    36
    I mean, if the data is critical you'd be better off having another copy somewhere; i.e., you're getting a huge SSD specifically for speed/latency. In that case a single Toshiba will be good enough.
    A single PM1725 gets 5,000+ MB/s, and that's 40 Gbps. If you want to access this over the network you need the infrastructure for it; if you don't need that, well, here you go. Two PM1725s might get 10,000 MB/s. In fact, I might test that later, just for fun, when I get a second drive.
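
    For the network-side math (just the unit conversion implied above):

        # 5,000 MB/s of sequential throughput is roughly 40 Gbit/s, so a single
        # drive can already saturate a 40 GbE link.
        mb_per_s = 5000
        gbit_per_s = mb_per_s * 8 / 1000
        print(f"{mb_per_s} MB/s ~= {gbit_per_s:.0f} Gbit/s")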
     
    #19
    Marsh likes this.
  20. zer0sum

    zer0sum Active Member

    Joined:
    Mar 8, 2013
    Messages:
    258
    Likes Received:
    77
    Silverstone CS380 is a pretty small case that might work for you :)
    SilverStone INTRODUCTION: CS380

    I just bought one recently and it has the following:
    ATX motherboard
    Full-size PSU support
    USB3 front ports
    8 x 3.5/2.5" hotswap backplane with 2 dedicated fans
    2 x 5.25" slots = 16 x SSDs
    Room for a few more SSD's here and there :)

    It benefits from some trivial cooling upgrades, like sealing the bays with duct tape or similar, but it's been awesome so far for me.
     
    #20