DAS/JBOD Recommendation needed for C6220 v1


iLya

Member
Jun 22, 2016
New member here trying to build a home lab that runs Hyper-V on a Dell C6220, and I'm looking for some recommendations on how to configure my storage.
Here is my plan:

  1. Storage Spaces Direct with Scale-Out File Server (SMB3) + Hyper-V cluster on the same nodes (hyperconverged config)
  2. The server comes with dual 10Gb NICs, so I plan on getting a 10Gb switch to provide the throughput for my S2D cluster; eventually I will move to ConnectX-3 cards with RDMA. This will serve as the backbone for my VM traffic as well as storage. I also have the option of a dual-port ConnectX-2 card to get a bit more throughput, but I'm still debating that part.
  3. The dual 1Gb NICs will be used to provide connectivity to the rest of my house for normal server/media access.
  4. DAS for my nodes so that each node can have 3.5" 3TB SAS drives plus 1-2 SSDs for tiered storage.
  5. Get something like an LSI SAS 9207-8E HBA to connect each node to the DAS.

Now here is where I need help. I was initially trying to figure out how to implement a shared DAS where I could connect each of the nodes through the SAS 9207-8E to a single DAS chassis, but I can't find a chassis that supports connecting 4 different servers at once.
So now I am looking at something like a 2U Supermicro JBOD with ~12 bays for each node, connecting each Dell node to its own dedicated JBOD.

Any recommendations are really appreciated.
 

cesmith9999

Well-Known Member
Mar 26, 2013
A lot of the 60-bay clamshell 4U DAS units have 4 ports with a SAS switch.

There are many 2U and 4U JBODs with 3 connectors.

Here is the hard part: the JBODs that have 3-4 connectors either have a SAS switch built into them, or you need to use something like this:
LSI Logic SAS6160 16-Ports SFF-8088 Switch LSI00269

DATAON 1600 DNS-1600 24-BAY JBOD 3.5 SAS SATA HARD DISK DRIVE STORAGE ARRAY

SAS disks have 2 ports on them, so normally you can only connect 2 servers. To go larger you need a SAS switch (internal or external) to allow the other servers to connect.

AIC XJ3000-4603 4U 60-Bay 3.5" SAS/SATA HD JBOD Ultra Density Storage Enclosure

Newisys NDS-4600-JD-03 4U 60-Drive 3.5" HDD Enclosure Fan Fault No Faceplate

To be honest, it would almost be cheaper (and easier) to buy 4 x SC216 chassis (12/24 bays), add a few SSDs and 4TB 2.5" disks per server, and use Windows Server 2016 with Storage Spaces Direct.

Chris
 

iLya

Member
Jun 22, 2016
Thanks for the info Chris.
The AIC seems like an awesome enclosure, but the seller does local pick-up only :( and a single shared chassis would not be very fault tolerant.

The DATAON appears to be a great deal and it offers the ability to connect two of my 4 nodes, but I can't seem to find another place to buy these enclosures.
I did find the NORCO DS-24D and a NORCO DS-12D that might do the trick as well.
Since I already have 12x3TB 3.5" SAS drives, I would like to stick with them, and I also have some spare 1TB SATA drives if needed.

So do you think getting the DATAON and the NORCO DS-12D would be a good way to get this accomplished?
 

frogtech

Well-Known Member
Jan 4, 2016
I agree with Chris on changing chassis. Look up garlandcomputer on eBay and check out the SC826 12-bay chassis with the SAS-826A backplane; it's a 1-to-1 backplane, and you can get the chassis mostly barebones relatively inexpensively from that seller. The C6220 is kind of a junk show because of its internal SATA II-only capabilities.

Though now that I think about it, finding good dual LGA 2011 boards at a reasonable price is kind of tricky. So maybe not. Just something to think about.
 

iLya

Member
Jun 22, 2016
Thanks for the recommendations guys.

So just to be clear, I am not planning to use the internal SATA II for anything but hosting each node's OS in a RAID 1 configuration for safety reasons.
I looked up the SC216 and it appears that MrRackables has the most options, with the following configuration (quick cost sketch after the list):

4 x SC216E16 PT-JBOD-CB2 HBA = $398.00 + $40 (shipping) * 4 = $1752.00
18 x 2.5" 2TB SAS drives = $280 + $10 (shipping) = $5220.00 (i can probably negotiate the price of shipping a bit but would not save me much)
4 x 2.5" 250GB SSD drives for caching = $100 * 4 = $400
4 x SFF-8088 cables = $10 * 4 = $40
Total raw capacity = 38TB
Total available bays = 96
Total possible capacity = 192TB (96 x 2.5" 2TB drives) = $26,880.00
4 Power cables
Total: ~$7412.00
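
For anyone double-checking my math, here is a quick sketch of how I'm totaling that option up; the only inputs are the eBay prices and shipping estimates quoted above, so treat it as ballpark only:

```python
# Rough cost total for the 4 x SC216 option, using the eBay prices quoted
# above (unit price + shipping per item); ballpark numbers only.
items = [
    # (description, price per unit incl. shipping, quantity)
    ('SC216E16 + PT-JBOD-CB2',   398.00 + 40.00, 4),
    ('2.5" 2TB SAS drive',       280.00 + 10.00, 18),
    ('2.5" 250GB SSD (caching)', 100.00,         4),
    ('SFF-8088 cable',            10.00,         4),
]

total = sum(price * qty for _, price, qty in items)
print(f'Option 1 total: ${total:,.2f}')   # -> Option 1 total: $7,412.00
```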

The other option I can see is this:

DATAON unit = $399.99 + $120.38 (shipping) = $520.37
SC826E16-JBOD = $395 + $40 (shipping) = $435
4 x LSI SAS9207-8E HBA = $99.99 * 4 = $399.96 <- Faster performing chip to get more throughput
12 x 3.5" 3TB SAS drives = $0 (I already have them)
4 x 2.5" 250GB SSD drives for caching = $100 * 4 = $400
4 x SFF-8088 cables = $10 * 4 = $40
Total raw capacity = 38TB
Total available bays = 36
Total possible capacity = 240TB DATAON (24 x 6TB = $285 * 24 = $6,840) + Supermicro (12 x 8TB = $446 * 12 = $5,352) = $12,192.00
3 Power cables
Total: ~$1740.99

Am I missing anything?
 

frogtech

Well-Known Member
Jan 4, 2016
Look up garlandcomputer: Supermicro 2U 826 chassis with the BPN-SAS-826A backplane. You get more flexibility with 3.5" bays if you don't absolutely need 24 2.5" bays. They do around $220-230 + shipping for a mostly barebones chassis, I think. But again, the real issue is if you want dual LGA 2011: you'll have to sell the included boards on eBay and hunt for new ones. If you're cool with 1366 gear, they're solid purchases.

Here you go, this is the cheapest config they ship, cheaper than what I quoted just above:

Supermicro 2U Server X8DTN+ Barebones __ Add CPU/Ram/HD
 

cesmith9999

Well-Known Member
Mar 26, 2013
What are your space needs? And is this for a home lab, or a POC for a business?

You seem to have SAS drives listed, and your systems seem to be overbuilt. I would recommend using SATA; Storage Spaces Direct is really meant to use SATA drives over the SAS protocol.

Most of your cost is in very expensive SAS, and unless this project needs high performance with high reliability (i.e. dual-port SAS drives), for a home lab you are going overkill.

Chris
 

iLya

Member
Jun 22, 2016
Heh, great more options :)

So I really didn't want to have a motherboard; I was trying to do a simple JBOD with a SAS 26-pin SFF-8088 to SAS 36-pin SFF-8087 cable that goes directly into the backplane, using the power supply to power the HDDs through one of those tiny power boards. I don't know what those boards are called.
I took a quick look at the Supermicro chassis you linked to. It appears to have a SAS I backplane, which would limit me to 3Gb/s, and there are 3 SAS connectors going to the board. That means I can either use only 2 of them per node, leaving me with only 8 bays per chassis, or I have to find a different way of connecting to all 12 bays, because the C6220 will have an HBA with only 2 mini-SAS 8087 ports.
I can upgrade to the SAS2 backplane for ~$100 per chassis, which has the 6Gb/s transfer rate and would let me connect a single port from the HBA in each node and reach all 12 bays through one mini-SAS connector.
Then I would have to figure out how to power those backplanes and the HDDs without a motherboard, and I might have a solution with 48 bays. Doing a quick calculation for the throughput (sketched in the snippet after these numbers), I came up with the following:
  • 1 mini-SAS 8087 link = 4 x 6Gb/s lanes = ~2.4 GB/s total throughput per port
  • 11 SAS HDDs x 175 MB/s sustained read/write = 1,925 MB/s + 1 x 250GB SSD at 540 MB/s = 2,465 MB/s
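
Here is the same back-of-the-envelope math as a quick sketch, assuming ~600 MB/s usable per 6Gb/s lane after 8b/10b encoding and the rough sustained drive figures above (not benchmarks):

```python
# Per-node throughput sanity check, assuming 6Gb/s SAS2 lanes with 8b/10b
# encoding (~600 MB/s usable per lane) and rough sustained drive figures.
lane_mb_s = 6000 / 10          # 6 Gb/s raw, 10 wire bits per data byte -> ~600 MB/s
link_mb_s = 4 * lane_mb_s      # one SFF-8087/8088 connector carries 4 lanes -> ~2400 MB/s

hdd_mb_s = 11 * 175            # 11 spinning SAS drives at ~175 MB/s sustained each
ssd_mb_s = 540                 # one 250GB SSD at ~540 MB/s
drives_mb_s = hdd_mb_s + ssd_mb_s

print(f'x4 6Gb/s link : ~{link_mb_s:.0f} MB/s')    # ~2400 MB/s
print(f'12-drive total: ~{drives_mb_s:.0f} MB/s')  # ~2465 MB/s
```

So the drives and a single x4 link come out roughly matched, which tells me one external cable per node shouldn't be a huge bottleneck for this mix of disks.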

So it would look something like this:

4 x Supermicro 2U server = $165 + $44 (shipping) = $836
4 x SAS2 backplane = $100 + $18 (shipping) = $472
4 x LSI SAS9207-8E HBA = $99.99 * 4 = $399.96
12 x 3.5" 3TB SAS drives = $0 (I already have them)
4 x 2.5" 250GB SSD drives for caching = $100 * 4 = $400
4 x SFF-8088 cables = $10 * 4 = $40
4 x Dual Mini Sas26p Sff-8088 to Sas36p Sff-8087 = $30 * 4 = $120
Total raw capacity = 38TB
Total available bays = 48
Total possible capacity = 384TB Supermicro (48 x 8TB = $446 * 48) = $21,408
4 Power cables
Total: ~$2,267.96

I also made a small calculation mistake in my last post where I said I would only need 1 of the 12-bay enclosures; I would actually need 2 of them, which brings that total to $2,175.99.
 

frogtech

Well-Known Member
Jan 4, 2016
BPN-SAS-826A is a pass-through backplane, meaning the speed is dependent on the controller, not the backplane.
 

iLya

Member
Jun 22, 2016
BPN-SAS-826A is a pass-through backplane, meaning the speed is dependent on the controller, not the backplane.
Gotcha, I didn't think about that, but this would limit me to 8 bays per chassis, and I guess I can always upgrade from there. This would save me ~$472 if I don't have to buy the backplanes.
 

frogtech

Well-Known Member
Jan 4, 2016
Gotcha, I didn't think about that, but this would limit me to 8 bays per chassis, and I guess I can always upgrade from there. This would save me ~$472 if I don't have to buy the backplanes.
You can just pick up H200/H310 HBAs and flash them to IT mode as you need them. Or look for 12/16-port adapters. Or you can use an internal expander like the RES2SV240.
 

iLya

Member
Jun 22, 2016
What are your space needs? And is this for a home lab, or a POC for a business?

You seem to have SAS drives listed, and your systems seem to be overbuilt. I would recommend using SATA; Storage Spaces Direct is really meant to use SATA drives over the SAS protocol.

Most of your cost is in very expensive SAS, and unless this project needs high performance with high reliability (i.e. dual-port SAS drives), for a home lab you are going overkill.

Chris
This is for my home lab, and I already have the drives, which gives me ~36TB raw capacity.
I would ideally like to have ~20TB usable capacity for now.

I am also planning to use this as software-defined storage, which relies heavily on either a 10GbE network or an RDMA-capable configuration to act as software RAID over Ethernet and to carry the remaining VM traffic.
I would like to have enough capacity to run ~40-60 VMs.
With my current server config I have enough raw capacity for 60 VMs with 2 vCPUs, 8GB RAM, and ~300GB of storage per VM.
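
As a rough sanity check on that claim (the mirror levels below are just assumptions on my part, since I haven't settled on an S2D resiliency setting yet):

```python
# Does ~36TB raw cover 60 VMs at ~300GB each? The 2-way / 3-way mirror
# levels are assumptions, not a setting I've actually chosen.
raw_tb = 12 * 3                      # my existing 12 x 3TB SAS drives
vm_count, vm_storage_gb = 60, 300
needed_tb = vm_count * vm_storage_gb / 1000

for copies in (2, 3):                # 2-way vs 3-way mirror
    usable_tb = raw_tb / copies
    verdict = 'fits' if usable_tb >= needed_tb else 'does not fit'
    print(f'{copies}-way mirror: ~{usable_tb:.0f} TB usable vs '
          f'~{needed_tb:.0f} TB needed -> {verdict}')
# -> 2-way mirror: ~18 TB usable vs ~18 TB needed -> fits
# -> 3-way mirror: ~12 TB usable vs ~18 TB needed -> does not fit
```

So with a 2-way mirror the 60-VM figure would fit with basically nothing to spare, which is another reason I want room to add drives later.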

From an IO perspective, if I start with my 10Gb NICs and the existing drives, I will not be able to saturate any specific point of the infrastructure; if I decide to expand, I might get to that point and will then switch over to RDMA.
From everything I have learned so far, it sounds like if I go with my last configuration minus the SAS2 backplane, I should be off to a good start, and I might be able to get Dell H200E cards a few dollars cheaper than the LSI SAS9207 cards and still get the required performance.

Thanks for all of your input; it seems I now have a good understanding of what I need to purchase to satisfy my requirements.