LSI 2008 + 12x 3TB SAS disks + lots of duct tape


smellyeagle

New Member
Jul 26, 2012
20
0
0
Just purchased a 2U dual-node server with the plan to do (2) 6-disk RAID 6 arrays (one for each node) for an ESXi Active Directory implementation (among other things). Right now I'm in the process of deciding whether I want a VM to manage the storage array or just use RAID 5 or 10 on the 2008. As it has to go live within a week or two, I probably don't have time to wait for the new version of Nexenta.

What are everyone's opinions?
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
SAS2008 and RAID 5 (it can't do RAID 6 at all) - AVOID!!
Get a SAS2108-based controller for RAID 5/6.

SAS2008 and RAID 10 = hard to beat performance-wise.
 

smellyeagle

New Member
Jul 26, 2012
20
0
0
I know the SAS2008 can't do hardware RAID 6, but software RAID with napp-it or Nexenta could, with the controller passed through in IT mode. Sorry about my confusing post - it was straight in my mind. I'm just leaning towards software over hardware RAID 5/10.
 

smellyeagle

New Member
Jul 26, 2012
20
0
0
Cool, it's settled then: SAS2008 in IT mode with software ZFS RAID-Z2. What software should I use for RAID-Z2?
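
Whichever appliance ends up on top (OI + napp-it, Nexenta, etc.), the RAID-Z2 itself is plain ZFS underneath. Here's a minimal sketch of what the pool creation boils down to once the SAS2008 is passed through to the storage VM - the device names are placeholders, not the real ones:

Code:
import subprocess

# Hypothetical Solaris-style device names -- check the real ones with
# `format` (or the appliance's disk page) before creating anything.
DISKS = ["c0t%dd0" % i for i in range(6)]   # six 3TB SAS disks

# RAID-Z2 = double parity: any two of the six disks can fail.
cmd = ["zpool", "create", "tank", "raidz2"] + DISKS
print("would run:", " ".join(cmd))
# subprocess.check_call(cmd)   # uncomment to actually create the pool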
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
How are you going to hook up all 12 drives? The SAS2008 is an 8-port controller, unless you go with a 16-port LSI9201 or LSI9202, or add an expander.
 

smellyeagle

New Member
Jul 26, 2012
20
0
0
Oh no! I just realized I've got a big issue. The motherboard doesn't have a way to connect and power a regular SATA disk to use as a boot disk. The backplane is only connected to the SAS2008, so I wouldn't be able to put ESXi on it and then pass the controller through. My solution was to get a 1GB SATA DOM (larger ones are very expensive; 1GB still starts around $30), which is big enough for ESXi but not also big enough for OI or something similar. Does anyone see a way around this?
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
If you need a boot disk for ESXi and a storage VM, I see two options (assuming the backplane has 6 SAS/SATA ports):
- place a regular 2.5" or 3.5" SSD inside (duct tape method), connected to onboard SATA, or
- use 5 disks on the SAS controller and 1 disk on onboard SATA for booting.

If the backplane has two 4-port internal SAS connectors, you can use one (4 ports) for SAS and the other for onboard SATA with a reverse breakout cable (you only need two of its ports).
 

smellyeagle

New Member
Jul 26, 2012
20
0
0
I don't have the hardware yet, so I don't know if I'll be able to duct-tape an SSD in somewhere. My biggest concern is how to power it, as it seems like all the power is on the backplane, which is connected to the SAS2008.

I guess it would be possible to pass the SAS2008 through, have one SSD or small HDD accessed directly by the VM, and then have it use the other 5 HDDs to create an array. That's good, but I wouldn't be able to use RAID-Z2 as I'd only have 5 HDDs, not 6.

That limits my options to a 5-disk RAID-Z (with a few disks on hand for immediate replacement) or a 6-disk hardware RAID 10. What would you choose?
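
For what it's worth, a rough usable-space comparison of the layouts on the table, assuming 3TB per disk and ignoring ZFS overhead and the TB/TiB difference:

Code:
# Back-of-the-envelope usable capacity in raw TB (3 TB drives).
DISK_TB = 3

layouts = {
    "5-disk RAID-Z1": (5 - 1) * DISK_TB,   # one disk of parity
    "5-disk RAID-Z2": (5 - 2) * DISK_TB,   # two disks of parity
    "6-disk RAID 10": (6 // 2) * DISK_TB,  # three mirrored pairs
}

for name, usable in layouts.items():
    print(f"{name}: ~{usable} TB usable")
# -> ~12 TB, ~9 TB and ~9 TB respectively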
 

smellyeagle

New Member
Jul 26, 2012
20
0
0
Pretty high-end server! What is the other node doing?
The two nodes are needed for AD replication. Obviously this system is a bit overkill for just an AD server (48 threads, at least 80GB RAM), and it's planned to be used for various office server needs - file sharing, backup, SFTP, etc. There are talks of virtualizing different office applications or even doing thin-client type setups, and this server would be the start of that project. One of the 6-disk arrays of SAN storage will be mostly dedicated to file-serving needs throughout the network, while the second array will handle machine backups through CrashPlan.

I also found that the cost of doing two 1U Active Directory servers with modest storage (the traditional route) was very similar to building a beefy 2U server with lots of resources left over for virtualization needs. Now my environment will have a ton of flexibility with little added cost.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
If the motherboard supports it, you could get a SATA DOM with built-in power. Ask Supermicro if the mobo powers pin 7.

If not, there may be some cheap PCIe card that outputs power to a Molex port. eSATA cards take Molex *input*, but they may well spit some power back out on the same port, and of course an SSD doesn't need much.

 

smellyeagle

New Member
Jul 26, 2012
20
0
0
I actually planned on using a 1GB SATA DOM for the ESXi install, but there's not enough space to put something like Nexenta on it and larger SATA DOMs are quite expensive. I'll look into the PCIe idea, not bad!
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
If you need a pool that can survive a double disk failure, you can use a 3-way mirror (3 disks) or a RAID-Z2 on 3 or more disks (5 disks are fine), so it should be no problem to give up one slot for a boot SSD.
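
Putting rough numbers on those two options (3TB drives, raw TB, overhead ignored) - both survive any two failed disks; the difference is how much space and how many slots they eat:

Code:
# gea's two double-failure-tolerant layouts, compared on slots vs space.
DISK_TB = 3

options = [
    # (layout, disks used, data disks' worth of usable space)
    ("3-way mirror",   3, 1),
    ("5-disk RAID-Z2", 5, 3),
]

for name, disks, data_disks in options:
    print(f"{name}: {disks} disks, ~{data_disks * DISK_TB} TB usable, "
          f"survives any 2 disk failures")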
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Would you consider it reckless to do 5 disk raid-z1 with 3tb sas disks with on-hand cold spare?
In most cases I would consider it a bit too reckless. When a disk goes bad in an array with such large disks, your rebuild time will be very long - about an entire day, in fact. If another disk takes a dive during the rebuild window, or even if you hit unrecoverable errors on just part of a disk during that time, you are looking at a failed array requiring a full restore from backup. And since disks tend to fail in clumps rather than randomly, the odds are higher than you'd expect. ZFS will be a bit less likely to fail and faster to rebuild than hardware RAID 5, I'm told, especially if you have much less data than capacity, but the general problem is the same.

Personally, I can remember having to do about 10-15 RAID 5 rebuilds due to disk failures in my lifetime - I am an accidental sysadmin, so this number is quite low. Twice I've had a second disk go bad during the rebuild. Perhaps my luck has been much worse than average, but my experience has certainly soured me on RAID 5.
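
Rough numbers behind that rebuild-window worry, assuming a ~100 MB/s sustained rebuild rate and the commonly published unrecoverable-read-error specs (1 per 10^14 bits for consumer drives, 1 per 10^15 for nearline/enterprise SAS) - order-of-magnitude only:

Code:
# Minimum time to rebuild one 3 TB disk at an assumed 100 MB/s.
DISK_TB = 3
rebuild_hours = DISK_TB * 1e12 / 100e6 / 3600
print(f"idle-array rebuild floor: ~{rebuild_hours:.1f} h")  # ~8.3 h; a loaded array easily takes a day

# Data that must be read to rebuild a 5-disk single-parity array: the 4 survivors.
bits_read = 4 * DISK_TB * 1e12 * 8
for ure_per_bit in (1e-14, 1e-15):
    print(f"URE spec {ure_per_bit:.0e} per bit: ~{bits_read * ure_per_bit:.2f} "
          f"expected unrecoverable errors during the rebuild")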
 

smellyeagle

New Member
Jul 26, 2012
20
0
0
Thanks, that's just what I needed.
 

smellyeagle

New Member
Jul 26, 2012
20
0
0
I just got the server, and it looks like there might be a few solutions for finding a drive to install OI & napp-it on without having to dip into a slot on my *hopefully* 6-disk array. There are multiple SATA ports, but no power cables coming from the power supplies. I could conceivably break out power from a PCI slot, but then I'd still need to find a spot to put the SSD. My best bet looks to be an open USB port on the motherboard. Is it OK to put OI on a USB drive? Any special type I should look for? How big do I need to go?

Would 16GB be sufficient? Something like this might work, as it's SLC and should give decent enough reliability.
 