Norco 2440 Controller Thoughts


Craash

Active Member
Apr 7, 2017
160
27
28
I'm looking for a little advice.

I'm upgrading an existing Norco 4220 platform with a new (to me) SuperMicro X9DR7-LN4F with 2 x E5-2670 (SR0KX) CPUs + 128GB DDR3 ECC.

This existing platform I'm upgrading includes an Adaptec RAID 52445 28-port 512MB SAS RAID controller and 20 2TB 7200RPM SATA drives. The 52445 is only a 3G SAS controller. I'd like to keep using this card and these spinners. I'd rather add additional storage to other machines than spend more money on this box's HBA - unless I need to.

The purpose of this box will be an ESXi platform with about 10-15 VMs and lots of storage. Nothing hugely critical or high traffic.

I do have a 10Gb LAN.

I plan on booting the hypervisor from a SATA SSD via the onboard SATA controller.

I will have a primary VM boot from the same SSD and pass the Adaptec through to it so I can mount all 20 drives in a ZFS array. I will then share this ZFS array back to the ESXi host via NFS for storage of the remaining VMs (delayed boot).
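For what it's worth, here's a rough sketch (the vdev layouts are my own assumptions, not something you've committed to) of the usable space those 20 x 2TB drives would give under a few common ZFS layouts:

```python
# Rough usable-capacity estimates for 20 x 2TB drives under a few
# candidate ZFS layouts. Ignores metadata/slop overhead, so real
# numbers will come in a bit lower.

DRIVES = 20
SIZE_TB = 2

def usable_tb(vdevs, disks_per_vdev, parity_per_vdev):
    """Usable TB for a pool of identical raidz/mirror vdevs."""
    data_disks = disks_per_vdev - parity_per_vdev
    return vdevs * data_disks * SIZE_TB

layouts = {
    # layout name: (vdevs, disks per vdev, parity/redundant disks per vdev)
    "2 x 10-disk raidz2": (2, 10, 2),
    "4 x 5-disk raidz1":  (4, 5, 1),
    "10 x 2-disk mirror": (10, 2, 1),
}

for name, (v, d, p) in layouts.items():
    print(f"{name}: {usable_tb(v, d, p)} TB usable of {DRIVES * SIZE_TB} TB raw")
```

Mirrors cost the most space but rebuild fastest; the raidz layouts keep more capacity at the cost of slower resilvers.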

My primary question is: should I consider replacing the Adaptec card with something different?

My secondary question is: should I consider an entirely different recipe? :)

Thanks for the input,
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
1,140
594
113
New York City
www.glaver.org
This existing platform I'm upgrading includes an Adaptec RAID 52445 28-port 512MB SAS RAID controller and 20 2TB 7200RPM SATA drives. The 52445 is only a 3G SAS controller. I'd like to keep using this card and these spinners. I'd rather add additional storage to other machines than spend more money on this box's HBA - unless I need to.

My primary question is: should I consider replacing the Adaptec card with something different?

My secondary question is: should I consider an entirely different recipe? :)
Non-expander controllers with high port counts are pretty rare, so replacing this might involve a bit of a search, or switching to an expander-based solution (not the greatest idea with SATA drives). You could save a fair amount on power by switching to higher-capacity drives. You have 40TB "weight before cooking". You could get that with 5 * 8TB drives, but you might need more drives depending on what sort of RAID[Z] level you plan on using. I don't know if your existing card has a JBOD mode (IT mode in LSI-speak) - if it doesn't, it is hiding some details from ZFS which can cause problems if there's an error down the road. 8-port IT mode internal SAS controllers from LSI are pretty cheap on the surplus market if you go that route.

To give you an idea of the potential power savings, I switched from 16 * 2TB 7200 SATA to 6 * 8TB 7200 SAS and power consumption dropped from 350W to 250W.
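Putting a number on that 100W difference (the electricity rate below is a placeholder assumption; substitute your own):

```python
# Back-of-envelope annual cost of a 100W continuous saving,
# based on the 16 x 2TB -> 6 x 8TB swap above.
# $0.15/kWh is an assumed rate; plug in your local price.

watts_saved = 350 - 250
hours_per_year = 24 * 365
kwh_per_year = watts_saved * hours_per_year / 1000
cost_per_kwh = 0.15

print(f"{kwh_per_year:.0f} kWh/year, ~${kwh_per_year * cost_per_kwh:.0f}/year")
```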
 

i386

Well-Known Member
Mar 18, 2016
4,221
1,540
113
34
Germany
JBOD/HBA mode was introduced with the 7 series; series 6 controllers and older create a RAID 0 for each HDD.
 


Craash

Active Member
Apr 7, 2017
160
27
28
JBOD/HBA mode was introduced with the 7 series; series 6 controllers and older create a RAID 0 for each HDD.
Great post.

Not only does it save me the headache of determining that on my own, but it also narrows my (low/no-cost) options.

Since I'd only create problems by trying to introduce ZFS into the mix (with the Adaptec), I'm now leaning towards a single large RAID 1+0 array controlled by the card and presented to the ESXi box as local storage.
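As a quick sanity check on that plan (a sketch; the 10-pair mirror layout is my assumption of how the card would stripe it):

```python
# RAID 1+0 across 20 x 2TB drives: 10 mirrored pairs, striped together.
drives, size_tb = 20, 2
pairs = drives // 2
usable = pairs * size_tb  # only one copy of each mirrored pair counts
print(f"{usable} TB usable of {drives * size_tb} TB raw")
# Survives one failure per mirror pair; loses data only if both
# disks in the same pair fail.
```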
 

Craash

Active Member
Apr 7, 2017
160
27
28
Very valid points. I wanted to stay away from expanders if possible. The MB does have 8 ports of onboard SAS, so I could add another HBA or two to control all of the drives. I really did consider moving to higher-density drives, but by the time you factor in the disks needed for redundancy, that's a pretty big $$ outlay just to get back to the storage I have now. If I were close to running out of storage it would be easier - but I'm not even half utilized.

So, still thinking that RAID 10 at the card level might be my best bang for the buck.

I LOVED your "weight before cooking" comment. :)
