Multiple HBA usage?


T_Minus

Build. Break. Fix. Repeat
Is it best to load up 2 or 3 HBAs in PCIe slots belonging to 1 CPU or should they be loaded some on CPU1 and some CPU2? Please explain reasoning :)
 

PigLover

Moderator
Cannot be answered without knowing your application and the details of your requirements. The best answer you can get to the question as written is 'it depends'.
 

T_Minus

Build. Break. Fix. Repeat
The system (CPU/board) is on my bench right now for testing; I figured I'd drop the HBAs in there to test this JBOD I just finished putting together. I'm not sure of the final motherboard or CPUs, but it will likely be a similar or newer 2P setup.

2P SM motherboard w/ 2x E5-2670 v1, with 3x LSI HBAs connected to a SM JBOD chassis holding 24x SSDs (6x SFF-8087 internal with an external SFF-8088 adapter), no expander.

Potentially a 4th HBA or PCIe device for a ZFS SLOG, which on this board will have to go in another CPU's PCIe slot.

I want to compare ZFS on Linux, ZFS on OmniOS, and mdadm on Linux before deploying this larger setup. The layout will be a pool of mirrors on ZFS and RAID 10 on mdadm.
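For reference, this is roughly what the two layouts look like. A quick Python sketch that just prints the zpool / mdadm commands rather than running them; the /dev/sdb../dev/sdy names are placeholders for the 24 JBOD SSDs, not the real device names:

```python
# Sketch: equivalent "pool of mirrors" (ZFS) and RAID 10 (mdadm) layouts for a
# 24-SSD JBOD. Device names are placeholders -- adjust for the real system.
import string

ssds = [f"/dev/sd{c}" for c in string.ascii_lowercase[1:25]]  # /dev/sdb .. /dev/sdy, 24 disks

# ZFS: 12 two-way mirror vdevs striped into one pool
vdevs = " ".join(f"mirror {a} {b}" for a, b in zip(ssds[0::2], ssds[1::2]))
print(f"zpool create -o ashift=12 tank {vdevs}")

# mdadm: one RAID 10 array over the same 24 devices
print(f"mdadm --create /dev/md0 --level=10 --raid-devices={len(ssds)} " + " ".join(ssds))
```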
 

PigLover

Moderator
Serving ZFS locally (to apps or VMs on the same server) or over the network? If over the network, what NICs, and where do you slot them?
 

T_Minus

Build. Break. Fix. Repeat
Serving ZFS locally (to apps or VMs on the same server) or over the network? If over the network, what NICs, and where do you slot them?
The plan is to use ConnectX-2 or ConnectX-3 for 40Gb IB.

I have another SM board with a lot more PCIe slots, so I can technically run them all on 1 CPU or split them up, but again I'm not sure about the bifurcation on that larger board and the effects it may have. And if I do end up using that larger board (waiting on a deal for a chassis), I would want to take advantage of the other 5+ PCIe slots for other storage (Fusion-io, or another HBA for spinners, or a 2nd read-only SSD array).
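On the bench I can at least confirm which socket each card ends up behind with something like this quick Python sketch; it only reads the standard Linux sysfs attributes, nothing here is board-specific:

```python
# Sketch: list SAS/NVMe/network PCIe devices and the NUMA node they attach to.
# Reads standard sysfs attributes; numa_node reports -1 on single-node systems.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cls = (dev / "class").read_text().strip()
    # 0x0107xx = SAS controller, 0x0108xx = NVMe, 0x02xxxx = network controller
    if cls.startswith(("0x0107", "0x0108", "0x02")):
        node = (dev / "numa_node").read_text().strip()
        print(f"{dev.name}  class={cls}  numa_node={node}")
```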

Thank you!
 

PigLover

Moderator
Think of it this way and the answer becomes somewhat clear. Your application (a big ZFS array feeding 40Gb links) is pretty much a memory pump. You get bits off the disk, repackage them, and pump them down the NIC (and in reverse). You have some limited processing to do for ZFS checksums/CRC. The network checksums are offloaded to the NIC, so they don't count. If your ZFS array is RAID-Z, then you also have some parity work to do.

With current-gen CPUs this processing load is trivial; they won't even break a sweat. You don't need a large memory footprint unless you are doing ZFS dedup, and cache management is a waste of time because you are pumping large blocks. So what you want to optimize is memory and I/O latency. Making your bits cross the QPI interconnect between CPUs to get from the HBA on one CPU to the NIC on the other is just wasted time.

So: if you can fit the HBAs and NICs on the PCIe links of one CPU, you are better off doing so.
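And once they are all on one CPU, it also helps to keep the serving process (and therefore the buffers it touches) on that same CPU's cores, so the data doesn't get bounced over QPI anyway. A rough Python sketch of the idea; node 0 is an assumption here, so check where the cards actually land first:

```python
# Sketch: pin the current process to the CPUs of the NUMA node the HBAs and NIC
# hang off, so its buffers are allocated locally and disk->NIC traffic stays
# off the QPI link. Assumes the cards were confirmed to sit on node 0.
import os

def cpus_of_node(node: int) -> set[int]:
    """Parse the kernel's cpulist for a NUMA node, e.g. '0-7,16-23'."""
    text = open(f"/sys/devices/system/node/node{node}/cpulist").read().strip()
    cpus = set()
    for part in text.split(","):
        lo, _, hi = part.partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

os.sched_setaffinity(0, cpus_of_node(0))  # node 0 is an assumption
print("running on CPUs:", sorted(os.sched_getaffinity(0)))
```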