Help Optimizing multipath.conf Values


herby

Active Member
Xenservers (XCP-ng 7.5)
-long-red: Supermicro H8DCL-6F, AMD 4332 HE (x2), 32GB ECC, ConnectX-2, Radeon RX 480 & HD 6450
-big-red: Supermicro H8SCM-F, AMD 4376 HE, 32GB ECC, ConnectX-2

FreeNAS (11.1)
-blue: Supermicro H8SCM-F, Opteron 4376 HE, 48GB ECC, ConnectX-2, H310 (x2), 480GB DC S3500 (x2), 240GB M500 (x2)
-SE-3016: 1TB WD10EACS (x2), 2TB 5k3000 (x2), 3TB DT01ACA300 (x6)

Network: Linksys E2000-RM (running Tomato), Mellanox SX6012, Dell 2816

My goal is to get the most out of my iSCSI connections from my Xenservers to my FreeNAS. Right now I'm feeding traffic over two asymmetrical paths using MPIO: one connection is GbE, the other is 10GbE. Below is the relevant section of my current multipath.conf file on my Xenservers:
Code:
device {
        vendor                  "FreeNAS"
        product                 "iSCSI Disk"
        path_grouping_policy    group_by_prio
        path_selector           "queue-length 0"
        hardware_handler        "1 alua"
        rr_weight               priorities
}
A little while ago, when I was doing MPIO over two GbE connections, I used a "round-robin 0" path selector and sent equal traffic down both paths (roughly the sketch below). After I added 10GbE NICs to my machines I went back to the default multipath.conf settings, and after booting my Xenservers I would manually bring down the slower GbE interface to make sure the 10GbE path was active, then bring the GbE interface back up (passive by default) just for failover.
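
For reference, that earlier equal-split setup looked roughly like this (going from memory, so treat it as a sketch of the standard round-robin form rather than my exact old file):
Code:
device {
        vendor                  "FreeNAS"
        product                 "iSCSI Disk"
        # single path group, both GbE paths active at once
        path_grouping_policy    multibus
        # plain round-robin with equal weights = 50:50 split
        path_selector           "round-robin 0"
        rr_weight               uniform
}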

Lately I'm using the multipath.conf values above, which are measurably slower with traffic going down both paths than if I force it all onto 10GbE. So, what changes would give me a 10:1 split of traffic (40:1 down the road)? Or alternatively, how can I just go active/passive and have DM-Multipath choose the faster path without manual intervention? Sketches of what I think those two options look like are below.
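
For the fixed-ratio split, my understanding is that rr_weight priorities only applies to the round-robin selector, and that the weightedpath prioritizer lets you hand-assign per-path priorities, so something along these lines might work (the sdb/sdc device names are placeholders for whatever the 10GbE and GbE iSCSI sessions show up as on the host, and weightedpath support depends on the multipath-tools version XenServer ships):
Code:
device {
        vendor                  "FreeNAS"
        product                 "iSCSI Disk"
        # keep both paths in one path group so both carry I/O
        path_grouping_policy    multibus
        # fixed-ratio weighting only works with round-robin
        path_selector           "round-robin 0"
        # scale each path's share of I/O by its priority
        rr_weight               priorities
        # hand-assigned priorities: 10 for the 10GbE path, 1 for GbE
        # ("sdb"/"sdc" are placeholders for the actual path devices)
        prio                    "weightedpath"
        prio_args               "devname sdb 10 sdc 1"
}
For the automatic active/passive option, group_by_prio with the same hand-assigned priorities should put the 10GbE path in its own higher-priority group so only it carries I/O, leaving the GbE path as a failover group; failback immediate should move traffic back to 10GbE once it recovers:
Code:
device {
        vendor                  "FreeNAS"
        product                 "iSCSI Disk"
        # separate path groups per priority; only the highest-priority
        # group is used, the lower one sits passive for failover
        path_grouping_policy    group_by_prio
        path_selector           "round-robin 0"
        prio                    "weightedpath"
        prio_args               "devname sdb 10 sdc 1"
        # return to the 10GbE group automatically once it comes back
        failback                immediate
}
(Device names can change across reboots, so prio_args keyed on hbtl or wwn might be more robust than devname. It might also be worth trying path_selector "service-time 0" with multibus, which weights paths dynamically by measured service time rather than a fixed ratio.)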
 