After some recent discussions around here about link aggregation, I decided to do a bit of playing around and share the results. The hardware for this experiment consists of an IBM M1015 HBA, a SuperMicro SC846 chassis with the BPN-SAS2-846EL1 backplane, and lots of spinning disks.
Note: references to connector numbers below are according to the manual for the backplane, available at: http://www.supermicro.com/manuals/other/bpn-sas2-846el.pdf
The first thing I wanted to know was whether the 8-port-wide link aggregation back to the HBA would work. Using two SFF-8087 cables from the HBA, connected to ports 7 and 8 on the backplane, did indeed result in an 8-wide link:
Code:
# ls /sys/class/sas_host/host0/device/
bsg phy-0:0 phy-0:1 phy-0:2 phy-0:3 phy-0:4 phy-0:5 phy-0:6 phy-0:7 port-0:0 sas_host scsi_host subsystem uevent
# ls /sys/class/sas_host/host0/device/port-0\:0/
expander-0:0 phy-0:0 phy-0:1 phy-0:2 phy-0:3 phy-0:4 phy-0:5 phy-0:6 phy-0:7 sas_port uevent
# cat /sys/class/sas_host/host0/device/port-0\:0/sas_port/port-0\:0/num_phys
8
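For the curious, each of those 8 phys is a single SAS2 lane, and the negotiated speed of every lane is exposed the same way - a quick check, assuming the stock Linux SAS transport sysfs layout (the phy names here match the listing above, but will vary between systems):

Code:
# cat /sys/class/sas_phy/phy-0\:0/negotiated_linkrate

Each lane should report 6.0 Gbit on a SAS2 setup, and eight lanes at 6Gbps each is where the 48Gbps aggregate figure below comes from.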
The next thing I wanted to know was whether I could also use the extra SFF-8087 connectors on the backplane to connect additional drives. So I connected an SFF-8087 breakout cable to port 9 on the backplane (7 and 8 both still connected to the HBA) and plugged 4 more HDDs into the other end. Happy to report that all 4 drives came up without issue - with a few of the hotswap bays still empty, I've now got 25 drives connected through that backplane, with a 48Gbps link from the backplane back to my HBA.
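If you'd rather count drives from the OS than eyeball the bays, the SAS transport class also exposes every end device sitting behind the expander - a quick sanity check, assuming this HBA is the only SAS host in the box:

Code:
# ls /sys/class/sas_end_device | wc -l

That should line up with the number of drives on the backplane (give or take an SES enclosure device, if the expander exposes one); note that drives on the onboard SATA ports won't show up here.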
The final bit was to put a load on it and verify the performance. I quickly tossed together this tiny script, which, combined with the existing monitoring I have on the server, put a few nice big spikes into my throughput graphs:

Code:
# cat perftest.sh
# kick off a sequential 10GB read from each of sda through sdy in parallel;
# iflag=nocache asks the kernel not to cache the data, so repeat runs still hit the disks
for disk in a b c d e f g h i j k l m n o p q r s t u v w x y
do
        dd if=/dev/sd$disk of=/dev/null bs=1M count=10000 iflag=nocache &
done
wait
It's not pretty, but it should be very easy for anyone to modify to run against any subset of drives in a Linux box - in my case I've left out any drives connected to the onboard SATA ports of the MB (yes, they do extend into sdaa, sdab, etc. on my system now). On the other hand, the graphs generated by Grafana are pretty - damn you @vanfawx for getting me addicted to this software.

[raintank graph snapshot]
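And if hardcoding drive letters offends, a variation like this picks up the SAS-attached disks automatically - just a sketch, assuming udev's usual /dev/disk/by-path naming (the -sas- pattern is worth double-checking against your own box):

Code:
# read from every SAS-attached whole disk; onboard SATA ports get skipped naturally
for disk in /dev/disk/by-path/*-sas-*
do
        case $disk in *-part*) continue ;; esac  # skip partition symlinks
        dd if="$disk" of=/dev/null bs=1M count=10000 iflag=nocache &
done
wait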
Edit: As noted below, I had an error in the grafana graph definition which caused the sum of bandwidth to be significantly off. I've left the post above with the bad graph alone so that the replies below still make sense - the correct graph is here: [raintank graph snapshot]