A not-so-high-density storage server


Deci

Active Member
Feb 15, 2015
This is the only system I have the SATA DOMs installed in, so I have no idea whether that's an issue on other boards.
 

Deci

Active Member
Feb 15, 2015
In the spirit of slamming data into and out of the drives as fast as possible, here are some basic array numbers:

11-drive basic array: 1753 MB/s write, 1446 MB/s read
16-drive basic array: 2515 MB/s write, 1710 MB/s read
18-drive basic array: 2786 MB/s write, 1765 MB/s read
20-drive basic array: 2642 MB/s write, 1802 MB/s read
22-drive basic array: 2670 MB/s write, 1783 MB/s read
24-drive basic array: 2556 MB/s write, 1734 MB/s read
26-drive basic array: 2446 MB/s write, 1718 MB/s read
28-drive basic array: 2389 MB/s write, 1723 MB/s read
33-drive basic array: 2797 MB/s write, 1815 MB/s read

As I pass the 18-drive mark I seem to start hitting overhead limits: write performance goes backwards until quite a few more drives are added, then spikes up again before dropping back, while read speeds continue to improve. This could probably be fixed by adding more HBA cards or by splitting the drives more evenly over the ones already installed (44x 3.5" disks off the 9207 and 22x 3.5" off the integrated 9300 chip, with 24 spare 2.5" slots also on that chip). However, the speeds they can already supply outpace the interface to the virtual machines this will be serving, which is only 16Gbit, so it's not a particularly big issue.
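
For rough context, the back-of-the-envelope arithmetic behind that last point (ignoring FC protocol overhead, so the real ceiling is somewhat lower):

echo $(( 16 * 1000 / 8 ))   # 16 Gbit/s link = roughly 2000 MB/s ceiling, below the ~2600-2800 MB/s the larger arrays can push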

One odd thing is that these drives appear to write faster than they can read, which isn't something I have encountered before.
 

Deci

Active Member
Feb 15, 2015
The server is giving very consistent results with the balanced setup:

3.5" Chassis 1 | vdev0-4 disks | vdev1-3 disks | vdev2-4 disks | vdev3-4 disks | vdev4-3 disks | vdev5-4 disks
3.5" Chassis 2 | vdev0-4 disks | vdev1-4 disks | vdev2-3 disks | vdev3-4 disks | vdev4-4 disks | vdev5-3 disks
3.5" Chassis 3 | vdev0-3 disks | vdev1-4 disks | vdev2-4 disks | vdev3-3 disks | vdev4-4 disks | vdev5-4 disks
 

Deci

Active Member
Feb 15, 2015
I have made some changes to the box. I am trying to get it working over 10Gb Ethernet for NFS, however I am having some issues with throughput: the maximum speeds I am getting to/from the NFS share are 396-408 MB/s read and write. For testing I have disabled sync, with no change to speeds.

Hardware is the same except the FC cards have come out and an Intel X520 NIC has been added. From the storage box it goes directly into a Cisco 6500/X6708-10GE card, back out of that same card, and into an HP C7000 blade centre with FlexFabric 10Gb/24-port VC modules, with one VC module/blade Ethernet port dedicated to storage traffic and the MTU set to 9000 for that port/vSwitch/vmkernel.

The Cisco has an MTU of 9216, as that is the only jumbo-frame value you can set as far as I am aware. The FlexFabric modules do not have any jumbo-frame settings I have ever found; everything I have seen suggests they simply pass on frames as they are given.

I have tried OmniOS and Solaris 11.2; both use the ixgbe driver for the X520, and everything I have found says the only thing required for jumbo frames is to raise the MTU above 1500. It has been set by changing "default_mtu = 1500;" to "default_mtu = 9000;" in /kernel/drv/ixgbe.conf and running the three commands below, after which "ifconfig ixgbe0" shows an MTU of 9000:

ipadm delete-if ixgbe0
dladm set-linkprop -p mtu=9000 ixgbe0
ipadm create-if ixgbe0
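
As a quick sanity check (assuming the link really is ixgbe0), the new MTU can also be confirmed at the datalink layer with:

dladm show-linkprop -p mtu ixgbe0   # VALUE should read 9000
dladm show-link ixgbe0              # MTU column should also show 9000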

Anyone have any ideas as to what might be wrong or incorrectly set up?
 

Quasduco

Active Member
Nov 16, 2015
Are you using fiber or DACs?

What speeds are you getting with non-jumbo frames?

Have you run iperf to check that network connectivity is good? I get ~9.4Gb on X520s with non-jumbo frames in iperf, no tuning at all.
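
If it helps, a minimal iperf run (assuming iperf2 and a placeholder address of 10.0.0.10 for the storage box) looks something like:

iperf -s                        # on the storage box
iperf -c 10.0.0.10 -P 4 -t 30   # on a test client: 4 parallel streams for 30 seconds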
 

Deci

Active Member
Feb 15, 2015
Quasduco said:
Are you using fiber or DACs?

What speeds are you getting with non-jumbo frames?

Have you run iperf to check that network connectivity is good? I get ~9.4Gb on X520s with non-jumbo frames in iperf, no tuning at all.
Fiber.

Jumbo and non-jumbo frames vary in speed by only ~10-15 MB/s.

I was having some issues with DNS and wasn't able to install iperf. I have previously tested it, and without jumbo frames it was peaking at around 8.2Gb with 2-3 streams, while a single stream showed 5.6Gb (both servers using X520 cards via the 6500, one on Win2012 and one on Solaris 11.2). For now I have wiped it and installed Storage Server 2012 R2; I will give that a test over the next few days and see what results I get.
 

Deci

Active Member
Feb 15, 2015
No change with Windows. I have changed it back to Solaris and removed the 6500 switch; the X520 card is now directly connected to the HP Virtual Connect module, however I am still seeing a limit of 400 MB/s both ways.

iperf testing from one blade to another within the same enclosure shows the same limits. Has anyone with HP blade centres come across this before? It doesn't seem to matter whether jumbo frames are on or not, and the Virtual Connect isn't splitting the single 10Gb port up; the whole port's 10Gbit is allocated to the one uplink.

 

Deci

Active Member
Feb 15, 2015
I added a pfSense virtual machine into the mix to hand out DHCP for all the connections on the storage side, and throughput has gone from about 4Gbit to near the full 10Gbit. All I can think of is that it must have been trying to do something with the default gateway for the ESXi management network, which is on an entirely different network without any kind of routing between them.
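
For anyone chasing the same thing, a couple of ESXi-side checks can show which gateway and vmkernel port the storage traffic is actually using (vmk1 and 10.0.0.10 are placeholders for the storage vmkernel port and the NFS server):

esxcli network ip route ipv4 list      # confirm which gateway each network is using
vmkping -I vmk1 -s 8972 -d 10.0.0.10   # jumbo-frame ping from the storage vmkernel port, fragmentation disabled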

 

Deci

Active Member
Feb 15, 2015
The box is still running strong. Since then I have migrated everything off it to an almost identical copy of this machine located on the other side of town so I could change the ZFS storage configuration. Both machines act as replication points for each other (via the napp-it extension) over a fiber link we have between the sites, replicating changes every couple of hours.
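
Under the hood this kind of replication boils down to incremental ZFS snapshot send/receive; a minimal hand-rolled sketch (the napp-it extension manages this automatically, and tank/vm, backupbox and the snapshot names here are placeholders):

zfs snapshot tank/vm@rep-2                                                        # take a new snapshot on the source
zfs send -i tank/vm@rep-1 tank/vm@rep-2 | ssh backupbox zfs receive -F tank/vm    # ship only the changes since the previous snapshot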

It has been running for a while with 33x mirror vdevs for the main storage. Some 10k rpm 2.5" drives and some of the SM843T drives from the deals thread have been added for dedicated use by an SQL server (SSDs for DB storage and 10k rpm drives for log storage). Overall, the IO performance of the virtual machines is noticeably more responsive since changing to all mirrors instead of z2/z3 vdevs. The peak read/write speeds via NFS have not changed much, but total usable space has taken a fair hit.
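
For reference, a mirror layout is just a pair of disks per vdev; a minimal sketch with hypothetical device names (the real pool has 33 such pairs):

zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0
# ...and so on up to 33 mirror pairs; random IO scales with the number of vdevs,
# while usable space drops to half of the raw capacity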

It has been really solid reliability-wise. I had a few drives go early on, but that is to be expected with the number of drives in it, and they were all replaced under warranty; since then it hasn't given me any issues.
 