IO performance


Deci

Active Member
Feb 15, 2015
197
69
28
As is the thing to do at 3am, I have been pondering the differences in IO performance between the following setups, everything else being equal (for a VM store).

30x 3/4TB 3.5" 7200rpm drives, ~170MB/s each (15x 2-disk mirror vdevs)
66x 1TB 2.5" 5400rpm drives, ~105MB/s each (11x 6-disk z2 vdevs)

Presumably the 66-drive setup would give better IOPS, but by how much (enough to justify the cost)? Would it be worth the cost difference over the 30-drive setup, given that a 36-bay Supermicro case can be had much cheaper than a 72-bay 2.5" case?

As a side question, should the vdevs be split differently between the setups, i.e. z2 vdevs instead of mirrors? Ultimately there aren't enough 1TB disks to break the pool into all mirrors and still have the desired space, so those would have to be z2 vdevs to give ~40TB usable; that's not the case for the 3/4TB disks. Does the back-end storage vdev configuration change drastically because of L2ARC and ZIL/SLOG additions? I see some people running a single z2/z3 for bulk storage behind the L2ARC and ZIL.
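Some back-of-envelope numbers for the two layouts (a rough sketch: drive counts are from the post above, the "3/4TB" drives are assumed to be 3TB, and the figures ignore ZFS metadata and free-space slop):

```python
# Rough usable-capacity comparison for the two candidate pools.
# Assumes 3TB for the "3/4TB" drives; real usable space will be a bit
# lower once ZFS metadata and free-space slop are accounted for.

def mirror_usable_tb(vdevs, drive_tb):
    # Each 2-way mirror vdev contributes one drive's worth of space.
    return vdevs * drive_tb

def raidz2_usable_tb(vdevs, disks_per_vdev, drive_tb):
    # RAIDZ2 loses two drives per vdev to parity.
    return vdevs * (disks_per_vdev - 2) * drive_tb

print(mirror_usable_tb(15, 3))     # 30x 3TB as 15 mirrors -> 45
print(raidz2_usable_tb(11, 6, 1))  # 66x 1TB as 11x 6-disk z2 -> 44
```

On the IOPS side, the commonly cited rule of thumb is that each mirror vdev delivers roughly one disk's worth of random write IOPS and two disks' worth of random reads, while each RAIDZ vdev delivers roughly one disk's worth of random IOPS regardless of width, so random performance scales with vdev count more than with spindle count.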
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
If you want to evaluate IO performance of that many drives (of any type), you need to be more specific about how they are connected. Multiple HBAs? Expanders? What type of CPU (or more specifically, what is the CPU's PCIe architecture)? Single CPU or dual? If dual, what NUMA config? There is a LOT more to IO performance than just the number of spindles and spindle speed.
 
  • Like
Reactions: Patriot

Deci

Active Member
Feb 15, 2015
197
69
28
Single 12Gb/s HBA (onboard on an X10-series Supermicro motherboard with a single E5-1620 v3), with integrated 12Gb/s expanders as part of the backplanes in the Supermicro chassis.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
As is the thing to do at 3am, I have been pondering the differences in IO performance between the following setups, everything else being equal (for a VM store).

30x 3/4TB 3.5" 7200rpm drives, ~170MB/s each (15x 2-disk mirror vdevs)
66x 1TB 2.5" 5400rpm drives, ~105MB/s each (11x 6-disk z2 vdevs)

Presumably the 66-drive setup would give better IOPS, but by how much (enough to justify the cost)? Would it be worth the cost difference over the 30-drive setup, given that a 36-bay Supermicro case can be had much cheaper than a 72-bay 2.5" case?

As a side question, should the vdevs be split differently between the setups, i.e. z2 vdevs instead of mirrors? Ultimately there aren't enough 1TB disks to break the pool into all mirrors and still have the desired space, so those would have to be z2 vdevs to give ~40TB usable; that's not the case for the 3/4TB disks. Does the back-end storage vdev configuration change drastically because of L2ARC and ZIL/SLOG additions? I see some people running a single z2/z3 for bulk storage behind the L2ARC and ZIL.
Just "thumb in the air", the 66 drives would provide about the same write IOPS but much higher read IOPS. With RAID cards having persistent caches, however, both would provide "more than enough" IOPS for most use cases, so the higher overall capacity of the 4TB option becomes appealing.
 

JSchuricht

Active Member
Apr 4, 2011
198
74
28
I'll give you another option to think about. With the 2.5" version, you could pick up 1TB VelociRaptors for a bit more IOPS. There are some deals to be had on eBay if you are patient. I picked up a handful of them a few months ago for $50 ea with Supermicro sleds and about 2.5 years of warranty left.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Or... a very small number of SSDs probably beats everything. And compared to either the 30- or 66-drive option, it might not be more expensive.
 

OBasel

Active Member
Dec 28, 2010
494
62
28
Or... a very small number of SSDs probably beats everything. And compared to either the 30- or 66-drive option, it might not be more expensive.
^----- Exactly. And it can all be powered by a 200w PSU in a short depth chassis.
 

Deci

Active Member
Feb 15, 2015
197
69
28
Or... a very small number of SSDs probably beats everything. And compared to either the 30- or 66-drive option, it might not be more expensive.
It is massively more expensive when you factor in 40TB worth of space; even as a single z3 pool it requires 50-ish drives.
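For what it's worth, the 50-ish figure checks out once you leave headroom. A quick sketch (hypothetical helper; assumes 1TB SSDs in a single RAIDZ3 vdev and counts only parity plus a fill-factor allowance for metadata and slop):

```python
import math

def raidz3_drives(usable_tb, drive_tb, fill_factor=1.0):
    # Single RAIDZ3 vdev: enough data drives to hit the target, plus 3 parity.
    # fill_factor < 1 reserves headroom for metadata and free-space slop.
    return math.ceil(usable_tb / (drive_tb * fill_factor)) + 3

print(raidz3_drives(40, 1))        # no headroom -> 43 drives
print(raidz3_drives(40, 1, 0.8))   # 20% headroom -> 53 drives
```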
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
It is massively more expensive when you factor in 40TB worth of space; even as a single z3 pool it requires 50-ish drives.
To be fair - you only asked about IO performance. Never stated that you had a raw volume need and weren't putting in the spindles just for performance.

But yes - I agree - if you need 40TB and are not a billionaire, it's probably not reasonable to go SSD.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
974
113
NYC
But yes - I agree - if you need 40TB and are not a billionaire, it's probably not reasonable to go SSD.
Well, if you want IOPS you've probably short-stroked those hard drives anyway. And 40TB of SSDs = $15,000. Billionaire, sure, but if you have a small business you'd spend that much for a decent backup server.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Intel RES3TV360, dual linked to LSI SAS3008 in IT mode. Yes - I agree - unacceptable performance loss.
 

Deci

Active Member
Feb 15, 2015
197
69
28
Just at consumer-level drive prices, 40TB in Australia is about $27,000, and that's before you get to possible endurance issues, as they are consumer drives. That's more than the budget for the entire server.

I want IOPS, but it needs to be within reason and budget constraints. I don't feel I need (nor can I afford) pure SSD at that scale; the six remaining bays in each chassis were intended as an all-SSD option for the couple of machines that would benefit most from that.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,519
5,821
113
Just at consumer-level drive prices, 40TB in Australia is about $27,000, and that's before you get to possible endurance issues, as they are consumer drives.
Cheaper to buy a round trip flight!
 

Deci

Active Member
Feb 15, 2015
197
69
28
You have to take into account that the conversion rate is about 1 AUD = ~75c US, plus import fees/taxes.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
With that many drives, I'm surprised people haven't suggested a 4-pool ZeusDrive setup for your SLOG, a few TB of SSD for your L2ARC, and obviously RAM. From what I've gathered so far, you should be able to get respectable IOPS as long as you're not running parity and don't overload the RAM/L2ARC/SLOG.

Someone correct me if I'm wrong?

Seems like if you can't go all SSD, then using SSD where you can afford it will yield the best performance increase per dollar versus some number of spinning disks.

Of course, disclaimer: I'm just learning ZFS myself and this is my understanding.
 

Deci

Active Member
Feb 15, 2015
197
69
28
From my looking around (breaking out the rest of the specs for the machine as they stand), you can actually over-allocate L2ARC: roughly 1GB of RAM is consumed for every 40GB of L2ARC, and if you exceed your total RAM you start overflowing to disk, which slows down lookups for retrieval from L2.

Operating System / Storage Platform: likely to end up on Solaris 11.2 with napp-it
CPU: Xeon E5-1620 v3
Motherboard: Supermicro X10SRH-CF
Chassis: 36/72-drive Supermicro 4U
Disks: as per thread + 2x 1TB SSD for L2ARC + 2x 32GB as OS mirror
RAM: 128GB DDR4 - 8x 16GB 2133MHz
Add-in Cards: FusionIO ioDrive2 365GB for ZIL, FC card for IO output
Power Supply: as per chassis
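On the L2ARC RAM point: the often-quoted ~1GB-per-40GB ratio depends heavily on record size, since each cached record keeps a header in RAM. A rough sketch (the ~180 bytes per header is an assumption, a commonly cited figure for older ZFS implementations; the actual size varies by version):

```python
# Estimate RAM consumed by L2ARC headers.
# HEADER_BYTES is an assumption (~180 B/record is often quoted for
# older ZFS); the real figure varies by implementation and version.
HEADER_BYTES = 180

def l2arc_header_ram_gb(l2arc_gb, recordsize_kb):
    records = l2arc_gb * 1024 * 1024 / recordsize_kb
    return records * HEADER_BYTES / (1024 ** 3)

# 2TB of L2ARC (the 2x 1TB SSDs above):
print(l2arc_header_ram_gb(2048, 8))    # 8K records   -> 45.0 GB
print(l2arc_header_ram_gb(2048, 128))  # 128K records -> ~2.8 GB
```

So with the small block sizes typical of a VM store, header overhead can eat far more RAM than the 1:40 ratio suggests, which reinforces the point about over-allocating L2ARC.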
 

Deci

Active Member
Feb 15, 2015
197
69
28
I'll give you another option to think about. With the 2.5" version, you could pick up 1TB VelociRaptors for a bit more IOPS. There are some deals to be had on eBay if you are patient. I picked up a handful of them a few months ago for $50 ea with Supermicro sleds and about 2.5 years of warranty left.
I have been having a look around, but there doesn't seem to be a whole lot up there at the moment. There is an unknown quantity in Europe for ~110 US (unbranded OEM), so I might see if those can be had in a large quantity. I do have some 1TB Raptors used elsewhere for things and they have been exceptionally reliable (however, those were all NIB/full WD branded and a lot more expensive). Worst case, if a few are bad I can get local replacements at the higher cost and still have saved a heap.

If that is the way I end up going, I will not be looking forward to having to remove them all from their 3.5" converter sleds, but they will give 1.5-2x the performance of the WD Red 2.5" 1TB disks.
 

Deci

Active Member
Feb 15, 2015
197
69
28
No reply for the velociraptors as yet.

Anyone know of any good deals on 900GB/10k SAS2 disks in bulk quantity? Looking around, I have seen some (WD XE series, a slightly upgraded SAS version of the Raptor) at a price that's possibly workable.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
I can honestly say I've owned Raptors and VelociRaptors in various versions and quantities since WD released them, and I've NEVER had one not die.
I.e., a 100% failure rate from WD Raptors over time. From my desktop to home servers, they all eventually start clicking and dying.

Personally, given the quantity you're after and my experience with the Raptors, I'd stick with true enterprise drives.
(Mine were only 3.5"; I haven't tested the 2.5" in their own trays with forced air vs the 3.5" converter, which might point to a heat issue, but I didn't see any warnings.)