Storage Performance advice


Marsh

Moderator
May 12, 2013
2,645
1,496
113
There are times you can find a single 1.2TB PCIe drive, like a Fusion-io, for $300-$400 each.

Each month, I make a promise to myself not to buy any more SSDs until I have time to deploy what I already have.
I am 61 years old; in my will, I'll have my wife bury me with all the unused SSDs.
 

modder man

Active Member
Jan 19, 2015
657
84
28
32
Yeah, there is that too. I try so hard not to buy all these cool shiny things so the wife won't be obligated to beat me for buying things I don't need.
 

fractal

Active Member
Jun 7, 2016
309
69
28
33
One question for the OP -- where are your swap files?

One of my lab VM servers currently runs off a single SSD, which is filling up. I am reviewing my options for it as well. One thought that came to mind was to boot from a smallish SSD, which would be faster than booting from USB, and use the rest of that SSD for swap. That would move the swap files off the primary VM datastore SSD. I was thinking of 3-5 times RAM for the boot/swap SSD. That moves all predictive swapping off the primary VM SSD, as well as any real swapping should I have underprovisioned any VM or get near capacity.

Unfortunately, I am not a VMware expert, so I have no idea whether this is a good idea or a hare-brained one.

I looked briefly at adding an L2ARC to a NAS4Free box that is serving as the VM repository for a different ESX server and ran away screaming. All the FUD about making sure your L2ARC is not too big or you will use all your RAM to hold the L2ARC header tables, and the sizing depending on your block size and version of ZFS, and... eeeeeeek. I gave up. The rules on how to find out whether you are exceeding the ARC all seem to depend on running Solaris and being able to run some Perl scripts that use kstats that don't exist on FreeBSD, so you have no way to tell whether you need an L2ARC or, should you add it, whether it is helping or hurting.
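The closest I got was poking at the kstat.zfs.misc.arcstats sysctls that FreeBSD-based builds do expose. Here is a rough sketch of the sanity check I was eyeballing, assuming those sysctl names are present on your build (they can differ between ZFS versions); it is not any kind of authoritative sizing tool:

Code:
# Rough ARC / L2ARC sanity check for a FreeBSD-based ZFS box (NAS4Free/FreeNAS).
# Assumes the kstat.zfs.misc.arcstats sysctls exist; names can vary by ZFS version.
import subprocess

def sysctl(name):
    # "sysctl -n" prints only the value
    out = subprocess.run(["sysctl", "-n", name],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

hits   = sysctl("kstat.zfs.misc.arcstats.hits")
misses = sysctl("kstat.zfs.misc.arcstats.misses")
size   = sysctl("kstat.zfs.misc.arcstats.size")
l2_hdr = sysctl("kstat.zfs.misc.arcstats.l2_hdr_size")

print(f"ARC size      : {size / 2**30:.1f} GiB")
print(f"ARC hit ratio : {100.0 * hits / (hits + misses):.1f} %")
print(f"L2ARC headers : {l2_hdr / 2**20:.1f} MiB of RAM")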

So, I started thinking about where to put my swap files, since even though there is no guidance on that either, at least you don't have lots of folks who claim to know what they are talking about warning you not to do it.

More spindles/drives is always the easy, brute-force way. Spreading your VMs across multiple datastores on multiple interfaces seems like a way out for what you described. That does, of course, presume you have a place to put the drive and a place to plug it in.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,058
113
@fractal brings up a good point about L2ARC and RAM... but it sounds like @modder man has plenty of RAM to allocate for this purpose. I missed this thought in the previous messages, thanks!
 

modder man

Active Member
Jan 19, 2015
657
84
28
32
Just low-balled a seller on some 160GB S3500s to use in workstations. Got me wondering whether one would make a decent L2ARC, or if it is too small or not fast enough. At $30 each I wasn't too worried.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,058
113
Too small for L2ARC in my opinion.

At $30 each I'd take some ;) if there are any left over, let me know please!
 

modder man

Active Member
Jan 19, 2015
657
84
28
32
What do you look for in a drive to be used as an L2ARC? I now have those S3500s, the 960GB 853T, and an Intel (I think a 530) 480GB.

Also, it looks like there are three different modes that a Fusion-io can be used in. Can I just set that with a Windows workstation and the utility before putting it in my host?
 

modder man

Active Member
Jan 19, 2015
657
84
28
32
Well, I learned something else today that I had not thought of. When you add vdevs one at a time, the data is not striped across all of the vdevs; there will not be a full stripe until each vdev has a balanced amount of data. Should I be handling this differently? I have been adding vdevs as my storage needs increase, though it sounds like that will not work if I want performance to go up with each addition as well.
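For reference, here is roughly how I have been checking how lopsided things are: "zpool list -v" shows per-vdev allocation, and a quick parse of it looks something like this (the pool name "tank" is just a placeholder, and the column layout can vary by ZFS release):

Code:
# Quick look at per-vdev fill, parsed from "zpool list -v".
# "tank" is a placeholder pool name; adjust for your own pool.
import subprocess

POOL = "tank"  # placeholder

out = subprocess.run(["zpool", "list", "-v", "-H", "-p", POOL],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    fields = line.split()
    if not fields:
        continue
    name = fields[0]
    # vdev rows are named like "raidz2-0" or "mirror-1"; leaf disks are skipped
    if name.startswith(("raidz", "mirror")):
        size, alloc = int(fields[1]), int(fields[2])
        print(f"{name}: {100.0 * alloc / size:.0f}% full")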
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,058
113
@modder man I believe only "new" data is striped over all vdevs; old data resides on the existing vdevs only. To utilize the new vdev for "all" of your data you will need to copy it out and copy it back.

That's my understanding, at least.

Maybe it's changed, or @gea has some input. Realistically there could be a 'resilver' button to click or a command I'm not aware of.
 

gea

Well-Known Member
Dec 31, 2010
3,157
1,195
113
DE
If you add a new vdev, the pool is unbalanced, which means that the old bytes remain where they are. Only new or modified data is striped over all vdevs and takes advantage of the new disks and their overall higher performance.

A rebalance must be done manually if needed (in the long run a pool rebalances automatically, but only with active data) and requires a copy action. You can, for example, rename a filesystem and replicate it to its former name, then destroy the renamed one.
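A minimal sketch of such a copy action, only as an illustration (pool and dataset names are examples, and you need enough free space for a complete second copy while it runs):

Code:
# Sketch of the rename + replicate rebalance (dataset names are examples only).
# You need enough free space for a complete second copy while this runs.
import subprocess

POOL = "tank"             # placeholder
DS   = f"{POOL}/media"    # dataset to rebalance (placeholder)
OLD  = f"{POOL}/media_old"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("zfs", "rename", DS, OLD)               # move the unbalanced data aside
run("zfs", "snapshot", f"{OLD}@rebalance")  # snapshot to replicate from

# zfs send | zfs recv writes a fresh copy, striped over all current vdevs
send = subprocess.Popen(["zfs", "send", f"{OLD}@rebalance"], stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", DS], stdin=send.stdout, check=True)
send.wait()

run("zfs", "destroy", "-r", OLD)            # drop the old, unbalanced copy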
 

modder man

Active Member
Jan 19, 2015
657
84
28
32
A response from the ZFS man himself, @gea, thanks. That was the impression I was starting to get: vdev1 is currently at 95% full while vdev2 is at about 20% full, which from discussions with others is a very bad deal. The problem is that the majority of this is media, so there is very little active data and very little chance it balances itself out. If I manually copy data off the array a few hundred GB at a time and then put it back, it will slowly balance out, correct? At least for net new stuff?
 

gea

Well-Known Member
Dec 31, 2010
3,157
1,195
113
DE
It is quite bad if your pool is 95% full. Even a copy action will then keep the data mostly on the new vdev, while you may want to distribute it over all disks.

You must back up or delete a part of your old files first. Only then can a copy spread data over all disks.
 

modder man

Active Member
Jan 19, 2015
657
84
28
32
Yeah, that is what I was afraid of. It looks like I really need to plan to vacate my array and allow it to rebuild.

Another interesting thing to me is that my ARC hit rate has gotten very low; it is now at 68%. Are there some good reads out there to help me understand how or why this is happening? The machine still has free RAM it could claim for ARC but hasn't done so.
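For anyone who wants to point me in the right direction, here is roughly what I am looking at, assuming FreeBSD-style sysctl names (other platforms expose the same counters through kstat instead):

Code:
# Is the ARC actually allowed to grow? Compare its current size to the ceiling.
# FreeBSD-style sysctl names assumed; other platforms expose the same via kstat.
import subprocess

def sysctl(name):
    out = subprocess.run(["sysctl", "-n", name],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

size  = sysctl("kstat.zfs.misc.arcstats.size")   # current ARC size
c_max = sysctl("kstat.zfs.misc.arcstats.c_max")  # ceiling the kernel is honoring
limit = sysctl("vfs.zfs.arc_max")                # loader/sysctl tunable

print(f"ARC size : {size / 2**30:.1f} GiB")
print(f"ARC c_max: {c_max / 2**30:.1f} GiB")
print(f"arc_max  : {limit / 2**30:.1f} GiB (the tunable; 0 means auto on newer builds)")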
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
So it sounds like, in your experience, with a properly configured L2ARC and a SLOG even a raidz array can provide plenty of performance for VMs? That is the opposite of what most say. That said, I have seen some of your build logs, so I trust the advice you give.
I've run striped-mirror (RAID-10) 6-disk ZFS setups on 7200rpm spinners with a ZeusRAM as the SLOG, and it kicked butt and took names to the tune of roughly 600 MB/s reads / 400 MB/s writes, but then again the Zeusie was doing all the heavy lifting and then de-staging. Ran 30 VMs off that for a LONG time with very performant characteristics.

That being said, I now run my VM pool off of 8x striped-mirror HUSSL 400GB devices with no L2ARC or separate SLOG, and am a happy man.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Ahhh... in that case, what about future-proofing...

Getting a SuperMicro SC216 24x 2.5" chassis plus the cheap JBOD controller (no IPMI / fan controls / etc., but also CHEAP), and then you can slowly add more SSDs as you can afford/need them... create a 'new' pool of mirrored vdevs of SSDs, start with one more SSD like you have now if you want, and add a SLOG later if needed.

- PCIe HBA w/ external ports
- Supermicro SC216 chassis ($200-300 depending on PSU, backplane, etc.)
- a 2nd SSD to start your new pool at minimum, or 3x SSDs to do a stripe in there

Then, as you need more capacity and performance, add an additional 2x SSD vdev to your new VM pool.

Of course, the cheapest and simplest would be to add a PCIe device for a fast SLOG and use your existing SSD for L2ARC and your existing pool for capacity :) but then you're limited again going forward.
Good tips, but the only thing I worry about or am concerned with is the compatibility of using PCIe low-latency devices such as Fusion-io/NVMe devices as a SLOG... support may be lacking depending on the OS/storage platform, and YMMV.

Just saying/warning :-D Wish I still had my Fusion-io device to test in FreeNAS. I'm happy with my SAS SLC SSDs as SLOGs, but I'm curious whether anyone knows if a Fusion-io is natively supported by FreeNAS/OmniOS. Dunno why I didn't try this before. SMH
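For anyone following along at home, the zpool side of bolting on a SLOG and an L2ARC is only a couple of commands. A rough sketch, with made-up pool and device names, and with the caveat that whether a Fusion-io even shows up as a usable block device depends on the VSL driver for your platform:

Code:
# Sketch: bolt a SLOG and an L2ARC onto an existing pool.
# Pool and device names are made up; check "zpool status" for your own layout.
import subprocess

POOL      = "tank"        # placeholder pool name
SLOG_DEV  = "/dev/da10"   # placeholder: fast, power-loss-protected log device
CACHE_DEV = "/dev/da11"   # placeholder: the repurposed SATA SSD

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("zpool", "add", POOL, "log", SLOG_DEV)     # dedicated SLOG for the ZIL
run("zpool", "add", POOL, "cache", CACHE_DEV)  # L2ARC
run("zpool", "status", POOL)                   # verify both devices show up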
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,058
113
Fusion-io -- pretty much the most widely accepted PCIe accelerator in my experience :) NVMe is definitely an issue, but according to @gea that's fixed soooooooon ;):)

I just got a server back that I'm building; it will be made out of 3TB drives and a Fusion-io ;) I'll post a build log.