NVMe array planning help

int0x2e

Member
Dec 9, 2015
Hi everyone,
I've accumulated a number of PCIe-based flash storage cards which I now hope to put to good use.
I want to build a setup that stores relatively static content (backup + media) on magnetic media with solid-state caching, and also supports VM storage and operation (all solid-state, with separate backup). This will likely span more than one machine, since my main limitation is PCIe lane count, and I plan to use ZFS like many of you have already done.

Here's a quick list of the cards I have on hand for the project:
2x Fusion-IO ioDrive2 1.2 TB MLC
4x Sun F80 (LSI WarpDrive) 800GB MLC (internally: 4 x 200GB eMLC drives in IT-mode)
2x LSI WarpDrive 1.2 TB MLC (internally: 6 x 200GB eMLC drives in RAID-0)
1x LSI WarpDrive 1.86 TB MLC (internally: 4 x 465GB eMLC drives in RAID-0)

These will be placed on Intel W2600CR2 motherboards, which have 8 PCIe slots, but I'll need to sacrifice one for the InfiniBand network card (40Gbps) to share the storage. So I need to optimize for up to 7 cards per machine.

Out of the bunch, it would seem that the Fusion-IO cards will give the lowest latency for both read and write operations, as well as the highest IOPS, while the LSI WarpDrives will yield the best sequential performance.

Is there value in creating a high-speed SLOG device for both the magnetic-media pool and the VM pool? Would it make sense to build it out of the Fusion-IO cards in RAID 1?
About the VM pool - is it wise to use RAID 1 (or even RAID 5) for added reliability, or should I just rely on periodic backups?
What would be the best way to split the duties among the different machines? I originally hoped to have a consolidated storage/VM host as my main hub, with additional nodes brought up on demand. Is that still wise, or should I aim for a 2-node setup with similar specs and a different model?

Any other thoughts? I'd really love to learn from everyone's experience... :)

MiniKnight

Well-Known Member
Mar 30, 2012
@int0x2e Do you have 7x cards per machine?

Here's what I'd do, and the fact that you're doing so much mixing and matching makes it slightly harder.

If you've got a big spindle array, use an F80 and either set all four of its drives as L2ARC, or use 2 drives for L2ARC and 1-2 drives for a ZIL, depending on whether you want it mirrored. If you don't want the mirrored ZIL, go 3 L2ARC + 1 ZIL. I would also RAID 10 the 1.86TB card into the big mirror/stripe array on that host.
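In zpool terms, that split might look something like the sketch below. The pool name `tank` and the device paths are placeholders, not from the thread - substitute whatever your F80's four internal drives enumerate as:

```shell
# Hypothetical device names; "tank" is the spinning-disk pool.
# Two of the F80's internal drives as a mirrored SLOG:
zpool add tank log mirror /dev/sdb /dev/sdc

# The other two as L2ARC (cache vdevs are never mirrored in ZFS,
# since a lost cache device only costs you cached reads):
zpool add tank cache /dev/sdd /dev/se
```

If you skip the mirrored ZIL, the same pattern applies with one `log` device and three `cache` devices.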

Fusion-IO I'd make a mirror and just VM host from it. That leaves 3x F80s (12 drives) + 2x 1.2TB WarpDrives (12 drives). You could either do a 24-drive mirror/stripe (like RAID 10), assign 200GB mirrors to each VM on different cards, or whatever.

I doubt you would notice much of a difference using the FIO cards as cache for the SSDs. You'd get some benefit, but you're adding another tier, so it isn't going to help you much.

Another consideration is that the FIO cards are going to require specific OS versions while the LSI cards will work in almost anything.

int0x2e

@MiniKnight, sorry if I wasn't clear - I've got a total of 9 cards, but can put up to 7 in a single machine.
I liked your idea of the 3x 800GB + 2x 1.2TB RAID 10. I will definitely put more thought into it.
I also know this is a big ol' mix-and-match. I'm trying to make the most of it.