Hi everyone,
I've accumulated a number of PCI-E based flash storage cards which I now hope to put to good use.
I want to build a setup that stores relatively static content (backups + media) on magnetic media with solid-state caching, and also supports VM storage and operation (all solid-state, with separate backups). This will likely span more than one machine, since my main limitation is PCI-E lane count, and I plan to use ZFS like many of you already have.
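To make that concrete, here's roughly the kind of layout I'm picturing for the bulk pool. This is just a sketch with placeholder pool and device names, not a final design:

```
# Bulk pool: spinning disks for backups + media, with one flash card
# (or a slice of one) acting as an L2ARC read cache.
# Disk names are placeholders for whatever the HBAs enumerate.
zpool create tank raidz2 sda sdb sdc sdd sde sdf cache sdg
zfs set compression=lz4 tank
```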
Here's a quick list of the cards I have on hand for the project:
2x Fusion-IO ioDrive2 1.2 TB MLC
4x Sun F80 (LSI WarpDrive) 800GB MLC (internally: 4 x 200GB eMLC drives in IT-mode)
2x LSI WarpDrive 1.2 TB MLC (internally: 6 x 200GB eMLC drives in RAID-0)
1x LSI WarpDrive 1.86 TB MLC (internally: 4 x 465GB eMLC drives in RAID-0)
These will go into Intel W2600CR2 motherboards, which have 8 PCI-E slots, but I'll need to sacrifice one slot for the InfiniBand card (40 Gbps) that shares the storage, so I need to plan for up to 7 flash cards per machine.
Out of the bunch, it seems the Fusion-io cards will give the lowest read and write latency as well as the highest IOPS, while the LSI WarpDrives should yield the best sequential throughput.
Is there value in creating a high-speed SLOG device for both the magnetic-media pool and the VM pool? Would it make sense to build it out of the Fusion-io cards in a mirror (RAID-1)?
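Roughly what I'm imagining, as a hypothetical sketch only (the fio* names are placeholders for however the Fusion-io driver ends up exposing the cards):

```
# Both ioDrive2 cards mirrored as the SLOG for the bulk pool:
zpool add tank log mirror fioa fiob

# If both pools end up needing a SLOG, one option would be to partition
# each card and mirror across cards, e.g. fioa1+fiob1 for the bulk pool
# and fioa2+fiob2 for the VM pool, so neither pool depends on one device.
```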
About the VM pool: is it wise to use mirroring (RAID-1) or even RAID-Z1 (the RAID-5 equivalent) for added reliability, or should I just rely on periodic backups?
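In ZFS terms I think that boils down to striped mirrors vs. RAID-Z1; a minimal sketch of both, again with placeholder device names:

```
# VM pool as striped mirrors (roughly RAID-10): best IOPS, 50% usable capacity.
zpool create vmpool mirror sdh sdi mirror sdj sdk
zfs set compression=lz4 vmpool

# The RAID-Z1 (RAID-5-like) alternative trades IOPS for usable capacity:
# zpool create vmpool raidz1 sdh sdi sdj sdk
```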
What would be the best way to split duties among the machines? I originally hoped to have a consolidated storage/VM host as my main hub, with additional nodes brought up on demand. Is that still wise, or should I aim for a two-node setup with similar specs and a different model?
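In the consolidated model, I'd expect the hub to just export a dataset for the on-demand nodes to mount over IPoIB, something along these lines (dataset name is only an example):

```
# On the storage/VM hub: a dataset for VM images, exported over NFS,
# which the on-demand nodes would mount across the InfiniBand link.
zfs create vmpool/vmstore
zfs set sharenfs=on vmpool/vmstore
```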
Any other thoughts? I'd really love to learn from everyone's experience...