At my current job, we are no strangers to ZFS, OI/Nexenta, and Napp-It; we have several petabytes of ZFS-based storage for archival and processing systems. As I get ready to start replacing our aging VM servers, I realized their localized storage would be an incredible pain to deal with as we migrate onto new hardware next year. In the meantime, I have had no easy way to manage backups or VM transfers for maintenance due to the nonexistent 4.1 licensing. To deal with this temporarily, I purchased 24 Intel 520 240GB SSDs with the intent of building a fast, inexpensive NFS host. I've built plenty of spindle-based versions, but never a completely SSD-based one. This will run 25-30 very low-IO VMs and be directly connected via 10GbE to each VM host using existing copper/Intel NICs.
Has anyone else run all SSDs?
How did you split your pools up? RAID 10? RAID-Z2?
Did you use cache or ZIL drives? I have two 480GB drives available for cache if I want, and could use two of the 240s for ZIL.
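For reference, the layout I'm leaning toward is striped mirrors (the ZFS equivalent of RAID 10) with a mirrored SLOG and the two 480s as L2ARC. A rough sketch of what that would look like — device names are placeholders, substitute the IDs reported by `format` on your box:

```shell
# Striped mirrors favor the small random writes VMs generate;
# RAID-Z2 would give more usable capacity but fewer IOPS per vdev.
# (Only the first four pairs shown; the pattern repeats for the rest.)
zpool create -o ashift=12 tank \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0 \
    mirror c1t6d0 c1t7d0

# Mirrored SLOG from two of the 240GB drives — NFS sync writes land here:
zpool add tank log mirror c1t8d0 c1t9d0

# The two 480GB drives as L2ARC (cache vdevs are striped, never mirrored):
zpool add tank cache c1t10d0 c1t11d0

# Export the VM datastore over NFS:
zfs create -o sharenfs=on tank/vmstore
```

Whether a SLOG even helps here is an open question, since the pool vdevs are already SSDs with similar latency to the log devices.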
I can't wait to iperf/bonnie test this. I realize my main limiting factor will be the single SFF connection to the backplane/MB. But compared to the 24x 320/500GB WD 2.5" notebook drives in RAID 5 in each, this should be 10x better.
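The quick smoke tests I have in mind look roughly like this — the host IP and sizes are placeholders, and the bonnie++ file size is set well above RAM so the ARC doesn't skew the numbers:

```shell
# Network: iperf server on the storage host...
iperf -s

# ...and a 30-second, 4-stream run from a VM host against it:
iperf -c 10.0.0.10 -t 30 -P 4

# Disk: bonnie++ against the pool, run as an unprivileged user:
bonnie++ -d /tank/vmstore -s 64g -u nobody
```

I'll post results once it's racked.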