I have a collection of hardware I want to use for a few purposes. My original plan was to create a couple of purpose-specific configs, but I'm rethinking it. In the end I need a virtualization platform that can run 80-100 VMs (most very lightweight, with 5-8 having potentially high IOPS requirements of ~10k-15k sustained) and a 100-150 TB SAN, primarily iSCSI with smaller SMB and NFS shares.
The minimum total IOPS needed is ~40k-60k read/write, based on an existing C7000 and tiered SAN cluster I am planning to replace or upgrade (the current solution maxes out around 20k-30k depending on flash hit rate; the spinning disks are only good for ~2.5k). Initially I was going to use the P822 SmartCache feature to create tiered storage, with CentOS as the host OS for a home-brew iSCSI storage solution on the E5-v1 server, and use the other G8 as a virtualization host sharing the storage server with the other network hosts.
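For anyone sanity-checking the spindle math behind those numbers, here's a rough back-of-the-envelope sketch. The per-drive IOPS figure and the RAID write penalty are my assumptions for illustration, not measured values:

```python
# Rough IOPS sizing sketch for an array of 7.2k spindles.
# Assumptions (not measured): ~75 random IOPS per 7.2k drive;
# RAID10 write penalty of 2 back-end ops per front-end write.

def array_iops(drives: int, per_drive_iops: int = 75,
               read_fraction: float = 0.7, write_penalty: int = 2) -> float:
    """Effective mixed random IOPS for an array of identical spindles."""
    raw = drives * per_drive_iops
    # Each write costs `write_penalty` back-end ops; each read costs 1.
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

# 50 of the HGST 3TB 7.2k drives in RAID10, assuming a 70/30 read/write mix:
spindle_iops = array_iops(50)
print(f"{spindle_iops:.0f} effective IOPS from spindles alone")
```

Whatever the exact assumptions, the spindles land in the low thousands of IOPS, nowhere near 40k-60k, so the flash tier has to absorb most of the random I/O.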
I think I've reached a paralysis point after going through all of the options and available solutions and wanting to try everything under the sun. After some research I am wary of the affordable high-IOPS flash parts and their compatibility with DL380p G8 servers, HP controllers, etc. The server specs are decent and I believe they should run whatever I go with, but non-HP parts may be problematic.
I'm looking for thoughts and opinions on the most cost-effective and best-performing course to take for virtualization and network storage, using what I already have around plus, hopefully, a few small purchases. The hardware I currently have for the build:
(1) HP DL380P G8 8SFF: (2) E5-2690v2 proc, 128GB RAM, P420i 1GB FBWC, 8x smart trays, 2 - 10G SFP+ flexLOM
(1) HP DL380P G8 8SFF: (2) E5-2690v1 proc, 98GB RAM, P420i 1GB FBWC, 8x smart trays, 4 - 1G flexLOM
(1) SuperMicro 45 Bay LFF 4U server chassis, 60+ SM trays
(1) generic 60 bay top load dual controller SAN (I think it is a rebranded HGST unit from a niche SAN vendor) - one controller installed: E5-2660v1 64GB Ram, 2 SFP+ 10G ports and all trays installed (empty)
(1) Mellanox IS5035 36 port QDR managed switch
(10) Mellanox ConnectX-3 Pro VPI adapters (dual port)
(50) HGST 3TB 6g 7.2k drives (new) and 16 used in current array
(26) Seagate 300GB 15k.7 Cheetahs; I don't plan to use these, but they are available
(2) Areca 1882ix adapters
(1) Areca 1883ix adapter
(4) 500GB Samsung 860 EVO SSDs
I still have not purchased the RAID controllers/HBAs, additional server RAM, SAS SSDs, possible Optane drives, >6TB spindles, etc.
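As a quick sanity check on the 100-150 TB target against the 3TB inventory (50 new + 16 in the current array), here's a sketch. The double-parity vdev width is an illustrative assumption on my part, not a layout recommendation:

```python
# Usable-capacity sketch for the HGST 3TB drives (50 new + 16 used).
# Assumes RAID6/RAID-Z2-style groups of 11 drives (9 data + 2 parity);
# the group width is an assumption for illustration only.

DRIVE_TB = 3

def usable_tb(drives: int, group_width: int = 11, parity: int = 2) -> int:
    """Usable TB when drives are grouped into fixed-width parity groups."""
    groups = drives // group_width        # leftover drives become spares
    return groups * (group_width - parity) * DRIVE_TB

print(usable_tb(66))   # all 66 drives -> 162
print(usable_tb(50))   # new drives only -> 108
```

So even with double parity the raw inventory roughly covers the capacity goal; the real open question is the flash/hot tier, not spindle count.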
I was considering FreeNAS, StarWind, Ceph, Proxmox, VMware, OpenStack, etc.; I didn't even realize there were so many options until I started planning the virt/SAN build and looking at what I had around. I've read a lot of reviews and guides, and everything has pros and cons, but I am having trouble choosing a tack to take and would like a little real-world feedback and opinion. Or, even better, options I don't know about yet. This will be primarily a home solution, but I am also planning to mirror my ISP infrastructure to test and play with configurations, new software, etc.