I spent the better part of last night going through the InfiniBand threads here, and while I understand the concepts and processes, I'm trying to make heads or tails of specifically what I should purchase for my environment.
The current environment is as follows:
1. Supermicro 36 bay server chassis with an X9DR3-LN4F+-B running as an All-In-One ESX/OI+Napp-IT ZFS server.
2. Dell R710 ESX server
3. Supermicro SC846TQ (Supermicro 24 Bay SATA 4U AMD QC 1.8GHz 8GB H8DME 2 SIM1U SC846TQ Server | eBay) standalone OmniOS/Napp-IT ZFS server. This is for storing backups and our PHD Virtual appliances, but not for running VM storage.
My goals for this year:
Sell off our MD3000i/MD1000 SAN and use the cash to purchase a Dell C6100 to consolidate the ESX servers. Convert #1 into a pure OmniOS/Napp-IT ZFS server. Keep #3 as is, with the exception of re-purposing its 192GB of RAM to the C6100 and adding new RAM to #1.
I am looking to direct connect for the moment between #1, #2, and #3 due to the cost of switches. A long-term plan is to integrate a true switch, but we're trying to manage the costs as we go.
With that being said, can I virtualize a subnet manager server or would it have to be a physical server?
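From what I've read, a virtualized subnet manager should be workable as long as the SM instance can actually see an InfiniBand port, e.g. a Linux guest with the HCA handed through via PCI passthrough (VT-d). A rough sketch of what I'm picturing, assuming OpenSM on a Debian/Ubuntu VM (package and service names may differ on other distros, and I haven't tested this):

```shell
# Install OpenSM (open-source subnet manager) and the diag tools
# inside a Linux VM that has the IB HCA passed through via VT-d.
sudo apt-get install -y opensm infiniband-diags

# Start OpenSM; with no arguments it manages the first active port it finds.
sudo systemctl enable --now opensm

# Sanity check: once the SM has swept the fabric, ibstat should show
# the local port state as Active rather than Initializing.
ibstat
```

The catch with direct-connect point-to-point links is that each separate link is its own subnet, so each link needs an SM instance running on one end or the other.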
Our storage is NFS to ESX and no iSCSI going forward.
I would love to just throw some $40 Mellanox InfiniHost MHGA28-XTC cards into the servers to get some high-speed bandwidth out there, but I'm not sure whether our environment and my goals would work with that; that's the reason for asking for more specific hardware suggestions. Honestly, 10Gb would probably be plenty for us, but if going to 40Gb keeps the price within reason ($1000), then I would rather just do that. This is mostly for vMotion, backups, and some later plans for array replication. If going to a $250 card saves time and headache, then that would be money well spent.
I'm not sure if that provides enough detail about the hardware configuration or not. I can provide other information as requested. Any suggestions are appreciated.