So I haven't gotten around to posting any details on the SAN / infrastructure upgrade project I've been working on. All the hardware is now in (Supermicro substituted two sets of parts, adding to the delay, and I had to wait quite a while for my low-profile InfiniBand HBA brackets - but that's another story), and I'm now in the midst of cabling (racking is actually done). Apologies in advance for the somewhat long post.
Are there any Hyper-V experts lurking around who can help me revise/tune my network design for a Hyper-V cluster? I have read the following and am still undecided on how best to proceed:
Hyper-V : Network Design, Configuration and Prioritization : Guidance - TechNet Articles - United States (English) - TechNet Wiki (I realize this wasn't written for Hyper-V 2012, but the requirements are the same)
10Gbps | Working Hard In IT (4 part series on 10GbE and Hyper-V)
The diagram (connectivity per Hyper-V host):
Notes and environment details:
- I only have 21 ports on the access panel in the rack for RJ-45 connections (which can be patched to my stack of 3750s). I'd like to use only as many RJ-45 ports as necessary.
- The PowerConnect 8024F switches each have 4 combo ports supporting both SFP+ and 10GBase-T (so 8 in total)
- Both 10GbE cards (SFP+ and 10GBase-T) are Intel, so both are SR-IOV capable
- Cluster of 3 Hyper-V hosts for now, may expand later
- Used for TFS Lab Management (lots of VMs, and most likely many VMs created/destroyed)
- Blue connectivity boxes are done - the red ones are what I have questions about
- I'd like to only use one 10GbE combo port on the 8024F per host
- SAN traffic will stay segregated on the Infiniband side
Questions:
- What type of traffic does the Hyper-V management network carry? If I need to deploy lots of VMs, is 1GbE enough?
- Live Migration traffic seems to be dormant most of the time. Can I just piggyback on the teamed VM Network trunk and assign an IP address on a private native VLAN? Or partition using SR-IOV?
- Alternatively, Live Migration could go on its own private VLAN on an X540 10GbE port.
- I've put the Cluster Heartbeat (failover) and CSV together. I understand that Redirected I/O could be important. Is 1GbE enough? Maybe put this in a partition on the VM Network team?
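For reference, here's the kind of converged setup I've been sketching in PowerShell for the "piggyback on the teamed trunk" option. All NIC/switch names, VLAN IDs and bandwidth weights below are placeholders, not my actual values - and note that SR-IOV can't be combined with LBFO teaming or with weight-mode QoS on the same vSwitch in 2012, so this sketch and the SR-IOV option are mutually exclusive:

```powershell
# Team the two X540 10GbE ports (names are placeholders)
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# External vSwitch on the team, using weight-based minimum bandwidth QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "VMTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host vNICs for Management, Live Migration, and Cluster/CSV traffic
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster-CSV" -SwitchName "ConvergedSwitch"

# Tag the LM and cluster vNICs with private VLANs (IDs are examples)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster-CSV" -Access -VlanId 30

# Relative weights so a Live Migration burst can't starve cluster heartbeats
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Cluster-CSV" -MinimumBandwidthWeight 10
```

The appeal of this layout is that Redirected I/O and Live Migration would both get 10GbE headroom without burning extra RJ-45 ports on the access panel, at the cost of giving up SR-IOV on that switch.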
I know there are a lot of options and I'm going slightly crazy thinking of them. I just need some injection of sanity by someone who's been through it.