Hi all
I am building my server over in the DIY server build area. I am using Windows Server 2012 R2 as the main OS and, rightly or wrongly, have also installed the Hyper-V role on it. Basically I am using 3 x 1Gb Intel NICs in a team, and the team attaches to the Hyper-V virtual switch. But I don't seem to be able to get speeds between the VMs of more than 100MB/s.
Quick overview of the system:
It is a domain controller running DNS and DHCP as well, so I'm probably breaking all the rules!
3 x RAID arrays:
- 1 for OS (RAID5, 4 x SSD)
- 1 for Data (RAID6, 8 x 1TB)
- 1 for VMs (RAID6, 8 x 1TB)
Supermicro X9SCM-iiF motherboard with two onboard Intel 82574L Gb NICs
Also 1 x quad-port Intel PRO/1000 PT network card
Everything is using the standard Windows 2012 R2 drivers. I then created an Ethernet team using the built-in Windows teaming; it uses one of the onboard NICs and two ports from the quad-port PT card. The team is set to Switch Dependent and Dynamic, as those seemed the best options. I have also created a LAG on my Cisco SG300-28 switch that all three ports plug into; it is a static LAG, not LACP.
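I set the team up through Server Manager, but as far as I can tell the PowerShell equivalent is roughly this (the adapter names below are placeholders, not my real ones):

# Rough PowerShell equivalent of my team (adapter names are placeholders)
# Switch Dependent with a static LAG on the Cisco side = Static teaming mode
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "Onboard1","Quad1","Quad2" -TeamingMode Static -LoadBalancingAlgorithm Dynamic
# Check the team and its members afterwards
Get-NetLbfoTeam
Get-NetLbfoTeamMember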
Now, in the host OS I can copy between the two RAID6 arrays and get speeds of 500MB/s+, so that is great; the disk speed is OK.
I have a lot of experience in the past using LAGs on physical servers and getting better transfer speeds between them. I always used either the Intel PROSet software or the Broadcom equivalent to get the job done. I used the built-in Windows teaming here as I thought it would be OK (or maybe not).
Now, I am aware that virtualization is a bit more complex, and I think I am just confusing myself with RSS, vRSS, and VMQ, so I want to try and get some help from here please.
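To try to untangle them I have been poking at the host with PowerShell, roughly like this (and as far as I understand, vRSS would be checked with the same Get-NetAdapterRss cmdlet, just from inside a 2012 R2 guest):

# RSS capability and state per adapter on the host
Get-NetAdapterRss
# VMQ capability and state per adapter, including the team
Get-NetAdapterVmq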
I have two VMs set up on the server.
- Windows Server 2012 R2
- Windows 7 Professional SP1
Now, if I copy about 14GB of files (several large ISOs) between them, I get transfer rates of almost 1Gb/s, or about 100MB/s+. Very nice, but I would expect more, as the virtual NICs and the virtual switch all say they are 10Gb. I have the team of 3 x 1Gb cards going into the virtual switch, but it seems like I only ever get the speed of one of them.
Now it gets a bit more complex. The host OS shares the team as well and connects to the virtual switch through a virtual NIC. Again, the OS says this is 10Gb. You are probably all cringing now and wondering why I have set it up like this. Sorry if it is wrong.
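For completeness, the virtual switch was created with management OS sharing turned on, roughly like this (the switch name is just what I called it):

# External virtual switch bound to the team, shared with the host OS
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $true
# The host then talks through a vEthernet adapter on the same switch as the VMs
Get-VMNetworkAdapter -ManagementOS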
If I try to copy the 14GB from the Windows 7 VM to a share on the host OS, then I get speeds of between 20MB/s and 60MB/s if I am lucky. Also, when I do this copy to the host OS, the CPU on the VM maxes out at 100% for the duration of the copy. I tested this in the Windows Server 2012 R2 VM as well, and the same thing happens with CPU usage and network speed.
Copying between the VMs directly, I don't see the continuous 100% CPU usage, and the network speeds are obviously much faster.
Clearly something is wrong somewhere, but I am scratching my head as to what. The whole point of teaming network cards is to get greater throughput and resilience, and I'm stumped as to why I can't get more than the speed of a single 1Gb card.
I did find this article regarding VMQ, and it seems to be my issue, but I am not sure. Any suggestions welcome please; I'm happy to check settings and post pictures of findings, etc.
VMQ Deep Dive, 1 of 3 - Microsoft Enterprise Networking Team - Site Home - TechNet Blogs
When I run the PowerShell script on there, it shows my Ethernet team is capable of VMQ but that it is not enabled, yet if I look in the team NIC settings it says it is.
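In case it helps, this is roughly the check I ran on the host (I may be misreading the output):

# Enabled shows False for the team here, even though the NIC settings dialog says VMQ is on
Get-NetAdapterVmq
# Lists the queues actually allocated to VM network adapters
Get-NetAdapterVmqQueue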
I am confused.com