Hi Lost-Benji
Thanks for the reply. Input is always welcome, but I think you should have read the rest of the thread and checked out the server build thread over in the DIY servers section too.
Just to answer some of your concerns/questions:
The Hyper-V host also being a domain controller
is what actually turned out to be the problem. There is a link above to a blog where it is explained. There are two group policies that are applied by default on domain controllers which require digital signing of SMB traffic. Once these were disabled, the network speed bottlenecks disappeared completely. dba very kindly backed up my findings of slow speeds by replicating the issue on his Hyper-V host, which is also a domain controller. britinpdx could not replicate the issue because his system was in a workgroup environment.
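For anyone else who hits this and wants to check whether signing is actually being enforced on their own box before touching Group Policy, here is a quick Python sketch that just reads the registry values those policies set. The paths and value names are the standard SMB signing locations; everything else is only illustration, assuming it is run locally on the Windows host in question:

```python
# Minimal sketch: read the registry values that the "Digitally sign
# communications" policies set, to see whether SMB signing is enforced.
import winreg

# Standard registry locations the SMB signing policies write to.
CHECKS = {
    "SMB server (LanmanServer)": r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
    "SMB client (LanmanWorkstation)": r"SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters",
}

def read_value(path, name):
    """Return the DWORD value, or None if it is not set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

for label, path in CHECKS.items():
    required = read_value(path, "RequireSecuritySignature")  # the "(always)" policy
    enabled = read_value(path, "EnableSecuritySignature")    # the "(if the other side agrees)" policy
    print(f"{label}: RequireSecuritySignature={required}, EnableSecuritySignature={enabled}")
```

If RequireSecuritySignature comes back as 1 on the server side of a domain controller, that is the default behaviour the blog post describes.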
The server is a project and a proof of concept for me to learn about the new technologies and the way they work.
All the RAID arrays are on their own separate Adaptec hardware RAID controllers. The OS array is on a 6405, and both the VM and Data arrays are on 5805 cards. These are more than capable of dealing with the parity calculations and overheads. With these mechanical drives I doubt there would be any speed difference at all between a RAID10 and a RAID6 array, although there quite possibly would be with the added speed of SSDs.
The disk copy that clocked at over 500MB/s did actually prove a lot. It was copied from the array where the VM VHD resides to the array where the data shares reside. This showed that the underlying hardware is more than capable of supplying data at a high enough rate to saturate the 1GbE connection. It is all well and good having a 1GbE NIC that can theoretically transfer at 125MB/s, but if you are moving data from an old IDE hard disk that can only manage 50MB/s then you are never going to achieve full network throughput, as the underlying hardware simply cannot supply the data that quickly.
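Just to put rough numbers on that, the effective copy rate is always capped by the slowest hop in the chain. A trivial sketch of the same reasoning (the figures are illustrative, not measurements):

```python
# Back-of-the-envelope sketch: the effective copy rate is capped by the
# slowest step in the chain. All figures below are illustrative.
def effective_rate_mb_s(source_read, dest_write, link):
    """Effective transfer rate in MB/s, limited by the slowest component."""
    return min(source_read, dest_write, link)

GIGABIT_LINK = 1000 / 8  # 1GbE theoretical maximum, roughly 125 MB/s

# Old IDE disk feeding the copy: the 50MB/s disk is the bottleneck, not the NIC.
print(effective_rate_mb_s(source_read=50, dest_write=500, link=GIGABIT_LINK))    # 50

# Arrays that copy locally at 500+ MB/s: now the 1GbE link is the limit.
print(effective_rate_mb_s(source_read=500, dest_write=500, link=GIGABIT_LINK))   # 125.0
```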
The reason for using two ports from the Intel PT quad-port cards and one of the onboard NICs in a team is simple: redundancy. If the Intel PT card fails I still have some connectivity. If I had two switches in a logical switch stack then I would also spread the connections between the switches for the same reason. All the speed tests that were done later on used just one 1GbE NIC and no teaming, so the spread was irrelevant by then as I had gone back to basics.
There are clearly several schools of thought now on using NIC teams in Windows Server 2012 and vNICs on top of them. You can team everything together on the vSwitch and then have separate vNICs for Host, Management, Hyper-V, iSCSI, etc., with quality of service (QoS) applied to the vSwitch. This means that you can lose all bar one of your physical NICs and still have connectivity to every service that you have a vNIC for. It might be a bit slow, but you will have connectivity to all services.
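As a rough illustration of how the QoS side of that behaves when the team is saturated, here is a small sketch of the relative minimum-bandwidth-weight idea; the vNIC names and weight values are made-up examples rather than a recommendation:

```python
# Rough model of relative minimum-bandwidth weights on a converged vSwitch:
# each vNIC's guaranteed share is its weight divided by the total weight,
# multiplied by whatever bandwidth the team currently has.
def guaranteed_shares(weights, team_bandwidth_gbps):
    total = sum(weights.values())
    return {vnic: round(w / total * team_bandwidth_gbps, 2) for vnic, w in weights.items()}

# Made-up example weights, not a recommendation.
example_weights = {"Management": 5, "Cluster": 10, "LiveMigration": 25, "iSCSI": 25, "VMTraffic": 35}

# Full 4 x 1GbE team up ...
print(guaranteed_shares(example_weights, team_bandwidth_gbps=4))
# ... versus only one surviving team member: every service still gets a slice, just a slow one.
print(guaranteed_shares(example_weights, team_bandwidth_gbps=1))
```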
There do appear to be various issues with this, though, surrounding which type of teaming you choose. Also, how would MPIO work with iSCSI on the vSwitch and just one 10GbE vNIC? Do you even need it with a 10GbE vNIC? Should you just enable more than one of them and do it that way? It is all quite complex when you first read it all and set it up. Add in RSS, vRSS, and VMQ and my brain melted the other day and fell out of my ear.
I have a couple of network designs that I will be posting up in the build section to ask for comments from other users. Please feel free to take a look and add your thoughts. As I said, it is a proof of concept for me and all input is welcome.