Well, I have been a little bit busy over in the Hyper-V section of the forum trying to sort out some weird network slowdowns. In the end we pinned it down to two group policies that get applied to domain controllers and enforce digital signing of SMB traffic. Many thanks to the forum members here for helping me out and replicating the issue on their setups.
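For anyone who wants to check the same thing on their own box, this is roughly what we were looking at. On a domain controller the server-side RequireSecuritySignature normally comes back as True because of the default Domain Controllers policy:

```
# Check the effective SMB signing settings (Server 2012 and later)
Get-SmbServerConfiguration | Select-Object EnableSecuritySignature, RequireSecuritySignature
Get-SmbClientConfiguration | Select-Object EnableSecuritySignature, RequireSecuritySignature
```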
I have been doing some tests again with the little Atom D510 system. I added the 7V resistor lead for the 60mm fan and the silicone fan grommet, and thought I would try both pulling air into the case and expelling air from it.
Pulling air into the case, blowing directly at the heatsink, leaves the CPU temperature at 30°C and the system temperature at 34°C. (Silent running.)
Sucking air out of the case, with the fan right by the heatsink, leaves the CPU temperature at 34°C and the system temperature at 37°C. (A bit noisy.)
So pulling air into the case it is then: cooler and silent.
Now, with the main server currently having the following roles, I need to make a few changes:
- Domain Controller
- DHCP Server
- DNS Server
- Hyper-V Host
- File Server
- iSCSI Storage
This will not be the main Hyper-V server at all, just a few little VMs to keep the network ticking over. Because of the issue with the group policies and digital signing, I want to virtualize the Domain Controller role, along with DHCP and DNS, into a VM on this box. That is the plan anyway. I have read that this is much safer and more secure with Server 2012 R2 than it has ever been before; from what I understand that is mainly down to the VM-Generation ID support, which protects a virtualized DC from snapshot rollback problems. Any suggestions on this are welcome. By the way, the little Atom D510 is going to be used as a second physical domain controller from time to time. I will use it for messing about and testing replication, Active Directory Sites and Services, and so on. Hopefully this will give me a system I can recover from, so long as I turn it on once a week or so. I want to try out things like DHCP failover between sites.
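Just to show the kind of thing I mean by DHCP failover between the two boxes, I am thinking along these lines; the server names, scope, secret and percentages are only placeholders:

```
# Rough sketch: hot-standby DHCP failover from the main server to the Atom box
# (server names, scope ID, shared secret and reserve percentage are placeholders)
Add-DhcpServerv4Failover -ComputerName "DC01" -PartnerServer "ATOM-DC02" `
    -Name "MainSite-AtomSite" -ScopeId 192.168.1.0 `
    -ServerRole Active -ReservePercent 10 -SharedSecret "ChangeMe"
```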
With that in mind, it should then leave just the following roles on the main physical server here:
- Hyper-V Host (Joined to Domain)
- File Server
- iSCSI Storage
Later on there will be more dedicated Hyper-V host(s), hopefully in a cluster, connected to this setup and accessing the iSCSI storage, so please bear that in mind for the next part. With the introduction of NIC teaming in Server 2012 there are now two schools of thought, so I have sketched out a couple of little network designs here to make them a bit easier to compare.
First, have a physical NIC, or team, for each individual function. So say: 1 x NIC for management, 2 x NIC for Live Migration, 3 x NIC for iSCSI, and use MPIO.
Second, team all the physical NICs together onto the vSwitch, create vNICs for everything from there, and apply some QoS on the vSwitch.
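To make the second option a bit more concrete, this is roughly how I picture the converged setup being built. The adapter names, vNIC names and weights are just examples, not gospel:

```
# Sketch of option two: one team, one vSwitch, vNICs carved out with QoS weights
# (adapter names, vNIC names and weights below are only examples)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Host vNICs for each function
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI"         -SwitchName "ConvergedSwitch"

# Minimum bandwidth weights so no single function can be starved
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI"         -MinimumBandwidthWeight 40
```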
Now some of this is a bit new to me. With iSCSI going direct in the first layout I could implement MPIO. In the second I obviously couldn't, as I would only have the one vNIC, which is supposed to be 10GbE. Would I even need MPIO then? If so, I guess I just add another vNIC and go from there?
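If it does come down to adding another vNIC and using MPIO over the converged switch, I am assuming it would look something like this; the vNIC name, target address and initiator IPs are placeholders:

```
# Assumed approach: second iSCSI vNIC plus MPIO (names and addresses are placeholders)
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI2" -SwitchName "ConvergedSwitch"

# Install MPIO and let it claim iSCSI devices (needs a reboot)
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Connect to the target from both initiator IPs so MPIO has two paths to work with
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"
$target = Get-IscsiTarget    # assumes the portal only exposes the one target
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsMultipathEnabled $true `
    -InitiatorPortalAddress "192.168.10.21" -TargetPortalAddress "192.168.10.50"
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsMultipathEnabled $true `
    -InitiatorPortalAddress "192.168.10.22" -TargetPortalAddress "192.168.10.50"
```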
I do like the idea of all the physical NICs in one team into the vSwitch. The theory is great, as you could lose every physical NIC apart from one and all services would still work. Much slower, but they would all still work. This is really what my project here is all about: redundancy and keeping things running in disasters. Plus renewing my lab as well.
So thoughts, ideas, experiences please ladies and gentlemen?