Windows Server 2012 R2 + Hyper-V and 3 x 1Gb NICs in a team with poor network speeds


TallGraham

Member
Apr 28, 2013
Hastings, England
Hi all

I am building my server over in the DIY server build area. I have started using Windows Server 2012 R2 as the main OS, and rightly or wrongly also installed Hyper-V on it. Basically I am using 3 x 1Gb Intel NICs in a team. The team then attaches to the virtual switch for Hyper-V. But I don't seem to be able to get any speeds between the VMs of more than 100MB/s.

Quick overview of system.

It is a domain controller running DNS and DHCP as well so probably breaking all the rules!

3 x RAID arrays.
- 1 for OS (RAID5, 4 x SSD)
- 1 for Data (RAID6, 8 x 1TB)
- 1 for VMs (RAID6, 8 x 1TB)

Supermicro X9SCM-iiF motherboard with two onboard Intel 82574L Gb NICs
Also 1 x quad port Intel Pro 1000 PT network card

Everything is using standard Windows 2012 R2 drivers. I then created an Ethernet team using the built-in "Windows teaming". This uses one of the onboard NICs and two from the quad port PT card. It is set to Switch Dependent and Dynamic, as they seemed the best options. I have also created a LAG on my Cisco SG300-28 switch that all 3 plug into. It is a static LAG, not using LACP.
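
In PowerShell terms, the team was created roughly like this (the team and adapter names below are placeholders, not my actual ones; check real names with Get-NetAdapter first):

# Team one onboard NIC with two ports from the quad port PT card,
# static (switch dependent) teaming with the Dynamic load balancing algorithm
New-NetLbfoTeam -Name "LAN-Team" `
    -TeamMembers "Onboard1", "PT-Port1", "PT-Port2" `
    -TeamingMode Static -LoadBalancingAlgorithm Dynamic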

Now in the Host OS I can copy between the two RAID6 arrays and get speeds of 500MB/s+. So that is great, the disk speed is OK.

I have a lot of experience in the past using LAG on physical servers and getting better transfer speeds between them. Always used to use either the Intel ProSET software or the Broadcom equivalent to get the job done. Used the Windows one here as I thought it would be OK (or maybe not)

Now I am aware that virtualization is a bit more complex and I think I am just confusing myself now with RSS, vRSS, and VMQ. So want to try and get some help from here please.

I have two VMs set up on the server.
- Windows 2012 R2 server
- Windows 7 Professional SP1

Now if I copy about 14GB of files (several large ISOs) between them, I get transfer rates of almost 1Gb/s, or about 100MB/s+. Very nice, but I would expect more as the virtual NICs and the virtual switch all state they are 10Gb. I have the team of 3 x 1Gb cards going into the virtual switch, but it seems like I only ever get the speed of one of them.

Now it gets a bit more complex. The Host OS shares the team as well and connects through a virtual NIC to the virtual switch. Again the OS says this is 10Gb. You are probably all cringing now and wondering why I have done this like this. Sorry if it is wrong.

If I try to copy the 14GB from the Windows 7 VM to a share on the Host OS then I get speeds of between 20MB/s - 60MB/s if I am lucky. Also when I do this copy to the host OS the CPU on the VM maxes out at 100% for the duration of the copy. I tested this in the Windows 2012 Server VM as well and the same thing happens with CPU usage and network speed.

Copying between the VMs directly I don't see the 100% continuous CPU usage though, and obviously much faster network speeds.

Clearly something is wrong somewhere, but I am scratching my head as to what. The whole point of teaming network cards is to get greater throughput and resilience. I'm stumped as to why I can't get more than the speed of a single 1Gb card.

I did find this article regarding VMQ and it seems to be my issue, but I am not sure. Any suggestions welcome please? Happy to check settings and post pictures of findings, etc.

VMQ Deep Dive, 1 of 3 - Microsoft Enterprise Networking Team - Site Home - TechNet Blogs

When I run the PowerShell script from there, it shows my Ethernet team is capable of VMQ but it is not enabled, yet if I look in the team NIC settings it says it is.
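
For anyone who wants to check the same thing, this is roughly what I was looking at (the adapter name is a placeholder):

# VMQ capability and state per adapter (the team interface shows up here too)
Get-NetAdapterVmq | Format-Table Name, InterfaceDescription, Enabled

# If a NIC reports VMQ capable but disabled, it can supposedly be switched on with
Enable-NetAdapterVmq -Name "PT-Port1"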

I am confused.com :confused:
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Very quickly, and addressing only part of your problem: Try eliminating all teaming and just giving every VM a single basic network connection. What happens to performance? If it's better, consider upgrading the Win7 box to Win8 to get SMB3, and then try adding additional non-teamed network connections as needed, letting SMB3 spread the workload over them as it sees fit.
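
If it helps, you can also watch whether SMB3 is actually spreading the load while a copy is running; these cmdlets are built into 2012 R2 and Windows 8/8.1 (run them on the client side of the transfer):

# Interfaces SMB considers usable, and whether they are RSS/RDMA capable
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface

# While a large copy is in flight, this should list one row per NIC actually in use
Get-SmbMultichannelConnection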
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Hi dba

Thanks for the quick response. I have drawn a diagram as I think it should make it easier to understand.



So now you can see the 3 x 1GbE Intel NICs creating the Windows Server 2012 R2 team, which is set as Switch Dependent with a static LAG on the Cisco switch. This team then links directly to the Windows Server 2012 R2 Hyper-V virtual switch.

There are 3 x 10GbE vNICs on the virtual switch.
- First goes to the Management OS where there is a network share available on the server.
- Second goes to the Windows 7 VM
- Third goes to the Windows Server 2012 R2 VM
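
For reference, the layout above boils down to something like this in PowerShell (switch, team and VM names are placeholders for mine):

# External vSwitch bound to the NIC team, shared with the management OS
New-VMSwitch -Name "TeamSwitch" -NetAdapterName "LAN-Team" -AllowManagementOS $true

# Connect each VM's vNIC to the same switch
Connect-VMNetworkAdapter -VMName "Win7-VM" -SwitchName "TeamSwitch"
Connect-VMNetworkAdapter -VMName "WS2012R2-VM" -SwitchName "TeamSwitch"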

Copying between the two VMs gives speeds of 100MB/s or greater. And the CPU does get busy but never hits 100%

Copying from either of the VMs to the network share on the host gives speeds between 20MB/s-60MB/s and during the entire copy process the CPU on the VM stays at 100%

Copying from the network share to an external device such as the laptop is even slower. It starts at between 20MB/s and 40MB/s and then very slowly drops down to as low as 8MB/s.

Something is very clearly wrong somewhere along the line, but I am not sure where. I am guessing it is the Windows Server 2012 R2 team.

I think I may start from the bottom up again.

1) Remove Hyper-V and the Windows 2012 R2 NIC team.
2) Using a single 1GbE connection to the switch, test copying speeds to the physical laptop client.
3) Create the NIC team again and re-test the copying speeds to the physical laptop client.
4) Add Hyper-V and the virtual switch, then re-test the copying speeds to the laptop client again.

If the problem is repeated then I may try the Intel ProSET software instead of the Windows NIC Teaming.
 

PigLover

Moderator
Jan 26, 2011
I think your problem is that the 3-NIC team is operating as a LAG group. Unless you've done something special, Windows will use a layer-2 hash (MAC hash) to assign traffic into the LAG, and your transfers between your file-server VM and the workstation will all hash to the same link all the time - with no traffic on the other links.

You might try this: instead of building a LAG of 3 NICs on a single switch, present the NICs as a single interface each on three separate vSwitches. Make sure all three are shared with the Hyper-V host so that your file share sees all three NICs. And give each VM a vNIC on all three vSwitches. Doing this should allow SMB3 Multichannel to kick in and you'll get data transferred on all three NICs.
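
Something along these lines, if you want to script it (adapter, switch and VM names are just placeholders):

# One external vSwitch per physical NIC, each shared with the host OS
$nics = "NIC1", "NIC2", "NIC3"
for ($i = 0; $i -lt $nics.Count; $i++) {
    New-VMSwitch -Name ("vSwitch" + ($i + 1)) -NetAdapterName $nics[$i] -AllowManagementOS $true
}

# Give each VM a vNIC on every vSwitch so SMB Multichannel can use all three paths
foreach ($vm in "Win7-VM", "WS2012R2-VM") {
    foreach ($sw in "vSwitch1", "vSwitch2", "vSwitch3") {
        Add-VMNetworkAdapter -VMName $vm -SwitchName $sw
    }
}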
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Thanks very much. I appreciate your assistance.

I have discovered something. There was a new driver available on Windows Update for the Intel Pro/1000 PT Quad Port Server Adapter that I am using.

Old Driver
Manufacturer: Microsoft
Version: 9.13.41.3
Date: 29/02/2011

New Driver
Manufacturer: Intel
Version: 9.15.11.0
Date: 14/10/2011

I have updated to this driver and rebooted the server to let everything "reset" so to speak. I will try some tests again now.

Just out of interest I tried copying the files across between the two physical machines on the network. This is a Sony Vaio Laptop and an Apple Mac Mini, both running Windows 7. They have SSDs in each. Copying speeds started at 105MB/s and then dropped to a constant 85MB/s. So both of these are OK.

I'll run some tests again now on the server and see what I find. I still think there is something wrong with the NIC team though as a starting point.
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Hi PigLover

Thanks for the response. That actually sounds like something I was reading to do with VMQ as well. But it was all starting to make my head spin a bit trying to do it all in a hurry. I do like the sound of the 3 vSwitches though.

I am still puzzled why writing from the share to the laptop is so slow though. Just going to test that again now after the new drivers install and the reboot.
 

TallGraham

Member
Apr 28, 2013
Hastings, England
OK, a bit more of an update.

I have removed Hyper-V from the server and broken the NIC Team.

I have just run the server from one NIC at a time to a normal switch port. Each time I copy the files from Windows Server 2012 R2 to the Windows 7 laptop I get speeds of 30MB/s-40MB/s. But if I copy between the Windows 7 machines then I get speeds of 90MB/s.

So the switch configuration must be OK. The two workstations are clearly OK.

I have also tried the switchport and NIC settings at Auto and forced both to 1000/Full as well. Still no joy.
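
In case it is useful, this is roughly how I have been checking the link settings from PowerShell (the advanced property display name and values are driver-specific, so treat them as examples only):

# Negotiated link speed and status per adapter
Get-NetAdapter | Format-Table Name, Status, LinkSpeed

# Driver-level setting, forced rather than Auto (the exact name/value wording depends on the Intel driver)
Set-NetAdapterAdvancedProperty -Name "PT-Port1" -DisplayName "Speed & Duplex" -DisplayValue "1.0 Gbps Full Duplex"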

At least I know now it wasn't Hyper-V causing the issue. More something to do with Windows Server 2012 R2 networking in some way.

I always seem to manage to find problems :confused: :rolleyes:
 

TallGraham

Member
Apr 28, 2013
Hastings, England
It is 1:20am here in the UK now and after what I have found so far on this I think it's time to call it a night. :(

First thing: if I copy using Windows Explorer on the Windows 2012 R2 server to a share on the Windows 7 PCs, I get pretty much double the throughput compared to copying using Windows Explorer on the Windows 7 PC from a share on the Windows Server 2012 R2 system.

With the Apple Mac Mini,
Pull from Windows Server 2012 R2 share to Windows 7 ~ 55MB/s
Push to a Windows 7 share from Windows Server 2012 R2 ~ 105MB/s

Playing around with Intel drivers and looking for the latest ProSET 18.8 package, I found that when you install it, it fails to recognise your cards, in particular the Intel 82574L which is on the Supermicro X9SCM-IIF motherboard that I have. Here is an interesting link about Intel not supporting drivers for that NIC on Windows Server 2012 R2.

MPECS Inc. Blog: Windows Server 2012 R2: Intel PROSet Install Error: No Intel Adapters Present

Yet even the Supermicro site gives you the ProSET 18.8 drivers to download if you look on there for Windows Server 2012 R2.

I am so annoyed as it looks like I am stuck with the Windows Server teaming which seems, at first glance, a bit pants compared to the Intel ProSET stuff. This is the rambling opinion of a tired and fed up man though. Spent a lot of dosh on that Supermicro board and also the Intel Pro/1000 PT Quad port cards that I have too.

On the Windows Server 2012 R2 slow shares issue, I did find this document talking about SMB signing on domain controllers. As my big box is also a domain controller, I wonder if this may be the issue. Getting too tired to test it now though.

http://jrs-s.net/2013/04/15/windows-server-2012-slow-networksmbcifs-problem/

Maybe I will have to virtualize the DC within Hyper-V on the box and make it auto start when the system boots. But will the box itself then join the domain OK as it starts up..... chicken and egg scenario there. I seem to think I read somewhere that you can do this now with Server 2012 though, however it was a pain before with 2008. Or maybe now I am dreaming..... :confused:
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
OK so now I understand your setup, and something does seem wrong about your performance results. I'll try a similar copy and report back.
So with an "all defaults" Hyper-V networking setup on a c6100 node, and with all SSD disks, I quickly ran through the following tests:

1) Copy a 10GB file from VM1 to VM2 - 341MB/s - 20% CPU utilization
2) Pull a 10GB file from the host to VM1 - 67MB/s - 100%CPU
3) Push a 10GB file from VM1 to the host - 73MB/s - 100% CPU

So passing files from VM to VM is reasonably fast, and does in fact behave as though it were a 10GbE network, albeit not a very efficient 10GbE network. CPU utilization in the single-core VMs is high, but not outrageous. On the other hand, pass a file to or from the host and a VM, and the CPU gets pegged and you get performance like a rather bad 1GbE network.

I have not spent any time trying to tune gigabit networking on Hyper-V, but this seems like rather poor performance. Does anyone know what's going on here?

Now here's the odd part: I can easily make it so that the transfer slows to ~15 MB/s and here's how: Have another VM with the same IP address running elsewhere on the network.

And a comparison: On a Macbook Pro laptop running VMWare fusion, copying a file from a VM to the host runs at ~103MB/s.
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Hi dba

Thank you so much for doing this and proving that I'm not going mad. Really thank you so much as I was tearing my hair out :)

I am still experiencing slowdowns now even after removing Hyper-V completely, as I listed above. You can see that from my throughput speeds between the MacMini physical machine and the host. So strange that pushing from the host OS, Windows Server 2012 R2, to the MacMini (Windows 7 Professional) goes twice as fast as pulling from the host server to the MacMini. It is very strange indeed.

After reading what I did about the Intel NIC I wonder if it is something to do with that?

Just to rule things out, what NICs are you running there, and what switch please?

I am using the Supermicro X9SCM-iiF motherboard with onboard Intel 82574L NICs and an Intel Pro1000 PT Quad Port Server Card. Although I have tried both NICs and am just using 1 x 1Gbe NIC at the moment to test transfer speeds. I get the same issues from either NIC. I am also using a Cisco SG300-28 switch. I also have the host server OS set up as a DC with DHCP and DNS.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
TallGraham said: Just to rule things out, what NICs are you running there, and what switch please?
Dell c6100 with Intel 82576 network chips. SMC Tigerstack switch.
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Thanks dba

So that rules out my network cards and the switch as well.

Just found some interesting stuff about the Intel 82574L too, a bit worrying as well. Don't know if it is old news or not.

Intel: 'Packet of death' problem 'isolated to a specific manufacturer' | ZDNet

https://communities.intel.com/community/wired/blog/2013/02/07/intel-82574l-gigabit-ethernet-controller-statement

Worrying bit is the stuff in the Intel community link for me. Clearly I have a Supermicro board with these dual NICs. I also have 3 of the mentioned Intel DQ77MK boards sitting in boxes ready to build my virtual server cluster. Is anyone else using these boards/cards and having these issues please?
 

britinpdx

Active Member
Feb 8, 2013
Portland OR
Another data point, based on the one VM that I currently have available.

The "host" box is a SM X9SCL, i3-3210 (dual core IB), 16GB Kingston 1333 ECC running server 2012R2.
The onboard LAN ports (82574L and 82579LM) were not used.
Intel VT Quad port provides 4 network ports.
Starwind Ramdisk used to create an 8GB RAMdisk, setup as shared.

Network details for the host box as follows ..



The "test" box is a SM X9SCM, Xeon E3-1230v2 (Quad core), 32GB Samsung 1333 ECC running server 2012R2 with Hyper-V enabled.
The onboard LAN ports (82574L and 82579LM) were assigned to Hyper-V for virtual switches.
Windows 8.1 set up as a VM, assigned 2 virtual CPUs and 4GB RAM.
Both "virtual" network adapters assigned to the Windows 8.1 VM.
Intel VT Quad port provides 4 network ports for 2012R2.

Network details for the test box as follows ..



No LAGs of any kind were used ... all connections are made through an HP 1810-24G.

Test #1 was to run IOMeter from the "test" 2012R2 OS. I would expect all 4 ports to be used on both sides by the SMB protocol. Sure enough ...



Test #2 was to run IOMeter from the Windows 8 VM running under Hyper-V on the "test" 2012R2 OS ...



So it certainly looks as if SMB is operational and using the 2 virtual ports available. Some overhead is apparent as I would have expected >200 MB/s from 2 gigabit ports, but not at all bad. CPU utilization was reported as 50%.
 

britinpdx

Active Member
Feb 8, 2013
Portland OR
Well, I did a little more testing and now I'm a little puzzled ...

Essentially I reversed the roles of the ports setup on the "test" machine. So now the "onboard" LAN ports (82574L and 82579LM) were assigned for 2012R2 and the 4 VT quad ports were assigned to Hyper-V for virtual switches, as follows ...



Test #3 was to run IOMeter from the "test" 2012R2 OS. I would expect both ports to be used by the SMB protocol and see >200MB/s, but this was not the case ...



Hmmm ...

To test that something wasn't wrong with the physical connections I tested the 82574L and 82579LM individually (by disabling the other) and both seem to work OK.

I remember reading something about ports needing to be RSS capable for SMB "multi port" operation, and sure enough there's an entry on Jose Barreto's blog that talks about that.

Running the Get-SmbServerNetworkInterface PowerShell command shows the following ..



192.168.1.117 is the IP address of the 82574L port, and 192.168.1.108 is the IP address of the 82579LM port, which is reported as not RSS capable. Is this why SMB didn't enable multi-port operation? Probably a bigger question is why it appeared to work when both ports were assigned to the Win 8.1 VM?
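
For anyone wanting to reproduce the check, these are the commands involved (no parameters needed; the output corresponds to the screenshot above):

# Interfaces SMB will consider, with their RSS/RDMA capability
Get-SmbServerNetworkInterface

# RSS state at the adapter level
Get-NetAdapterRss | Format-Table Name, Enabled, NumberOfReceiveQueues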

Anyway ...

Test #4 was to run IOMeter from the Windows 8 VM running under Hyper-V on the "test" 2012R2 OS, but now with the 4 "virtual" VT ports assigned to it. Just to throw another variable in the mix, this time I created the virtual switches with SR-IOV enabled (forgot to do that the first time around !!) ...



So that's 453 MB/s from a Windows 8.1 VM with 4 virtual NICs, compared to 467 MB/s on 2012R2 with the same NICs in a "native" mode, with SMB doing its multichannel magic in the background.
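
(For anyone curious, SR-IOV can only be enabled when a vSwitch is created, not added afterwards; the switches above were made along these lines, with placeholder names:)

# External vSwitch with SR-IOV enabled at creation time
New-VMSwitch -Name "IOV-Switch1" -NetAdapterName "VT-Port1" -EnableIov $true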

Now, I appear to have hijacked the thread a little and I realize I've not really addressed any of the VM to VM questions ... sorry 'bout that.
 

TallGraham

Member
Apr 28, 2013
Hastings, England
Hi britinpdx

Please don't feel you have hijacked anything. I am very grateful that you have taken some time to help me out here with this. That is why I love the forum.

Just to confirm, I have totally removed Hyper-V from the Server 2012 R2 machine now and am right back to basics, investigating the speed differences between pushing and pulling data between physical clients and the server. This is still showing slowdowns, so I think that rules Hyper-V out for now. But I am running the Server 2012 R2 machine as a DC as well, with DHCP and DNS.
 

britinpdx

Active Member
Feb 8, 2013
Portland OR
A quick update with a few more tests. I "cloned" the Windows 8.1 VM to quickly get 2 VMs running.

It can get confusing to follow along, so as a recap the hardware and terminology is as follows ..

The "host" box is a SM X9SCL, i3-3210 (dual core IB), 16GB Kingston 1333 ECC running server 2012R2.
The onboard LAN ports (82574L and 82579LM) were not used.
Intel VT Quad port provides 4 network ports.
Starwind Ramdisk used to create an 8GB RAMdisk, setup as shared.

The "test" box is a SM X9SCM, Xeon E3-1230v2 (Quad core), 32GB Samsung 1333 ECC running server 2012R2 with Hyper-V enabled.
The onboard LAN ports (82574L and 82579LM) were assigned to the 2012R2 OS.
Windows 8.1 set up as VM1, assigned 2 virtual CPUs and 8GB RAM.
Windows 8.1 VM1 cloned to VM2, assigned 2 virtual CPUs and 4GB RAM (not sure why I didn't match up to VM1, but oh well ...)
Intel VT Quad port assigned to Hyper-V switches, created with SR-IOV enabled.
VM1 assigned vNIC1 and vNIC2
VM2 assigned vNIC3 and vNIC4
2012R2 OS setup on a 120GB Sandisk Extreme SSD
VM1 and VM2 setup on a Crucial M500 240GB SSD

Test #5 Running IOMeter on VM1 with the "host" RamDisk as the target gets the following results ..



Test #6 Running IOMeter on VM2 with the "host" RamDisk as the target gets the following results ..



I'm not sure why performance on VM2 is better with less CPU usage. Nonetheless, performance from VM to remote host seems good.

How about the other direction, running IOMeter on the "host" box and targeting a share on each of the VMs ...

Test #7 Running IOMeter on the "host" with VM1 as the target gets the following results ..



Test #8 Running IOMeter on the "host" with VM2 as the target gets the following results ..



And I finally get to run a VM to VM test ....

Test #9 Running IOMeter on VM1 with VM2 as the target gets the following results ..



So I think that this now shows that "host to VM", "VM to host" and "VM to VM" can achieve reasonable bandwidth.

BTW, the key for me keeping track of all of this was the color coding on the desktops.
host = bright green
test = purple
VM1 = pink
VM2 = olive green

I think that the Server 2012 implementation of SMB 3.0 is simply brilliant, and as long as you meet its rules of usage, it performs all of its magic quietly in the background without the need for all that LAG setup. All of these tests were performed without any LAG or LACP setup on the switch or the boxes.
 

TallGraham

Member
Apr 28, 2013
Hastings, England
OK, I have had some time today to go through the settings on this blog post that I found before

http://jrs-s.net/2013/04/15/windows-server-2012-slow-networksmbcifs-problem/

I have changed the two policy settings for domain controllers, done a "gpupdate /force" on the server, and just for good measure a reboot as well.
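
For anyone wanting to check the same thing on their own box, the effective SMB signing state can be read from PowerShell (on a domain controller the Group Policy setting is what actually controls it, so these are just for verification):

# Signing settings as currently in force on the server and client sides
Get-SmbServerConfiguration | Select-Object EnableSecuritySignature, RequireSecuritySignature
Get-SmbClientConfiguration | Select-Object EnableSecuritySignature, RequireSecuritySignature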

Now I can safely say I think this was at the root of the problem, but not necessarily the whole cause. I can now push and pull files between the Windows 2012 R2 server and the Windows 7 MacMini client at sustained speeds of 100MB/s+ throughout the entire copy process. I have also noticed that now the CPU usage on both the server and the client are massively lower too during the copy process. Client was maxing out at 100% before like the VMs would when copying to the Host OS share.

dba and britinpdx would you mind letting me know if your Windows Server 2012 R2 Hyper-V host server was also a domain controller when you were running the tests please? And if so do you see an improvement if you make the changes I made here? Thanks again for taking the time to help and your posts so far

So..... now two things spring to mind

1) Obviously I don't want to leave those security settings disabled. I wonder is there a workaround for this that anyone knows of? The other thought is: can I have the physical server as my Windows Server 2012 R2 Hyper-V host connected to the domain, but with the domain controller virtualized within that Hyper-V (see the sketch after this list)? I seem to recall seeing this in the past on a setup someone did with Hyper-V 2008 R2. I am not sure quite how the host would react starting up without a DC to connect to, unless I can somehow start Hyper-V and the DC VM before the host starts complaining it cannot find its domain. Any thoughts?

2) I am also wondering now about the NIC teaming that I had set up before, and why, if I have a potential 3Gb/s of throughput, I was only getting 1Gb/s speeds between the VMs. You guys have all made some very valid comments about the teaming and the way RSS works with the new SMB. I need to revisit the documents again I think and have a look. I will come back to this later, with some more diagrams (they are easier to follow) about possible teaming setups for best throughput etc.
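
On point 1, if the DC does end up virtualized, the per-VM automatic start settings in Hyper-V look like the right lever; a rough sketch, with placeholder VM names:

# Bring the DC guest up automatically when the host boots, ahead of everything else
Set-VM -Name "DC01" -AutomaticStartAction Start -AutomaticStartDelay 0

# Delay the other guests so the DC has time to come up first (delay is in seconds)
Set-VM -Name "OtherVM" -AutomaticStartAction Start -AutomaticStartDelay 120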

I need to solve point 1 first though, I think. That way I am building it all up as a working config from the ground up.

Thanks again to everyone so far for all your great insight, and especially taking the time to try and recreate this in your labs too.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
TallGraham said: dba and britinpdx, would you mind letting me know if your Windows Server 2012 R2 Hyper-V host server was also a domain controller when you were running the tests please? And if so, do you see an improvement if you make the changes I made here?
Excellent find. My Hyper-V host was/is a domain controller. I don't transfer files to the host often enough to warrant making the change you suggest, but I'll definitely keep it in my notes for the future.