10-Gigabit mixed with 1 Gigabit Network on Windows 10

jtabc

New Member
Jul 31, 2022
I need a file server with a fast connection (random and sustained read/write speeds) to one workstation and a gigabit connection to several other machines. The other machines are render nodes for which a gigabit connection to the file server is sufficient. The workstation needs fast access to the file server for video editing/compositing, and a gigabit connection isn't cutting it. The workstation also needs to access individual render nodes, but a gigabit connection is fine for that.

The reason I'm not using the workstation as the file server is because the workstation has to be restarted frequently, and that causes issues for the render nodes.

Currently, I'm using an HP Slimline 290-p0043w as the file server with a 2TB 970 Evo+ as the working drive being shared with the other machines. The setup looks like this (all connections are one gigabit):

Configuration 1:
[diagram: workstation, HP server, and render nodes all connected through a 1GbE switch]

This is functional, but as I mentioned, the connection between the workstation and the HP server is too slow sometimes. So I bought two Mellanox 10 gigabit SFP+ cards and put those into the workstation and the HP server. Now the configuration looks like this:

Configuration 2:
[diagram: configuration 1 plus a direct 10GbE SFP+ link between the workstation and the HP server]

The problem with this approach was that I could not get the workstation to connect to the HP server at 10 gigabit; it would always use the 1 gigabit connection instead. I am not sure if it is possible to have this setup and force Windows to use the 10 gigabit connection. So then I tried this configuration:

Configuration 3:
[diagram: workstation connected to the HP server only over the 10GbE link, with the HP server also connected to the 1GbE switch]

In configuration 3, to let the workstation reach both the HP server and the render nodes, I had to "bridge" the Ethernet and Mellanox connections on the HP server, like this (otherwise, although the workstation could connect to the HP server at full speed, it had no way to reach the render nodes):

[screenshot: Windows Network Connections with the Ethernet and Mellanox adapters joined in a network bridge]

Unfortunately, creating a bridge resulted in significantly degraded performance. For example:

Sustained transfer speeds     | Bridged version | Direct connection
Workstation to render client  | 80 MB/sec       | 105 MB/sec
Workstation to HP server      | 200 MB/sec      | 800 MB/sec

(These tests transferred a 10 gigabyte file from a WD SN750 NVMe SSD in the workstation to the 970 Evo+ in the HP server or to a SATA3 SSD in the render node.) Additionally, connection latency from the workstation to the render nodes is much worse: a remote desktop session would start in under 0.1 seconds in configuration 1, but under configuration 3 it can sometimes take 1 to 2 seconds.

My last thought would be to try something like this, but I do not have the hardware for it currently:

Configuration 4:
[diagram: all machines connected through a single switch with 10GbE SFP+ ports for the workstation and HP server and 1GbE ports for the render nodes]

I would like to avoid this configuration if possible to minimize the amount of hardware necessary.

My guess is that I'm doing something wrong in configurations 2 and/or 3, and that there must be a way to get full performance without a 10 gig switch. Either there's some way to improve the performance of the network bridge in configuration 3, or there's some way to tell Windows to use the 10 gigabit connection instead of the 1 gigabit connection in configuration 2.

Does anybody have any advice on what I should try here? I'm at a bit of a loss right now. I'd appreciate any pointers on how to proceed.
 

i386

Well-Known Member
Mar 18, 2016
I can think of multiple solutions, like using a different IP network for the 10GbE stuff or manually setting the metrics for the interfaces.
Personally I would always go for config 1/4 (it's the same topology, just another switch).
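
For reference, a minimal sketch of both ideas from an elevated PowerShell prompt; the adapter aliases ("Ethernet 2" for the Mellanox card, "Ethernet" for the onboard port) and the 10.10.10.0/24 subnet are assumptions, so substitute whatever Get-NetAdapter reports on your machines:

# Put the 10GbE link on its own subnet (run on each end with a
# different host address, e.g. .1 on the workstation, .2 on the server).
New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 10.10.10.1 -PrefixLength 24

# Prefer the 10GbE interface by giving it a lower metric than the
# 1GbE port (lower metric = higher priority in Windows routing).
Set-NetIPInterface -InterfaceAlias "Ethernet 2" -InterfaceMetric 5
Set-NetIPInterface -InterfaceAlias "Ethernet" -InterfaceMetric 25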
 

jtabc

New Member
Jul 31, 2022
i386 said:
> I can think of multiple solutions, like using a different IP network for the 10GbE stuff or manually setting the metrics for the interfaces.
> Personally I would always go for config 1/4 (it's the same topology, just another switch).
I agree a 10gig SFP+ switch would be easier, but I don't want to spend $100-$200 on one if I don't need to.

If I am understanding correctly, giving the 10 gigabit connection a lower metric than the 1 gigabit connection in configuration 2 would cause Windows to prefer the 10 gigabit link (rather than the 1 gigabit link, as it did when I tested it)?

I'll try that out shortly. I am not experienced with networking, so sorry if it's that simple.
 

prdtabim

Active Member
Jan 29, 2022
jtabc said:
> I agree a 10gig SFP+ switch would be easier, but I don't want to spend $100-$200 on one if I don't need to.
>
> If I am understanding correctly, giving the 10 gigabit connection a lower metric than the 1 gigabit connection in configuration 2 would cause Windows to prefer the 10 gigabit link (rather than the 1 gigabit link, as it did when I tested it)?
>
> I'll try that out shortly. I am not experienced with networking, so sorry if it's that simple.
If you have multiple gateways, or more than one NIC connected to the same network, the metric is used to choose the path.
Another very simple solution is to use configuration 2 with a different network between the HP server and the workstation. Since they are the only ones using it, there are no alternative routes. Map the drive by IP, or declare the IP (of the exclusive network) of the HP server in the workstation's hosts file.
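
As an illustration of the hosts-file route (the 10.10.10.2 address and the "hpserver-10g" name are made up for the example), from an elevated PowerShell prompt on the workstation:

# Pin a name to the HP server's 10GbE address in the hosts file,
# then map the share through that name so SMB uses the fast link.
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.10.10.2 hpserver-10g"
net use Z: \\hpserver-10g\share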
 

acquacow

Well-Known Member
Feb 15, 2017
I dealt with this on my home network when I first introduced 10gig. The trick is getting SMB to use the 10gig link when gig-e is also connected; metrics don't always work (though they should). I had to set a firewall rule blocking the SMB broadcasts on the 1gig link so that the share would only be seen on the 10gig side.
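
The exact rule isn't shown above, but a rough equivalent that blocks SMB (TCP 445) outright on the gigabit adapter might look like this; "Ethernet" is an assumed adapter alias, and you'd target UDP 137-138 instead if you only wanted to block the NetBIOS discovery broadcasts:

# Block outbound SMB on the 1GbE adapter so file sharing can only
# be reached over the 10GbE interface.
New-NetFirewallRule -DisplayName "Block SMB over 1GbE" -Direction Outbound -Action Block -Protocol TCP -RemotePort 445 -InterfaceAlias "Ethernet"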
 

Sean Ho

seanho.com
Nov 19, 2019
Your second config (10GbE direct-connect between NAS and workstation) is just fine. Assign static IPs on both ends in a subnet not overlapping with your LAN subnet. Ensure the NAS services (e.g., NFS, SMB) are listening on all interfaces (which usually is the case by default). From the workstation, access the NAS via its 10GbE IP, not its usual LAN IP. The workstation's default route (e.g. to reach the internet via the LAN router) will still go via the gigabit LAN, as will its connections to the render farm. No metric trickery needed, as the route over the 10GbE link is more specific than the default route over gigabit.
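
Concretely, on the workstation that might look like the following (the 10.10.10.0/24 subnet and the share name are assumptions carried over from the earlier examples):

# Confirm the on-link route for the 10GbE subnet exists; being more
# specific than the 0.0.0.0/0 default route, it wins automatically.
Get-NetRoute -DestinationPrefix 10.10.10.0/24

# Reach the NAS by its 10GbE address instead of its LAN name/IP.
dir \\10.10.10.2\projects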
 

jtabc

New Member
Jul 31, 2022
Thanks everyone, this is super helpful. I was able to get configuration 2 to work perfectly by putting the Mellanox cards on a different subnet and lowering their interface metric below that of the Ethernet ports. I'm just curious, though: why was the performance using the bridge (configuration 3) so poor? Just for my own understanding.
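
For anyone replicating this, a quick sanity check that the metrics ended up ordered as intended (property names as reported by Get-NetIPInterface):

# List IPv4 interfaces sorted by metric; the Mellanox port should
# appear first with the lowest (most preferred) metric.
Get-NetIPInterface -AddressFamily IPv4 | Sort-Object InterfaceMetric | Format-Table InterfaceAlias, InterfaceMetric, ConnectionState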
 

LodeRunner

Active Member
Apr 27, 2019
jtabc said:
> Thanks everyone, this is super helpful. I was able to get configuration 2 to work perfectly by putting the Mellanox cards on a different subnet and lowering their interface metric below that of the Ethernet ports. I'm just curious, though: why was the performance using the bridge (configuration 3) so poor? Just for my own understanding.
Because all the traffic is being forked over to the host CPU. Switches are technically hardware bridges, but they have dedicated packet-handling processors. Bridge two ports in software at the OS level and performance is going to range from marginally acceptable to utter garbage.