VM-to-VM network performance with SR-IOV and eswitch vs. vswitch?


Keljian

Active Member
Sep 9, 2015
428
71
28
Melbourne Australia
Well, the theoretical max bandwidth of the T420 is around 4 GB/s (roughly the max I/O of its two 10-gigabit full-duplex ports) due to its PCIe 2.0 x8 interface. I will test it in about an hour or so.
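For reference, PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, so roughly 500 MB/s per lane and about 4 GB/s (~32 Gb/s) per direction for an x8 link before protocol overhead - just under what two 10-gigabit full-duplex ports could push. A minimal way to sanity-check the link and then measure VM-to-VM throughput on Linux (the PCI address, IP and flags below are placeholders, not my exact setup):

Code:
    # Confirm the negotiated PCIe link speed/width for the card
    lspci -s 03:00.0 -vv | grep -E 'LnkCap|LnkSta'

    # Rough VM-to-VM throughput test: server in one VM...
    iperf3 -s
    # ...client in the other VM (10.0.0.2 is a placeholder address)
    iperf3 -c 10.0.0.2 -P 4 -t 30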
 

Keljian

Active Member
Sep 9, 2015
428
71
28
Melbourne Australia
OK, good news and bad news - the good news is I have two of the VFs in two separate VMs; the bad news is they are set up as Fibre Channel and I have exactly zero experience using Fibre Channel in Linux. Will keep plugging away.
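If anyone wants to check which function is which on their own card, the PCI class of each virtual function gives it away; a quick sketch (assuming a Linux host; the bus addresses are hypothetical):

Code:
    # List all Chelsio physical and virtual functions with their class/device IDs
    lspci -nn | grep -i chelsio

    # Inspect a single VF - the first line shows whether it's an Ethernet or storage/FC class device
    lspci -s 03:01.0 -v | head -n 3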

Guessing it has something to do with the drivers, so I'm going to try installing some fresh ones.

Working the problem... getting there.
 

Keljian

Active Member
Sep 9, 2015
428
71
28
Melbourne Australia
Working the problem... getting there.
OK, first learning: virtual functions 0-3 on Chelsio cards are Ethernet VFs; the others are for other things like Fibre Channel.

Second learning: Ubuntu 14.04.3 doesn't like the Chelsio drivers, so I'm going to have to update to make that work (YUCK).

Third learning: after modprobe cxgb4 (on Ubuntu 15.10) and an attempt at getting a DHCP address (on port 2's virtual functions; port 1 is connected to the network), there is no IP address.

BUT the card doesn't show up in ifconfig, so there might be something I am missing.
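For anyone retracing this later, my rough checklist (a sketch only - the driver and interface names here are generic assumptions, not necessarily what my VM ended up with): inside a guest the VFs normally bind to cxgb4vf rather than cxgb4, a down interface is hidden by plain ifconfig, and dmesg shows whether the driver attached at all.

Code:
    # Load the Chelsio VF driver inside the guest (cxgb4 is the physical-function driver)
    modprobe cxgb4vf

    # Did the kernel bind a driver and create an interface?
    dmesg | grep -i cxgb
    ip -br link          # or: ifconfig -a (plain ifconfig hides down interfaces)

    # Bring it up and request a lease (eth1 is a placeholder name)
    ip link set eth1 up
    dhclient eth1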

Fourth learning: compiling the Chelsio drivers from source is not straightforward on Ubuntu, so I have skipped that and am using the stock Ubuntu drivers.

Still working it.
 

Keljian

Active Member
Sep 9, 2015
428
71
28
Melbourne Australia
OK, I've now given up on Ubuntu and will be downloading openSUSE soon (can't at the moment, as I'm practically at my download limit for the month) to give it a shot.


Apologies for the delay but I've had enough of dealing with it for today.
 

apnar

Member
Mar 5, 2011
115
23
18
Not directly related, but sort of in the same vein. When I redid my setup recently, I realized that the vast majority of intra-box network traffic was storage I/O, so I changed my design to minimize storage traffic. I moved from ESXi with a storage VM and passthrough to Proxmox with ZFS native to the hypervisor, eliminating all my hypervisor-to-storage-VM traffic. I then moved many services from VMs to Linux containers leveraging bind mounts, getting rid of all their NFS traffic to the storage VM. I haven't done any performance testing on it, but so far I've been very happy with the results.
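For anyone curious, the bind-mount side is a one-liner in Proxmox; a minimal sketch (the container ID and dataset paths are made up for illustration):

Code:
    # Expose a host ZFS dataset directly to LXC container 101 as a mount point
    pct set 101 -mp0 /tank/media,mp=/mnt/media

The container then sees /mnt/media as a local filesystem, so there is no NFS and no virtual NIC in the storage path.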
 

joek

New Member
Mar 20, 2016
27
12
3
104
Apologies for the delay but I've had enough of dealing with it for today.
No apologies necessary. You've gone above and beyond. Your efforts have prompted me to look for a cheap card to test with. Thanks again.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Hey, I also have some extra numbers/information on SR-IOV vs. OVS that I will try to get out this week, but most likely look to the main site on Monday or so (going to be a crazy few days).
 

joek

New Member
Mar 20, 2016
27
12
3
104
Hey, I also have some extra numbers/information on SR-IOV vs. OVS that I will try to get out this week, but most likely look to the main site on Monday or so (going to be a crazy few days).
That would be great. Looking forward to it!
 

kathampy

New Member
Oct 25, 2017
17
11
3
I have ConnectX-4 100 Gb/s NICs in SR-IOV mode. I can confirm that:
  1. Inter-VM traffic is limited to the physical link speed (the physical switch is only 40 Gb/s in my case).
  2. If the physical link goes down, the virtual interfaces go down as well and inter-VM switching does not work. It does not matter whether you have a switch or a single host on the other end of the cable.
I'm not sure whether the embedded switch is being used instead of the hypervisor's vswitch. There is no obvious way to configure the feature.
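On a Linux/KVM host the ConnectX embedded-switch mode can at least be inspected and changed with devlink; I haven't found an equivalent knob exposed on ESXi, so treat this as a sketch only (the PCI address is a placeholder):

Code:
    # Query the current embedded-switch mode (typically 'legacy' or 'switchdev')
    devlink dev eswitch show pci/0000:03:00.0

    # Switch to switchdev mode, which exposes per-VF representor ports on the host
    devlink dev eswitch set pci/0000:03:00.0 mode switchdev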

It seems I have to use VMXNET3 if I want 100 Gb/s iSCSI between my VMs, since the physical switch port is only 40 Gb/s.
 