VMware 6.5 and A2SDi-16C-HLN4F (2-node cluster)


Craig Thomson

New Member
Mar 5, 2018
18
15
3
Checking in to see how ESXi 6.7 changed/fixed things.
Thanks for letting me know ESXi 6.7 had been released. I wasn't aware.

I pulled out the ixgbe drivers from the 6.7 ISO. There is still no reference in them to either the X553 or the device IDs 15E4 and 15C8. I haven't been able to install 6.7 yet since I'm still waiting for my motherboard to be delivered. Once it's here I'll do that and see exactly what I get.
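For anyone who wants to do the same check on a host that's already up and running, this is roughly how I look for the device IDs (the exact paths and tool names here are from memory, so double-check them on your build):
Code:
# List PCI devices with their vendor:device IDs and the driver bound to them
vmkchdev -l | grep -i 15e4

# See which PCI IDs the inbox ixgbe driver is mapped against
grep -i ixgbe /etc/vmware/driver.map.d/*.map
If the X553 IDs (8086:15e4 and friends) don't appear in the ixgbe map, the inbox driver won't claim the ports.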

On the code merge project, I spent about 8 hours this weekend working my way through some of the header files. I've only merged about 10% of the total code so, as I suspected, this is going to take some time. I still want to do this, it just won't be quick.

The more code I look at, the more I'm thinking it might be easier to do option 2 - try to add the VMware modifications from VMware v4.5.3 into Intel v5.1.3. The reason I'm leaning this way is that so far the total amount of code in the VMware modifications appears to be much less than the X553 specific code. Once I've done the first pass on the merge I'll have a better understanding of the full picture and can then make a determination.
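For anyone curious how I'm sizing the two options up, it's really just recursive diffs between the source trees, something like this (the directory names are placeholders for wherever you've unpacked each release):
Code:
# Isolate the VMware-specific changes (closest stock Intel release vs VMware's 4.5.3)
diff -ruN intel-ixgbe-base/ vmware-ixgbe-4.5.3/ > vmware-mods.patch

# Isolate the newer X553-era code (same Intel base vs Intel 5.1.3)
diff -ruN intel-ixgbe-base/ intel-ixgbe-5.1.3/ > x553-additions.patch

# Crude measure of how big each change set is
wc -l vmware-mods.patch x553-additions.patch
The line counts are only a rough guide, but so far the VMware patch is clearly the smaller of the two.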
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Do you have a power meter? Would you mind using it to measure the system's idle power draw under ESXi?

Does ESXi still suffer a power consumption deficit compared to Windows in particular, but also to Linux?
 

Craig Thomson

New Member
Mar 5, 2018
18
15
3
Do you have a power meter? Would you mind using it to measure the system's idle power draw under ESXi?

Does ESXi still suffer a power consumption deficit compared to Windows in particular, but also to Linux?
Please start a new thread for your question.

We are discussing ESXi drivers for Intel X553/7 NICs, not power consumption.
 
  • Like
Reactions: lte and omega

omega

New Member
Apr 4, 2018
2
0
1
44
pfSense has been updated to version 2.4.3, which adds support for the Atom C3000 series.
2.4.3 New Features and Changes - pfSense Documentation
Hardware support for the XG-7100, including:
  • C3000 NIC support (factory installations only)
  • C3000 SoC support (factory installations only)
Unfortunately, that's only in the factory install of pfSense. If you download the ISO/USB stick image and install it directly, it appears not to include them.
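If you want to check whether a hand-installed copy picked the ports up, something like this from the pfSense shell should show whether the ix(4) driver attached to them or whether they're sitting unclaimed (I'm going from memory on the exact output):
Code:
# List PCI devices with vendor/device strings; look for the X553 entries
pciconf -lv | grep -B 3 -i x553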

I'm in the same boat as most of you. I bought a SYS-E200-9A and was able to get around the X553 issues by plugging in a quad-port PCIe NIC on a PCIe extension cable, so that I could boot and use the server at home. It's a sketchy setup, and I'd really like to see full support for the X553 in either pfSense or VMware soon.

Something like this, for example, worked for me. It's ugly and I have to keep the NIC taped to the case (with a foam spacer to stop it from shorting against the case), but it works. It does what I need right now for my home lab.

https://www.amazon.com/PCI-Express-...d=1524656126&sr=8-3&keywords=pcie+extender+4x
 

Marco Neri

New Member
Feb 21, 2018
13
1
1
42
I tried installing VMware ESXi 6.7 and can confirm that support for the X553 has not been implemented yet.

Let's wait.
 

Craig Thomson

New Member
Mar 5, 2018
18
15
3
Just wanted to give you all an update.

I'm still working through the code merge. It's proving harder than I originally thought. It's often difficult to determine whether a difference is a VMware addition, change or subtraction. It's clear some of the original Intel driver core code has been deliberately removed by VMware (I assume because the ESXi kernel doesn't support all features of the driver). The hard part is trying to determine whether those missing parts are required by X553/7 or can be safely left out.

I also found out that in order to compile ESXi drivers you must use the ESXi toolchain, which I also had to build. That alone took a few weeks to get up and running (the documentation is contradictory and confusing). As a test run, I used the toolchain to compile the VMware ixgbe v4.5.3 source code, then successfully tested the compiled driver on ESXi 6.7. So that, at least, was encouraging.
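For the test run, the install-and-verify loop on the 6.7 box looked roughly like this (the vib name below is just my test build, so substitute your own):
Code:
# Allow community-supported vibs, install the test build, then reboot
esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /tmp/net-ixgbe-test.vib
reboot

# After the reboot, confirm the module loaded and the NICs are visible
vmkload_mod -l | grep ixgbe
esxcli network nic list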

I'm now doing the code merge by comparing 2 different versions of the VMware source code against 4 different versions of the Intel source code. It's the only way I can accurately and safely determine what VMware has added, changed, or removed. As you can probably imagine, that increases the amount of time it takes to compare and merge.

I'm still determined to finish. I hope you can all bear with me. Thanks for your patience.
 

Craig Thomson

New Member
Mar 5, 2018
18
15
3
Well, it's finally ready for release.

I finished the main code merge about two weeks ago. Since then I've been testing and tweaking to ensure the driver loads and operates properly.

I have named the driver ixgbe_x553_7 to indicate that it's the ixgbe driver but specifically for Intel X553/7 devices. In the attached vib, I've mapped the driver to load only for the device IDs listed below.
Code:
8086:15c2, 8086:15c3, 8086:15c4
8086:15c6, 8086:15c7, 8086:15c8
8086:15ca, 8086:15cc, 8086:15ce
8086:15e4, 8086:15e5

I have tested the driver with ESXi 6.7 on my Supermicro A2SDi-16C-HLN4F motherboard which has 4 x X553 NICs (device ID 8086:15e4). I successfully tested the following configurations:
  • ESXi 6.7
    • As a VMkernel NIC
  • VM CentOS 7.4 x64
    • Standard NIC connected to a virtual switch
    • PCI passthrough device
  • VM Win 7 x64
    • Standard NIC connected to a virtual switch
    • PCI passthrough device (device was seen by OS but no Windows driver available)
  • VM Win 10 x64
    • Standard NIC connected to a virtual switch
    • PCI passthrough device (device was seen by OS but no Windows driver available)
The only thing I could not test was SR-IOV passthrough. I included SR-IOV in my code merge so it should work. Unfortunately, my ESXi license does not support SR-IOV so I was unable to test that feature.
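If anyone with an SR-IOV capable license wants to give it a try, the usual approach (untested by me, and assuming the merge kept the stock ixgbe max_vfs module parameter) would be something like:
Code:
# Request e.g. 4 virtual functions per port, then reboot
esxcfg-module -s "max_vfs=4,4,4,4" ixgbe_x553_7
reboot

# After the reboot, check whether any SR-IOV capable NICs show up
esxcli network sriovnic list
Please post back if you test this, whether it works or not.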

Throughput on the NICs during my testing was between 30 MB/sec and 80 MB/sec (in both directions), but I was using an old 1TB HDD as a datastore, which would have negatively affected performance (I didn't have a spare SSD available, unfortunately).
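If anyone wants to measure the NICs without the datastore in the path, a plain memory-to-memory test between a VM and another machine on the same segment should give a truer number (assuming iperf3 is available on both ends):
Code:
# On the far machine (server side)
iperf3 -s

# In the VM (client side): 60 second run, then the same in reverse
iperf3 -c <server-ip> -t 60
iperf3 -c <server-ip> -t 60 -R
I haven't done a pure network-only run myself yet, which is why the datastore was a factor in the numbers above.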

You can install the vib as follows (unzip it first):
Code:
root@esxi:~ # esxcli software acceptance set --level=CommunitySupported
root@esxi:~ # esxcli software vib install -v /tmp/net-ixgbe_x553_7-4.5.3-5.x86_64.vib
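After the install, you can confirm the vib is present and the ports came up with something like:
Code:
root@esxi:~ # esxcli software vib list | grep ixgbe_x553_7
root@esxi:~ # esxcli network nic list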

Sorry this took so long and thank you all for being so patient. :)

Edit: I forgot to mention that I used the ESXi 6.5 toolchain to compile the driver (and it was linked against the 6.5 vmklinux_module.c source), so it will work on 6.5 or later.
 

Attachments

Last edited:

Craig Thomson

New Member
Mar 5, 2018
18
15
3
I found a post on the VMware forum where a user successfully modified the ixgbe-5.3.6 Linux driver.
Here is the link to the forum: Atom C3758/X553 GbE driver for ESXi 6.5/6.7 | VMware Communities
Using iSCSI everything seems to work correctly, but there are some problems with the virtual switch.

Try it too; maybe we can solve the problem.
By the way, from what I can tell, the X553 driver provided in the link above was compiled from the stock Intel source code. In other words, it does not contain the VMware code modifications. I suspect this is why users are reporting problems with it.
 
Last edited:

Marco Neri

New Member
Feb 21, 2018
13
1
1
42
Hi Craig, great job. Your driver is already installed and working in my environment. I can confirm that it works great. Thank you so much, you've solved a real problem.

 
  • Like
Reactions: Craig Thomson

Craig Thomson

New Member
Mar 5, 2018
18
15
3
Hi Craig, great job. Your driver is already installed and working in my environment. I can confirm that it works great. Thank you so much, you've solved a real problem.
Thank you Marco. That's good to hear.

So far I've only been able to do basic testing of the driver. Hopefully, it will just work fine, but if you discover any problems then please post the info here and I'll look into it. Thanks.
 

Craig Thomson

New Member
Mar 5, 2018
18
15
3
Last week I took my Supermicro A2SDi-16C-HLN4F motherboard (which I'd been using solely for testing while doing the driver code merge) and rebuilt it as my production server. That finally allowed me to do some proper performance testing (SSDs at both ends, dedicated test network).

Everything went well and, same as before, I was getting around 80 MB/sec throughput. However, when I took a closer look I noticed some sub-optimal timing values around buffering and flow control.

So, I decided to review the code again and ended up making a few tweaks. When I tested the updated driver I was able to reach a sustained throughput of 90 MB/sec, a performance increase of roughly 12%.
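If you want to see what your ports are doing after the update, the flow control (pause) settings should be visible from the ESXi shell with either of these (ethtool support can vary by driver, so the esxcli form is the safer bet):
Code:
# Pause frame / flow control settings for the first X553 port
ethtool -a vmnic0

# Driver, link state and pause settings as ESXi sees them
esxcli network nic get -n vmnic0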

The updated driver is attached. Hopefully it performs just as well for you. :)
 

Attachments

Scoped

New Member
Apr 14, 2018
2
0
1
42
Thanks, Craig, for the time you've taken to develop this. I'll be testing it too. Will this work on VMware 6.5 or only on 6.7?
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
What @Craig Thomson has done here is really cool and perfect for the community to use. Let's hope native drivers arrive soon enough.