HP t730 Thin Client as an HP Microserver Gen7 Upgrade


fossxplorer

Active Member
Mar 17, 2016
Oslo, Norway
I finally got some time (after a year or so) to play with some of the Mellanox ConnectX-3 cards I bought for cheap.
On my HP t730 the card isn't even detected on the PCIe bus. I have two other Haswell Core i3/i5 systems that do detect it, but unfortunately I can't pass the ConnectX-3 through to a supported OS to install the MLNX_OFED drivers and update the firmware etc.: on those boxes I get "IOMMU not present". The i5-4590 does seem to have Intel VT-d, but I guess my HP ProDesk 600 G1 with its chipset isn't capable.
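For what it's worth, "IOMMU not present" from Proxmox is often just the kernel command line rather than the hardware. A rough sanity check, assuming an Intel host booting through GRUB:

dmesg | grep -e DMAR -e IOMMU     # does the kernel see an IOMMU at all?
nano /etc/default/grub            # add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT
update-grub && reboot             # VT-d must also be enabled in the BIOS/UEFI setup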

All three of these machines are running Proxmox 6.x, and I'm now planning to install CentOS 7 on another computer just to flash the cards.
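Side note: flashing probably doesn't need the full MLNX_OFED stack - the standalone mstflint package can usually query and burn ConnectX-3 firmware on its own. Roughly, using the PCI address from lspci and a firmware image downloaded for the exact board (the image filename below is just a placeholder):

apt install mstflint                               # yum install mstflint on CentOS 7
mstflint -d 01:00.0 query                          # shows the PSID and current firmware version
mstflint -d 01:00.0 -i fw-ConnectX3Pro.bin burn    # burn the downloaded image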


@WANg @arglebargle any idea why the card isn't detected at all by the t730? I have both the standard ConnectX-3 and the Pro VPI (314A-BBCT).

Perhaps cross-flashing the cards will help on the t730?

EDIT1: Tested with an Intel i350 4-port card in the PCIe slot of the t730 and it's detected just fine.
 

WANg

Well-Known Member
Jun 10, 2018
New York, NY
@WANg @arglebargle any idea why the card isn't detected at all by the t730? I have both the standard ConnectX-3 and the Pro VPI (314A-BBCT).

EDIT1: Tested with an Intel i350 4-port card in the PCIe slot of the t730 and it's detected just fine.
Eh, which specific Mellanox ConnectX-3 is it, and which BIOS version are you on in the machine? Mine is an MCX354A-FCBT on 1.08 and works just fine out of the box with ESXi 6.5U2.
 

fossxplorer

Active Member
Mar 17, 2016
Oslo, Norway
@WANg, so far I have not been able to install MLNX_OFED and the tools to get details, but it's an MCX314A-BCCT (so it's neither -BCBT nor VPI):
root@pve02:~# lspci -vv | grep Mellanox
01:00.0 Ethernet controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
Subsystem: Mellanox Technologies MT27520 Family [ConnectX-3 Pro] (Mellanox Technologies ConnectX-3 Pro Stand-up dual-port 40GbE MCX314A-BCCT)

BIOS:
root@pve3:~# dmidecode -t bios | grep Version
Version: L43 v01.14

I have not tried the non-Pro models in the t730, but I still find it strange that it's not detected at all.
 

cromo

Member
Jun 6, 2019
I am configuring pfSense on the t730 for the first time ever, moving from an older Linux routing box.

I have an i350-T4 PCIe card installed (igb0-3): igb3 is WAN, and igb0-2 are bridged together for LAN. However, following the official manual for bridging doesn't get me far - I can see the DHCP offers being given to my client:

Sep 28 20:02:19 pfSense dhcpd: DHCPDISCOVER from 64:4b:f0:01:xx:xx (MaokPro134) via bridge0
Sep 28 20:02:19 pfSense dhcpd: DHCPOFFER on 10.0.1.10 to 64:4b:f0:01:xx:xx (MaokPro134) via bridge0


However, tcpdump on the client's interface shows it does not receive any response at all from the router – not even the ARP packets.

So I investigated a bit and:
  • If I take one of the Ethernet interfaces (igb2) out of the bridge and assign it explicitly to LAN, the traffic comes through and DHCP works as expected.
  • If I assign the PFs to 3 separate bridges in Proxmox and then bridge those in pfSense, it also works fine.
  • If I instead add the VFs to 3 separate bridges in Proxmox, they don't work anymore, even without bridging them on the pfSense side (VF setup roughly as sketched below).
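For context, a common way to spin up the VFs on the Proxmox host is the sysfs knob, after which they behave like any other NIC - interface name and count below are just examples:

echo 1 > /sys/class/net/enp1s0f0/device/sriov_numvfs   # spawn a VF on the first i350 port
ip -d link show                                        # the VF appears as an extra igbvf netdev
# then list that new netdev as a bridge-port of the relevant vmbr in /etc/network/interfaces as usual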
I am running out of ideas here. My understanding is that the VFs should work in the underlying OS like a regular NIC, but it looks like bridging them either on the hypervisor side or on the guest side doesn't work.

What am I missing here?
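One thing worth checking with pfSense bridges is where the packet filter actually runs: by default FreeBSD filters on the member interfaces rather than on bridge0 itself, so rules assigned to the bridge may never see the traffic. The usual tunables (under System > Advanced > System Tunables) are these - treat it as a guess, not a diagnosis:

net.link.bridge.pfil_member=0     # stop filtering on igb0/igb1/igb2 individually
net.link.bridge.pfil_bridge=1     # filter on bridge0, where the rules are assigned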

PS. All hardware offloading options are disabled.

EDIT: checked with OPNsense, same result.
 

cromo

Member
Jun 6, 2019
@arglebargle What's the conclusion on pfsense/FreeBSD VFs? The hardware can do it, but the FreeBSD drivers won't work with VFs?
I believe the VFs work fine. I tested them and everything works *unless* I bridge them in the VM, or bridge them *outside* a VM and pass them through as such.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
@arglebargle What's the conclusion on pfsense/FreeBSD VFs? The hardware can do it, but the FreeBSD drivers won't work with VFs?
Months late reply here, sorry.

I ran into a lot of trouble with passthrough because almost all of the NIC drivers error out or misbehave without explicit host ACS support. The only drivers that worked well without ACS were Mellanox, and the only time drivers didn't cause problems was when I passed Mellanox VFs to a Linux guest that also had recent drivers.

I was able to spin up a guest VM running FreeBSD 12 and find/use Mellanox VFs successfully without host ACS, but after shutting the guest down the VF was left in an indeterminate state and couldn't be used until I rebooted the host. pfSense 2.4.x (FreeBSD 11-based) was almost entirely a no-go all around.

I kinda came to the conclusion that my best option for virtualization on the T730 was to just run DPDK/OpenvSwitch and do everything in high performance software.
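For anyone heading down that road: the plain Open vSwitch side on Proxmox is only a couple of commands; the DPDK datapath is a separate and considerably more involved setup. Bridge and port names here are just examples:

apt install openvswitch-switch
ovs-vsctl add-br vmbr1                 # OVS bridge the VMs attach to
ovs-vsctl add-port vmbr1 enp1s0f0      # uplink a physical port into it
ovs-vsctl show                         # sanity check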
 

TheJaguar

New Member
Jan 19, 2020
EDIT: found one, so ignore the post.


I am looking for a dual- or quad-port NIC for my HP T730 thin client box. I would like it to be power efficient (so i340 or i350, right?), which adds around 5W on top of the 16W that the T730 itself will consume. My initial use case will be for pfsense, but I do plan on virtualizing it for other functions later.

Please suggest possible options and/or links (ebay?) where I can get a genuine one for cheap - thanks much!
 

TheJaguar

New Member
Jan 19, 2020
Well, you'll need a custom 6.5/6.7 ISO, as the default ESXi ISO does not contain drivers for the Realtek NIC embedded on the t730 (unless you bought the Broadcom TG3/Tigon-based AT29M2 fiber NIC, in which case it'll work just fine), and something that can make a bootable USB drive out of an ISO.

You'll probably need to grab the Realtek drivers from a site like this, and then download/run the ISO image builder - basically, the strategy here is to add the "community supported" Realtek drivers into the ISO, and disable Secure Boot in the t730's BIOS (as it'll kick up the purple screen of death on boot with a custom ESXi ISO).

After you are done, grab a sufficiently large, sufficiently fast USB thumb drive and something like Rufus/UNetbootin to image the drive, then see if the machine will boot off it. As for the install target, use something like a SanDisk Ultra Fit USB 3.1 drive; they are cheap even at 64GB capacity, which is good for one of the internal USB ports on the t730.

Oh yeah, just remember that there is no SR-IOV support on ESXi for the t730. If you need that for low-latency packet flipping for VMs, use Proxmox with the ARIFwd patch instead.

Do you mind posting the custom ESXi image for the T730 with Realtek drivers?

EDIT: Actually, let me rephrase the question a bit - I am looking to run pfsense in a VM on my T730. Would Proxmox be better, or ESXi?
 

vudu

Member
Dec 30, 2017
Would Proxmox be better, or ESXi?
Depends. Personally I would choose Proxmox. All the benefits of open source without the restrictions of ESXi. And of course ZFS is THE killer feature.

When running PFSense on Proxmox, don't forget to disable "Hardware Checksum Offloading".

Good luck!
 

TheJaguar

New Member
Jan 19, 2020
Depends. Personally I would choose Proxmox. All the benefits of open source without the restrictions of ESXi. And of course ZFS is THE killer feature.

When running PFSense on Proxmox, don't forget to disable "Hardware Checksum Offloading".

Good luck!
Got it - thanks much. Is SR-IOV possible on Proxmox? If it is, I've read that it should enable passthrough and fast packet switching - is that accurate? Without it, will the rig still be capable of handling 1 gig Ethernet? I plan on having a couple of VMs - one for pfsense, one with Docker containers for Sonarr etc., and another one that will possibly run Windows. Should I run OpenVPN within the pfsense VM or as a separate VM?
 

vudu

Member
Dec 30, 2017
You're welcome. SR-IOV on Proxmox: doable, with caveats. Haven't tested OpenVPN on PFSense on Proxmox to extremes, but I suspect it will work with the right hardware. Not sure what that is, though. Same with your other requirements. I'd start with the T730 and scale up as required.

I run PFSense on the T610 Plus and T730 natively, as well as some virtualised PFSense on HP dual 6-core Gen7s with great results (NVMe drives, SSDs, lotsa RAM and high-spec CPUs). All fairly cheap and accessible hardware platforms. What I like most about VMs on Proxmox with ZFS is the ability to upgrade the underlying hardware using ZFS send/recv (Sanoid/Syncoid) between devices with minimal downtime.
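If it helps, the send/recv part really is a one-liner per disk with syncoid - dataset and host names here are just placeholders:

syncoid rpool/data/vm-100-disk-0 root@newbox:rpool/data/vm-100-disk-0   # incremental after the first run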

PFSense can be migrated across hardware platforms natively via the built-in backup/restore function, paying attention to interface assignment. Having spare hardware and a test environment at your disposal helps a lot and is something I would recommend.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
Got it - thanks much. Is SR-IOV possible on Proxmox? If it is, I've read that it should enable passthrough and fast packet switching - is that accurate? Without it, will the rig still be capable of handling 1 gig Ethernet? I plan on having a couple of VMs - one for pfsense, one with Docker containers for Sonarr etc., and another one that will possibly run Windows. Should I run OpenVPN within the pfsense VM or as a separate VM?
Proxmox will do SR-IOV but you won't have a ton of success with the actual hardware because the T730 doesn't support PCIe ACS. You can kind of hack it to work but various drivers will offer functionality between "not working at all" and "kind of broken" without ACS. You're probably best off using openvswitch and dpdk or just accepting the overhead from standard virtualization.
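For reference, the usual hack for missing ACS on Proxmox is the ACS override flag (the Proxmox kernel ships with the patch) - it only relaxes the IOMMU grouping and does nothing for real isolation between devices:

# append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
#   pcie_acs_override=downstream,multifunction
update-grub && reboot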
 

TheJaguar

New Member
Jan 19, 2020
You're welcome. SR-IOV on Proxmox: doable, with caveats. Haven't tested OpenVPN on PFSense on Proxmox to extremes, but I suspect it will work with the right hardware. Not sure what that is, though. Same with your other requirements. I'd start with the T730 and scale up as required.

I run PFSense on the T610 Plus and T730 natively, as well as some virtualised PFSense on HP dual 6-core Gen7s with great results (NVMe drives, SSDs, lotsa RAM and high-spec CPUs). All fairly cheap and accessible hardware platforms. What I like most about VMs on Proxmox with ZFS is the ability to upgrade the underlying hardware using ZFS send/recv (Sanoid/Syncoid) between devices with minimal downtime.

PFSense can be migrated across hardware platforms natively via the built-in backup/restore function, paying attention to interface assignment. Having spare hardware and a test environment at your disposal helps a lot and is something I would recommend.
Ah, interesting that you are running it native and not using it for any other purpose. I was under the impression that the T730 is powerful enough to be virtualized for other tasks. I am hoping to keep my power consumption on the lower side and also not have high load factors on my CPU after virtualizing. I did read that there is an option of virtualizing within FreeBSD too - not sure if anybody has tried that yet.

Proxmox will do SR-IOV but you won't have a ton of success with the actual hardware because the T730 doesn't support PCIe ACS. You can kind of hack it to work but various drivers will offer functionality between "not working at all" and "kind of broken" without ACS. You're probably best off using openvswitch and dpdk or just accepting the overhead from standard virtualization.
That's a shame - I was under the impression that the hardware supported it (unlike the T620 Plus). I plan on using this with a 4-port Intel gigabit NIC, so I can go the route of disabling the onboard Realtek NIC too. If I were to go the route of openvswitch and dpdk, how much of an overhead would I be looking at? Will I still be able to run one more VM with Docker?
 

WANg

Well-Known Member
Jun 10, 2018
New York, NY
Ah, interesting that you are running it native and not using it for any other purpose. I was under the impression that the T730 is powerful enough to be virtualized for other tasks. I am hoping to keep my power consumption on the lower side and also not have high load factors on my CPU after virtualizing. I did read that there is an option of virtualizing within FreeBSD too - not sure if anybody has tried that yet.

That's a shame - I was under the impression that the hardware supported it (unlike the T620 Plus). I plan on using this with a 4-port Intel gigabit NIC, so I can go the route of disabling the onboard Realtek NIC too. If I were to go the route of openvswitch and dpdk, how much of an overhead would I be looking at? Will I still be able to run one more VM with Docker?
Well, the t730 is sold as an overpowered thin client, so whatever virtualization magic we are doing on it...is lagniappe/gravy (in fact, I heard that the later BIOS will disable SVM and require the feature to be manually re-exposed). I don't even think that most of the Intel NUCs out there will do PCIe ACS (it's a bit of a niche feature), so I don't hold it against the t730 if it doesn't do SRIOV. If you don't want to deal with the Realtek there's always the fiber NIC, which is Broadcom Tigon based.

I was supposed to receive a t740 thin client for evaluation from one of our channel guys this month, but...well, most of them are backordered due to some mysterious reason...
 

fossxplorer

Active Member
Mar 17, 2016
Oslo, Norway
That future BIOS version with SVM disabled worries me a bit. Does that mean we'd need to hack the BIOS to get SVM enabled?
One option is to not upgrade to the newer BIOS, but...
 

vudu

Member
Dec 30, 2017
Ah, interesting that you are running it native and not using it for any other purpose.
In my usage scenario, power usage is less of a concern, and as a native PFSense box this thing is FAST! I do have an i5 / 32GB RAM NUC with Proxmox for testing and it is fairly capable if power is an issue.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
I was supposed to receive a t740 thin client for evaluation from one of our channel guys this month, but...well, most of them are backordered due to some mysterious reason...
Ironically, China's economy is going to be back up and running a hell of a lot faster than the West's. I've heard estimates of 2-3 months of lag time on orders from the big hardware companies if what you're buying isn't already in a warehouse somewhere in NA, which isn't bad all things considered.

I've been tempted a couple of times to gut my DFI DT-122s and rebuild them with Ryzen mITX hardware just to avoid waiting the couple of years it'll take for T740s to drop to under $400 reliably on fleaBay.
 

TheJaguar

New Member
Jan 19, 2020
Well, the t730 is sold as an overpowered thin client, so whatever virtualization magic we are doing on it...is lagniappe/gravy (in fact, I heard that the later BIOS will disable SVM and require the feature to be manually re-exposed).
That future BIOS version with SVM disabled worries me a bit. Does that mean we'd need to hack the BIOS to get SVM enabled?
One option is to not upgrade to the newer BIOS, but...
Oh my - this is concerning for sure. v1.15 is safe, right? That was the most recent release.

In my usage scenario, power usage is less of a concern, and as a native PFSense box this thing is FAST! I do have an i5 / 32GB RAM NUC with Proxmox for testing and it is fairly capable if power is an issue.
Got it - not planning on running too many VMs - 2 to 3 at most. Let's see how it goes...