Some information about HP T620 Plus Flexible Thin Client machines for network appliance builds...


WANg

Well-Known Member
Jun 10, 2018
OwnDrive said:
> @WANg finally got the SODIMMs through customs...
>
> more text later
>
> EDIT1: So I can confirm 2x 16GB SODIMMs are working with the t620 Plus. :D
> On the flip side, it seems the seller sold me the DIMMs without testing them, so when I woke up this morning, MemTest86 8.1 (UEFI version) was giving errors. I'm now running MemTest with only one of the DIMMs, since one DIMM had a marking of "I" or "L" on two of its chips, which I find very strange (was it somehow marked as faulty/broken?). So far, 1/4 of the way through a pass on the other DIMM, it's still running without any errors.
>
> It's now been over 2 months with delays etc., so I doubt I can claim any refund?
Hm - thanks. So it looks like the t620 Plus can rock 32GB of RAM max. What's the configuration of those DIMMs - how many ranks, and what are their speed/CL/voltage values? I wonder if this will work across the board for all GX420CA hardware (like a certain Arista 40GbE switch with the same embedded SoC)...

...And yeah, I doubt the seller will take it back now.
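
If anyone wants to pull those DIMM details without cracking open the BIOS, dmidecode from a Linux live USB will report them - a quick sketch (needs root; exact field names vary a bit by SMBIOS version):

# Per-DIMM details: size, speed, ranks, voltage, part number
sudo dmidecode -t memory | grep -E 'Size|Speed|Rank|Voltage|Part Number'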
 

WANg

Well-Known Member
Jun 10, 2018
> Maybe these will finally get updated now that AMD announced new goodies.
> Of course, new = DDR4 = $$$$$$$
Not likely soon. Thin clients play to a totally different drumbeat - over there it's really about reliability and long-term parts availability, and the R-series "eagle" embedded APUs are offered with 10 years of committed production availability. If you look at HP's offerings versus what their competitors have (Dell/Wyse, 10Zig, LG, etc.), the t620/630/730 series are already top-of-the-line in terms of specs. Most 10Zig / Dell Wyse models use weaker processors and often soldered RAM or a single DIMM socket.
 

SwanRonson

Member
Sep 27, 2018
> Not likely soon. Thin clients play to a totally different drumbeat - over there it's really about reliability and long-term parts availability [...] Most 10Zig / Dell Wyse models use weaker processors and often soldered RAM or a single DIMM socket.

Darn. I suppose it was wishful thinking, hoping power draw would be more important than it is ;)
 

adamb

New Member
Feb 1, 2019
Hi all,
First post here... I'm looking for any experience people have had running pfSense virtualised under ESXi on a t620 Plus (currently ESXi 6.7 with the VMXNET3 driver). I put a 128GB SanDisk M.2 drive and 16GB of RAM in mine, and have pfSense 2.4.4-p2 and an Ubuntu Linux VM running fine - or at least that was my initial impression. I have a 4-port Intel I340 NIC installed, with a dedicated port assigned to each workload (1 port for WAN and 1 for LAN on pfSense, 1 for LAN on the other VM, 1 for management).

I had a t610 Plus previously running pfSense natively with just 4GB of RAM and a 2-core CPU; it never broke a sweat and I always hit max throughput in my tests. Running pfSense under ESXi with virtual NICs, there may be some issue with how interrupts are handled by FreeBSD under the hood. Whenever I hit my virtual pfSense instance with a lot of traffic, CPU use rockets and I can see the interrupt handler for each vmx NIC chewing through CPU cycles. I'm also not getting anywhere near max throughput.

Just wondered if anyone else has pfSense virtualised on similar hardware and has seen anything like this? If need be I'll install pfSense natively, but I've kind of got this where I want it from a usage point of view, as I use the other Linux VM for 'always on' activities, negating the need to have another machine running.

Anyway, any feedback appreciated.

thanks
Adam.
 

WANg

Well-Known Member
Jun 10, 2018
> Hi all,
> First post here... I'm looking for any experience people have had running pfSense virtualised under ESXi on a t620 Plus (currently ESXi 6.7 with the VMXNET3 driver). [...] Whenever I hit my virtual pfSense instance with a lot of traffic, CPU use rockets and I can see the interrupt handler for each vmx NIC chewing through CPU cycles. [...]
Okay - several things (a few quick checks for these are sketched below):

a) Is AMD-V enabled on your t620 Plus? How much physical RAM does the t620 Plus have, and how is it allocated to the guest VMs?
b) Which version of ESXi - 5.5, 6.0, 6.5 or 6.7?
c) Which firmware/BIOS release is it? I remember someone in this thread mentioning that CPU utilization went up and NIC throughput went down after a particular release.
d) How is traffic passed into the pfSense VM? Does it cross the built-in Realtek NIC (which is known to be problematic)? Are you using e1000e or vmxnet3 as the vNIC on the pfSense VM?
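
A minimal sketch for checking a couple of these, assuming SSH access to the ESXi host and shell access on the pfSense VM (exact output varies by build):

# On the ESXi host: confirm the hypervisor version and build
vmware -vl

# Inside the pfSense VM: confirm which vNIC driver attached
# (vmx = vmxnet3, em = e1000/e1000e)
dmesg | grep -iE '^(vmx|em)[0-9]'

AMD-V itself is toggled in the t620's BIOS rather than from the OS; one quick indirect test is that ESXi won't power on 64-bit guests without it.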
 

rpotter28

New Member
Nov 20, 2018
For anyone wondering, I upgraded to Gigabit fibre and my T620 plus is handling it just fine. Just did a speed test:

[speed test screenshot]
 

adamb

New Member
Feb 1, 2019
> Okay - several things:
> a) Is AMD-V enabled on your t620 Plus? How much physical RAM does the t620 Plus have, and how is it allocated to the guest VMs? [...]
> d) How is traffic passed into the pfSense VM? [...] Are you using e1000e or vmxnet3 as the vNIC on the pfSense VM?
 

RAM: 16GB (4GB allocated to each VM), and 4 cores to each
ESXi: 6.7
AMD-V: enabled
BIOS: will check tonight - it's a 2016 release, from memory
NICs: all VMs use the Intel I340 4-port NIC; the Realtek is not configured or used. Ports are allocated inside VMware as previously described (VMXNET3).

I'll try applying the latest BIOS and see what happens.

<Edit>

So I applied the latest 2018 BIOS and, sadly, no change. Still seeing a large CPU spike when hitting my link.

Since the last reboot, about 20 mins ago:

[2.4.4-RELEASE][admin@pfSense.localdomain]/root: vmstat -i
interrupt                          total       rate
irq1: atkbd0                           1          0
irq18: uhci0                        1523          2
cpu0:timer                         40012         48
cpu3:timer                         34319         41
cpu2:timer                         31058         37
cpu1:timer                         37150         45
irq256: ahci0                        422          1
irq258: mpt0                       10950         13
irq267: vmx0                      322473        386
irq276: vmx1                      244030        292
Total                             721938        865


These are the counts and rates of interrupts on the vmx NICs (vmx0 is WAN, vmx1 is LAN).

I've no idea whether these numbers are typical, but either way the issue seems to remain for now.
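
If anyone wants to watch the interrupt load live rather than from cumulative counters, FreeBSD's stock tools should do it - a minimal sketch, run from the pfSense shell:

# Refresh vmstat-style counters (including per-device interrupt rates) every second
systat -vmstat 1

# Show kernel threads (including interrupt threads) with per-CPU stats
top -HSP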

Any other feedback welcome.

thanks
Adam
 

yodaphone

New Member
Feb 21, 2019
> So after going to update the BIOS on my in-use t620s and realizing what a PITA it is, I developed an easier way. Some posts and HP readmes reference an easy-to-use "flash ROM" option in the BIOS, but none of my t620s have had this. Note: everything below applies to both the t620 and the t620 Plus - same BIOS for both.
>
> Also, HP's site is a ****ing mess, as always (I do not miss having to use their servers). If you run through all the OS options on their driver download page, it turns out the very latest BIOS version (released four months ago!), v2.17, is hidden under "Windows 7 Embedded" - because it's HP and **** you, that's why.
>
> I noticed the BIOS download comes with an EFI shell application to update the BIOS, so I simply used the open-source EFI shell binary from the EDK2 project - https://github.com/tianocore/edk2 - to create a bootable EFI shell image, put the HP BIOS update EFI application in the root of it renamed to update-t620.efi, and wrote all of this out to an ISO file linked here: https://fohdeesha.com/data/other/t620-bios-v218.iso
>
> So just use your favorite tool to write that to a USB drive (GPT or MBR, doesn't matter) and EFI-boot the thumb drive on the t620. Make sure EFI boot devices are enabled and Secure Boot is disabled in the BIOS settings. It'll boot to a simple EFI command line - just type update-t620 and it'll begin the BIOS update. Once it finishes, just power off/reboot and remove your flash drive.
>
> EDIT: updated ISO to v2.18 BIOS

Will this BIOS work on the t620 Plus, or is it only for the t620?
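
For reference, writing that ISO out from a Linux box can be as simple as dd - a sketch, where /dev/sdX is a placeholder for your actual USB device (double-check it; everything on the stick is destroyed):

# Write the bootable EFI shell image to the USB stick
sudo dd if=t620-bios-v218.iso of=/dev/sdX bs=4M status=progress conv=fsync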
 

Flazabancy

New Member
Dec 20, 2018
Hiya -

> RAM: 16GB (4GB allocated to each VM), and 4 cores to each

Not sure if the above is a typo, but I think there are only 4 cores total in the t620 Plus. Can you over-provision cores in ESXi? That might be causing the CPU utilization issues.
 

adamb

New Member
Feb 1, 2019
> Not sure if the above is a typo, but I think there are only 4 cores total in the t620 Plus. Can you over-provision cores in ESXi? That might be causing the CPU utilization issues.
Fair comment, but it behaves the same way even with the other VM powered off. I had to give it 4 cores to stop it hitting 100% CPU every time my link maxed out (15Mbit up, 200Mbit down).

I'll probably drop the virtualisation anyway and go back to a bare-metal install. It's just nice to have ESXi, as it means you can spin up new versions or apply patches and changes, and if it all goes wrong, roll back to your snapshot :)

But at the end of the day I'd rather have it be performant.

I think this is a common issue (from reading elsewhere) and I'm guessing there is no easy fix. In fact, this could well be expected behaviour that is just more obvious on low-end hardware.

Appreciate the responses nonetheless.

regards
Ad.
 

WANg

Well-Known Member
Jun 10, 2018
> Fair comment, but it behaves the same way even with the other VM powered off. I had to give it 4 cores to stop it hitting 100% CPU every time my link maxed out (15Mbit up, 200Mbit down). [...] I'll probably drop the virtualisation anyway and go back to a bare-metal install. [...]
Whoops, sorry - was fairly busy over the weekend. Anyway:

a) While the GX420CA is a decent little chip (it's used in the Arista 10/40GbE switches), it's not great for virtualization. The Jaguar/Puma cores are only so-so (around 2400 PassMarks, roughly the performance of an Intel Core i5-540M), and the single-channel RAM controller means that memory-copy operations simply, well, suck.

b) If you want to run pfSense under a hypervisor, get something that can do SR-IOV / PCIe passthrough, so the hypervisor won't have to handle packets and flip them into the VMs - that's always computationally expensive, and running vmxnet3 simply makes it worse (the e1000e emulation in VMware is self-throttling, while vmxnet3 implies pushing as hard as the hardware can handle). The GX420CA in the t620 Plus does not have SR-IOV support, so no PCIe passthrough is possible, AFAIK. Technically the RX427BB APU in the t730 thin clients can do it (as can certain Core i5/i7s from, say, Haswell and up), but the problems are these:

- You need something that supports PCIe Access Control Services (the ACSCtl flag in lspci -vv output), since SR-IOV needs it to pass the PCIe passthrough sanity check (you don't really want SR-IOV passing functions into device groups that don't make sense). That depends on both hardware capability and the BIOS/UEFI implementation. Even if your CPU can do, say, VT-d or AMD-Vi, it doesn't mean ACS is baked in; if it's not, you'll need a Linux kernel patch to work around the lack of ACS. Also, with no ACS implementation you won't get Alternative Routing-ID Interpretation (ARI), so you can only pass up to 7 SR-IOV Virtual Functions. As you can guess, hackery like this is not supported by VMware ESXi - it's strictly a Linux affair. Neither the t620 Plus nor the t730 has VMDirectPath/passthrough support in ESXi. Not sure about the DFI DT122-BE (the cheap, noisy industrial-computer equivalent of the t730), but I am going to say... very unlikely.

- Your network card must be able to hand out SR-IOV Virtual Functions to the hypervisor so the hypervisor can dish them out to the guest VMs - that's supported on most Mellanox, Intel and Solarflare cards.

- Your network card must have FreeBSD driver support for virtual functions as a VM guest. That's fairly problematic - as far as I can tell, only the Intel 82599/i350 and newer work; Solarflare and Mellanox cards both don't seem to work well with VFs.

So yeah, you are much better off running bare-metal pfSense on the t620 Plus, and if you really need hypervisor capability, use bhyve to run VMs on it instead. (Quick ways to check the ACS/VF points above are sketched below.)
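
For anyone who wants to poke at the first two bullets on a Linux box, the checks are quick - a sketch, with the PCI address and interface name as placeholders for your own hardware:

# Does the platform expose Access Control Services on this device?
sudo lspci -vv -s 01:00.0 | grep -i 'Access Control'

# How many Virtual Functions does the NIC advertise?
cat /sys/class/net/enp1s0f0/device/sriov_totalvfs

# Carve out 4 VFs (assumes the NIC driver supports the standard sysfs knob)
echo 4 | sudo tee /sys/class/net/enp1s0f0/device/sriov_numvfs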
 

adamb

New Member
Feb 1, 2019
> Whoops, sorry - was fairly busy over the weekend. Anyway:
> a) While the GX420CA is a decent little chip (it's used in the Arista 10/40GbE switches), it's not great for virtualization. [...]
> b) If you want to run pfSense under a hypervisor, get something that can do SR-IOV / PCIe passthrough [...]
> So yeah, you are much better off running bare-metal pfSense on the t620 Plus, and if you really need hypervisor capability, use bhyve to run VMs on it instead.

Thanks so much for the detailed response. After some more digging I'd kind of come to that conclusion, and was starting the search for a couple of t730s to use as my lab/VM nodes, along with some i350-based Intel cards (not clones - I've seen the posts about those!).

I'll have a look at bhyve though...

Thanks,
Ad.
 

DaveO

New Member
Mar 12, 2019
For anyone who has upgraded the BIOS to the 2018 release: is there a way to enable virtualization? It was there in previous versions, but I don't see it in the newest one.
 

adamb

New Member
Feb 1, 2019
> For anyone who has upgraded the BIOS to the 2018 release: is there a way to enable virtualization? It was there in previous versions, but I don't see it in the newest one.

From memory, I believe it's under the Security menus.

Thanks
 

yodaphone

New Member
Feb 21, 2019
Hi,

A newbie here.

I got the t620 Plus and installed a 4-port Supermicro Intel NIC.

Even though I have configured pfSense with the 4-port NIC, when I reboot the machine pfSense picks up the re0 card and stops.

Is there a way to disable the built-in Realtek NIC? I didn't see anything in the BIOS for this.
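
One thing that may be worth trying (untested on this exact box, so treat it as a sketch): FreeBSD can be told not to attach a device at boot via a loader hint, which on pfSense would go in /boot/loader.conf.local:

# /boot/loader.conf.local - prevent the onboard Realtek (re0) from attaching
hint.re.0.disabled="1"

After a reboot, the Realtek port should disappear from the interface list, so pfSense's interface assignment can't latch onto re0.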