HP T740 Thin Client Review: TinyMiniMicro with PCIe Slot

  • Thread starter: Patrick Kennedy

tinfoil3d · QSFP28 · Japan · joined May 11, 2020
Nice. So what, it has enough airflow to actually run something like a dual SFP28 card permanently and be okay?
What did you guys actually run (and for how long) in this system?
 

WANg · Well-Known Member · New York, NY · joined Jun 10, 2018
Nice. So what, it has enough airflow to actually run something like a dual SFP28 card permanently and be okay?
What did you guys actually run (and for how long) in this system?
Mellanox ConnectX-3 VPI, specifically. And it's not really the airflow so much as the card being rather efficient (6-10W max thermals) - it was working fine previously in the t730, so it was just a straight-up swap for the t740, which has been running almost 24/7 since Feb 2021 (minus some downtime so the N40L underneath could get TrueNAS version upgrades, a PSU replacement back in June, and a UPS installed in front of it so a lightning strike won't kill the PSU anymore...).
 

WANg · Well-Known Member · New York, NY · joined Jun 10, 2018
You mean the 353A-FCBT (cross-flashed or whatever)? Or even dual QSFP+?
MCX354A-FCBT. It's a fairly common EOL dual 40GbE card, often going for 35-45 dollars on evilbay. Combine it with some passive QSFP DACs (something like a NetApp X6595-R6) and it's a quick, fast, and very efficient way to get into 40GbE. Too bad Mellanox skimped on the drivers and dropped RoCE/RDMA support for ESXi 7.
 

WANg · Well-Known Member · New York, NY · joined Jun 10, 2018
@WANg do you pronounce it "waang" or "wong"? I pronounce it "wong" per the character.
The former. It was supposed to be a tribute to Wang An (the pioneer Chinese-American computer engineer and inventor), Charles B. Wang (the co-founder of Computer Associates, major philanthropist, and former New York Islanders owner)...and the penis joke. Yeah, it's supposed to be more like wong (suggesting Cantonese roots, or huang, which is a common phonetic variation), but even the workers at the Charles B. Wang Health Center in Manhattan's Chinatown pronounce it waang, so I stick with that pronunciation. It's also funnier that way.
 

tinfoil3d · QSFP28 · Japan · joined May 11, 2020
MCX354A-FCBT. Too bad Mellanox skimped on the drivers and dropped RoCE/RDMA support for ESXi 7.
Holy cow. Seriously? What does mget_temp say? Did you add any fan other than the CPU one?

Very upset with them dropping the CX3, it still feels like yesterday. Very good product.
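
Re: mget_temp - for anyone who wants to poll their own card, a minimal sketch (assumes the Mellanox MFT tools are installed and `mst start` has been run; the MST device path is just an example and will differ per box):

```python
# Sketch only: shell out to mget_temp for the ASIC temperature.
# /dev/mst/mt4099_pci_cr0 is a typical ConnectX-3 MST device name - adjust to
# whatever `mst status` lists on your system.
import subprocess

dev = "/dev/mst/mt4099_pci_cr0"
out = subprocess.run(["mget_temp", "-d", dev],
                     capture_output=True, text=True, check=True)
print(f"ASIC temperature: {out.stdout.strip()} C")
```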
 

tinfoil3d · QSFP28 · Japan · joined May 11, 2020
The former. It was supposed to be a tribute to Wang An (the pioneer Chinese-American computer engineer and inventor), Charles B. Wang (the co-founder of Computer Associates, major philanthropist, and former New York Islanders owner)...and the penis joke. […]
Is it okay if we call you 1G (wan gee)? Or SFP? :cool:
 

WANg · Well-Known Member · New York, NY · joined Jun 10, 2018
Holy cow. Seriously? What does mget_temp say? Did you add any fan other than the CPU one?

Very upset with them dropping the CX3, it still feels like yesterday. Very good product.
Surprisingly? Temps have been pretty good for the past 11 months - average thermals are probably about 6 watts, which is easily handled by natural convection. It's a dual 40Gbit link serving iSCSI off a raidz1 array (to be upgraded to raidz2 later) of four 7200 RPM spinners in an ancient machine (HP MicroServer G7/N40L), so it's way underutilized (around 250MB/sec max sustained). Still, it was mostly done with future growth in mind (c'mon HPe, MSG11 with something efficient...now?), and it did replace a SolarFlare SFN5122F, which ran at PCIe 2.0 x8 and dual 10Gbit for around that power envelope, so not many complaints there.

Oh, I should mention that the Mellanox was in the t730 first, which was a slower, hotter-running machine with the unfortunate design feature of placing the RAM/APU heatsink brace directly on top of the NIC. Still works just fine.
 

WANg · Well-Known Member · New York, NY · joined Jun 10, 2018
Oh yeah, I should mention that there is a potentially interesting feature with the t740 that can give you an extra 2 ports.
On the SKUs without the WLAN+BT card, there's a vacant M.2 A+E slot near the clock battery...like so. Getting to it is usually a pain, since the ribbon for the USB2 ports gets in the way and it sits next to the blower fan...

[photo of the vacant M.2 A+E slot]

However, if you put an M.2 A+E (2230) to MiniPCIe extender with a long enough FFC ribbon and then put something on the end of that MiniPCIe slot, something interesting happens (how you package it later to make it work is up to you, but the end that goes to the ports is an SFF8087 cable, so it can punch through the option port, or you can 3D print a bracket for it)...

[photos of the extender setup and accompanying screenshots]

I can has extra 2 ports? 2-3 watts of extra thermals won't do it much harm - it won't give you SR-IOV in ESXi 6.5, but it certainly will in Proxmox (quick sanity check below)...

(the same trick should work on the t540/t640, but you might want to consider using a small fan to put some airflow in the chassis).
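
If anyone tries this, a quick way to confirm the extra ports actually enumerated is to walk sysfs - just a sketch, assuming Linux; the interface names and drivers will obviously differ per box:

```python
# List physical NICs with their PCI address and bound driver, so the two
# extra ports from the MiniPCIe adapter are easy to spot.
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    dev = iface / "device"
    if not dev.exists():        # skip lo, bridges, and other virtual interfaces
        continue
    pci_addr = dev.resolve().name              # e.g. 0000:02:00.0
    driver = (dev / "driver").resolve().name   # e.g. igb, mlx4_en
    print(f"{iface.name:<12} {pci_addr:<14} {driver}")
```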
 

tinfoil3d · QSFP28 · Japan · joined May 11, 2020
Yeah, I've done that mPCIe extension with other old machines; pretty cool that there's a lot of stuff out there these days.
The only feature we really miss is breakout on the QSFP+ ports of Mellanox cards. Imagine having 8 separate network interfaces, or even 4, with just a 353A? That would have been beyond awesome.
 

bryan_v · Active Member · Toronto, Ontario · joined Nov 5, 2021
@WANg or whoever, has anyone tried getting VFs to work with an Intel NIC like an X710 or E810?

I tried to get VFs for an X710 to work in an M720q and kept on hitting a BIOS/BAR problem where Debian and CentOS refused to allocate the VFs.
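
For reference, this is roughly the allocation step that gets refused (the sysfs path and PCI address are just examples; needs root, and the PF has to expose SR-IOV in the first place):

```python
# Sketch: ask the kernel for VFs on an SR-IOV capable PF via sysfs.
from pathlib import Path

pf = Path("/sys/bus/pci/devices/0000:01:00.0")   # hypothetical X710 PF address
print("PF advertises", (pf / "sriov_totalvfs").read_text().strip(), "VFs max")

try:
    (pf / "sriov_numvfs").write_text("4")        # request 4 VFs
except OSError as err:
    # This is where Debian/CentOS bail on the M720q; dmesg then shows the
    # BAR/MMIO resource allocation complaints.
    print("VF allocation refused:", err)
```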
 

bryan_v · Active Member · Toronto, Ontario · joined Nov 5, 2021
@WANg yeup, I know you had a problem with VFs on the Twinville card (X520), which I agree might have something to do with how the BIOS is playing with the OS. It's the same problem I had with the M720q, albeit you got a lot further than I did (CentOS just refused to allocate the VFs, but had no problem passing one of the NIC ports directly through to a VM). The Fortville cards (X710) are supposed to have much better support for VF NICs via SR-IOV, as well as better guest VM support (i.e. there's a dedicated Linux driver for the VF NICs that looks like it's updated almost every two months).

I think csp-guy had tried the X710-T4L, which is an inferno, so he never got far enough, but the SFP+ cards like the X710-DA2 have a much smaller thermal envelope, especially when paired with DACs.

The power of Fortville and Columbiaville, or at least why I'm a fan of them, is the ability to use the card like a virtual switch via SR-IOV and Flow Director. For example, I can get almost 50-80Gbps between two guest VMs with SR-IOV NIC ports from the same X710-DA2 card, with no extra software or configuration and essentially zero CPU overhead. Using OVS, which is pretty much standard across the board, will max out at 25Gbps while chewing through CPU cycles. This means that even though the hardware ports are 10G or 40G, it's possible to max out the PCIe bandwidth as long as you're routing between VMs on the same box.
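
To make the "card as a switch" part concrete, the per-VF plumbing is just iproute2 against the PF - a quick sketch (interface name and MACs are made up):

```python
# Pin a MAC and VLAN on each VF before handing it to a VM; the X710's
# embedded switch then forwards VF-to-VF traffic in hardware.
import subprocess

PF = "enp1s0f0"   # hypothetical X710 PF netdev name

def vf_set(*args):
    subprocess.run(["ip", "link", "set", "dev", PF, *args], check=True)

vf_set("vf", "0", "mac", "02:00:00:00:00:10", "vlan", "10")
vf_set("vf", "1", "mac", "02:00:00:00:00:11", "vlan", "10")
vf_set("vf", "0", "spoofchk", "on")
vf_set("vf", "1", "spoofchk", "on")
```

After that it's just a normal PCI passthrough of each VF into its guest.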

Right now at work, the devs are using this feature (albeit on beefier Tyan boxes) to automate load testing of apps without being network-bound. At home, though, you could theoretically simulate a SAN or a cluster environment with no CPU overhead and everything contained on one box. I personally want to get DPDK working so I can route >5Gbps without using TNSR (since pfSense/BSD maxes out any CPU at ~5Gbps).

The one option I hadn't tried, and am not super keen to try either, is to recompile Debian with additional BAR space, bypassing the BIOS limitation, to see if it fixes the problem on the M720q (Jeff Geerling actually gave me the idea based on his RasPi issues). If the T740s don't have this issue, I'd totally switch (also because I've been a fan of AMD embedded APUs ever since I got my first ALIX box from PC Engines and loaded pfSense on it).
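
For anyone curious what the card is already asking for, the existing BAR sizes are easy to read out of sysfs (sketch; the PCI address is an example) - the VF BARs come on top of that, which is presumably what runs past the BIOS's MMIO window on the M720q:

```python
# Print the size of each BAR the PF currently claims.
from pathlib import Path

addr = "0000:01:00.0"   # hypothetical X710 PF
lines = Path(f"/sys/bus/pci/devices/{addr}/resource").read_text().splitlines()
for i, line in enumerate(lines[:6]):                 # first 6 entries = BARs 0-5
    start, end, flags = (int(x, 16) for x in line.split())
    if start:
        print(f"BAR{i}: {(end - start + 1) / 2**20:g} MiB (flags {flags:#x})")
```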
 

WANg · Well-Known Member · New York, NY · joined Jun 10, 2018
@WANg yeup, I know you had a problem with VFs on the Twinville card (X520), which I agree might have something to do with how the BIOS is playing with the OS. […]
Eh, I never actually tested the X520 with Proxmox, and I know for a fact that SR-IOV will not work with ESXi for anything, mostly because ESXi doesn't seem to play well with consumer machines in general.

I did test it with the Solarflare 7122 and the Mellanox CX3 - I mean, I would love to buy an X710-DA2 for testing purposes, but they are around 200 dollars each on evilBay (I got the X520 simply because it was less than 50 bucks for a Supermicro-branded SFF model).

So, several things to remember about the t740:

a) SR-IOV works (access control services, or ACSCtl+), but PCIe Alternative Routing-ID Interpretation forwarding (ARIFwd) is not present, so you are limited to passing up to 7 VFs in total on the system. That's why it was a slight cringe moment when @Patrick mentioned it in the video. It's there, but it's a limited feature (see the lspci snippet at the end of this post).

b) Resizable BAR wasn't really a thing on AMD consumer machines, and while it is theoretically possible on anything Intel Haswell and beyond, it's not usually implemented. That's why the BAR usually stays at around 16MB no matter how you try to resize it using userspace toggles.

Supposedly AMD rolled out support for the 2000-series Raven Ridge Ryzen APUs (the V1756B is really just a Ryzen 5 2600H but with more thermal tolerance and longer availability), but it depends on the BIOS and whether it incorporates the AGESA version (1.1.1) that enables it - some B450 boards got it, but that's up to the vendor. Not sure if the AGESA updates will make it to the HP t740...ever. Proxmox has resizable BAR support in its kernel from what I remember, but unless the hardware plays ball, it won't work.
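
And for reference, the ACSCtl/ARIFwd bits from a) are straight out of lspci; if you want to check your own box, something like this will pull them out (just a sketch - run it as root to get the full capability dump):

```python
# Filter the ACS control and ARI forwarding lines out of `lspci -vvv`.
import subprocess

out = subprocess.run(["lspci", "-vvv"], capture_output=True, text=True).stdout
device = ""
for line in out.splitlines():
    if line and not line[0].isspace():   # device header, e.g. "01:00.0 Ethernet controller: ..."
        device = line
    elif "ACSCtl" in line or "ARIFwd" in line:
        print(device)
        print("   ", line.strip())
```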
 