Build’s Name: The stealth environment
Operating System/ Storage Platform:
Thin client: VMware ESXi 6.5U3
N40L: iXSystems FreeNAS 11.1U7
CPU:
Thin client: AMD RX427BB (Bald Eagle platform, Steamroller quad cores)
N40L: AMD Turion II Neo N40L (K10/Geneva, dual cores)
Motherboard:
n/a (both are HP Proprietary)
Chassis:
Thin client: HP t730 Thin client
N40L: HP Microserver Gen 7/N40L
Drives:
Thin client: 32GB Sandisk SSD, M.2 Key-BM
N40L: 1 Toshiba 16GB consumer class Thumbdrive + 4 HGST Deskstar NAS 4TB drives
RAM:
Thin Client: 2x16GB DDR3L 204 Pin Laptop DIMM (Nemix V1L3SF16GB1G81G816)
N40L: 2x8 GB DDR3 240 Pin desktop DIMM (G.Skill Ares F3-1333C9D-16GAO)
Add-in Cards (Originally):
Thin Client: SolarFlare SFN5122F
N40L: Solarflare Flareon SFN7322F
Add-in Cards (Current as of Mid-2019):
Mellanox ConnectX3 VPI (MCX354As) - on both ends
Power Supply:
n/a (Both are stock HP, although the thin client uses a 90W HP 7.4mm "black ring tip" type Notebook barrel power brick)
Other Bits:
A pair of generic 1m QSFP28 Twinax cables
Usage Profile: Small, quiet, low profile, secure server/VM hosting environment for the home office
Other information…
Okay, so here's the situation - I live in a small-ish NYC apartment, and I also run IT for a mid-size company. I do quite a bit of self-study on the side and need to reproduce issues on a constant basis. I also have a crapload of personal digital artifacts (photos of past vacations, archives of old software) that I need to stash away in a reasonably safe and redundant environment. The missus and I treat our home as our oasis of relaxation, so the last thing I want is a massive rackmount environment with loads of fans making an appreciable dent in the electric bill. So back in 2011 I picked up a Microserver Gen 7, which became my VMware ESXi 5 environment with a bunch of decommissioned desktop drives ranging from 250GB to 2TB. However, the machine does not do any type of RAID, so the drives were simply tossed in as JBOD and exposed as multiple datastores. Of course, it's 2018, and a netbook-class processor with a CPUMark score of 930 is not going to cut it (even though all I really run is a CentOS kickstart environment, an rsync setup for locally caching the CentOS repositories, a DNS server (djbdns/dnscache), a Debian FAI environment, a triplet of Juniper JunOS Olive VMs, and a Windows XP VM).
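(Side note for the curious: the repo caching piece is nothing fancy - just a cron-driven rsync pull from a public CentOS mirror into a directory that gets served over HTTP to the kickstart clients. A rough sketch, with the mirror URL and local paths as placeholders rather than my actual setup:)

```
#!/bin/sh
# Nightly CentOS repo sync - mirror URL and paths below are placeholders.
# Run from cron, e.g.:  30 2 * * * /usr/local/bin/centos-mirror.sh
MIRROR="rsync://mirror.example.org/centos"   # pick a nearby rsync-capable mirror
DEST="/srv/mirror/centos"

mkdir -p "${DEST}/7/os/x86_64" "${DEST}/7/updates/x86_64"

rsync -avSH --delete --exclude 'isos/' \
      "${MIRROR}/7/os/x86_64/"      "${DEST}/7/os/x86_64/"
rsync -avSH --delete \
      "${MIRROR}/7/updates/x86_64/" "${DEST}/7/updates/x86_64/"
```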
So, what to do? Well, I recently picked up an HP t730 Thin Client on eBay for $200, which is a fairly high end device that can drive 4 LCD panels simultaneously via DisplayPort - quad-core AMD, Radeon graphics, 8GB RAM and a 32GB SSD DOM, and it can run either Windows 10 IoT (licensed) or HP's simplified version of Ubuntu Linux called ThinPro (both come with some nice disk image management utilities). HP RGS/RDP/NoMachine NX/ThinLinc duties aside, it's also more powerful than you'd assume. I also have some leftover RAM from my gig, and 4 HGST 4TB NAS drives that I picked up from Newegg during Black Friday last year.
Now, in terms of what was / needs to be done:
a) Shut down and migrate the VMs off the HP Microserver G7 onto another machine via VMware Converter.
b) Shut down the Microserver Gen 7 and swap the VMware ESXi OS (installed on an 8GB thumbdrive) for something that can manage 4 drives - like iXSystems FreeNAS, which is FreeBSD based. Bonus - it can be installed on a thumbdrive, supports ZFS (so raidz pools are totally feasible) and can serve extents off those pools via iSCSI. Then swap the old drives out for the 4TB drives.
c) Set up something that can serve the bits out over an iSCSI network - maybe a quad-port 1GbE NIC with teaming, or perhaps 10GbE. I just so happen to have a pair of Solarflare Flareon SFN7322F 10GbE PTP cards and a bunch of SFN5122F 10GbE cards.
d) Build a VMware ESXi 6.5 ISO with the Realtek R8169 and Solarflare drivers pre-baked, then image the ISO onto an installer thumbdrive. Install the OS onto the 32GB SSD in the t730 thin client, install 16/32GB of RAM in the thin client, and add a 10GbE card. Fire up ESXi on the thin client and test/make sure everything is working to this point.
e) Wire the two machines up with a pair of SFP+ Twinax cables, configure the FreeNAS box to do raidz1 across the 4 drives, put the 10GbE ports on an iSCSI network, then configure an iSCSI target with an extent backed by the volume (see the FreeNAS and ESXi CLI sketches after this list). On the thin client side, configure ESXi as a multipath iSCSI initiator and mount the extent.
f) Once ESXi boots up and mounts the iSCSI datastore, migrate the VMs back in. Test and verify accordingly.
g) On the same big raidz1 pool, set up an SMB share that can be used to consolidate and retire the myriad of network drives currently scattered around the house.
h) Figure out what to do about offline backups. Something LTO-4 or LTO-5 based, and preferably SATA or eSATA capable (it needs to hang off the Microserver G7, and ideally be driven from the FreeNAS GUI).
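For anyone following along, the FreeNAS side of steps b/e/g boils down to roughly the following. I did it through the web GUI, but the equivalent shell commands give a clearer picture of what's actually happening (pool, zvol and dataset names are made up for illustration, and device names will differ on your box):

```
# Build the raidz1 pool across the four 4TB drives
zpool create tank raidz1 ada0 ada1 ada2 ada3

# Carve out a sparse zvol to act as the iSCSI extent backing the ESXi datastore
zfs create -s -V 8T -o volblocksize=64K tank/esxi_extent

# Plain dataset for the household SMB share
zfs create -o compression=lz4 tank/house_share

# Sanity check
zpool status tank
```

The iSCSI portal/target/extent mapping and the SMB service itself are then wired up under Sharing and Services in the FreeNAS GUI.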
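And the ESXi side of steps d-f looks roughly like this from the ESXi shell - again just a sketch; the adapter name (vmhba64), vmkernel ports, portal address and device ID are placeholders for whatever your host actually reports:

```
# Enable the software iSCSI initiator and find its adapter name
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list

# Bind one vmkernel port per 10GbE uplink so both paths get used
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Point dynamic discovery at the FreeNAS portal and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.2:3260
esxcli storage core adapter rescan --adapter=vmhba64

# Set the discovered LUN to round-robin across both paths
esxcli storage nmp device set --device=naa.XXXXXXXXXXXXXXXX --psp=VMW_PSP_RR
```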
This is how the current setup looks in the house. I can post some photos of the work done.
So, why use a t730 instead of, say, a NUC, or a Microserver Gen 8 or 10? And why de-couple the storage from the processing?
- It's cheap. Did I mention that you can pick one up on eBay for about $200, and there are quite a few available? Most thin clients have practically no resale value on the secondary market, as they are a bit of a niche device. If I choose to upgrade the ESXi environment later on, well, I still have a powerful quad-screen thin client, which can be repurposed for, gosh, anything. Emulate a PlayStation 4 with it one day? (No, just kidding. Not enough GPU horsepower, but it's definitely okay for PPSSPP.) Adding a pair of Mellanox ConnectX-4s, Chelsio Terminator T420s or Solarflare SFN5122Fs (remember to specify a low profile bracket) will probably run an extra $60 or so. Add a pair of generic SFP+ DAC cables and you'll be done at around $300. The t730 usually ships ready to go with 8GB RAM and a small SSD. If you buy a NUC you usually have to buy some extras - even for an older bundle with a Haswell or Broadwell i5/i7, it's still going to be $400 at the very least. Besides, how many $200, quad DisplayPort capable NUCs with dual 10GbE networking support do you know of?
- The thin client is absolutely silent, and it has no problems with the Solarflare SFN5122F inside (the Flareon will lock up the t730 after about 10 minutes of use regardless of which OS is running... my guess is that it is just too power hungry or runs too hot). Both the Flareon and the 5122F have no issues inside the N40L Microserver, and performance seems decent enough. Power consumption is quite low, around 30W in most cases.
- The performance is actually pretty good. CPUMark for the RX427BB is 4227, which is not too far from the 4420 quoted for the Xeon E3-1220L v2 found in some HP Microserver Gen 8s. It's definitely faster than the G1610T/G2020 in the base model Gen 8s or, oddly enough, the Opteron X3216 in the recently released Gen 10 Microservers. For the amount HPE charges for a Gen 10, you are better off buying a t730 and keeping the Gen 7 running as a dedicated NAS. As for competing against NUCs, it's not bad for the price point either - it's roughly the same performance as an i5-6260U (4366 CPUMark), but unlike the NUC, you can get 10GbE networking running on it.
- The machine has an official 16GB RAM ceiling, but 32GB will work as long as each DIMM is dual-rank (the SoC can only support four ranks in total). This is analogous to most Broadwell and newer NUCs. A bigger issue, though, is validating its IOMMU for SR-IOV or VMDirectPath - as shipped, the BIOS does not support things like ARI (Alternative Routing-ID Interpretation) or PCIe ACS (Access Control Services), nor does it allocate enough MMIO space to handle VMDirectPath, even though the hardware is more than capable of it (see the quick check after this list if you want to poke at your own box).
- Once you have the HP Microserver Gen 7 configured as a 10GbE iSCSI box, well, you can reuse it for the next upgrade. If I ever get around to that $2000 SuperMicro Xeon-D 1541 box (5028D-TN4T), I can still use the Gen 7 as an attached NAS. I hate the idea of sending nominally useful hardware to the landfill, and this little black box has been with me since 2011 - no e-waste allowed here. My only misgiving is that the N40L is still a 4-bay device. In theory I could use a custom caddy in the optical drive bay to add a few 2.5" 7mm SATA SSDs, but the resulting credit card statement would earn me the evil eye from the missus (AKA the one who shall be obeyed).
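(On the IOMMU point above: if you want to check your own box for the same limitations, the quickest way is to boot a Linux live image and poke around. A rough sketch:)

```
# Did the firmware publish a usable IOMMU? (AMD-Vi on this platform)
dmesg | grep -i -e AMD-Vi -e IOMMU

# List the IOMMU groups - if everything lands in a handful of giant groups,
# there is no usable ACS isolation for passthrough
find /sys/kernel/iommu_groups/ -type l

# See whether the PCIe bridges/devices actually expose ACS and ARI capabilities
sudo lspci -vvv | grep -i -e 'Access Control Services' -e 'Alternative Routing-ID'
```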