HP Z840 Upgrades or new build


understandingdata

New Member
Feb 12, 2024
3
0
1
I recently bought a Z840 and added a few SAS drives, 256 GB of DDR4, and two Xeon E5-2697A CPUs for a Proxmox node that acts as the main server in my home network, but also runs some small business applications. I'm running into storage issues: my 2x2 mirror HDD pool is nearing 70% capacity, and PCIe bifurcation with NVMe drives is not performing the way I hoped. It seems I'm already reaching the limits of this machine, and I will need a lot more in the coming years.

I'm looking for options: either ways to expand my current storage, or suggestions for a more scalable build.
Main use case:
- 10-20 VMs
- fast databases
- fast networking
- TrueNAS node with >100 TB of storage
- experimenting with k3s/k8s

I currently have lots of cores and lots of RAM, and the performance is OK, but I'm open to upgrading the system altogether. I'm interested in hearing about more modern, scalable options. Ideally I'd have a server I can use as a workhorse while also tinkering, something good enough until 2030. The H13SSL looks interesting and future-proof?
 

zachj

Active Member
Apr 17, 2019
161
106
43
I think the Z840 will be a dinosaur in 2030… the newest CPU it supports came out in, what, 2015? Hard to imagine you'll be happy with a 15-year-old server.

you absolutely can stuff 16x 8TB SATA SSDs into the front drive bays using a 5.25” drive cage. I currently have 16x 1TB in such a cage and it works fine. You just need an HBA…

if you want NVMe, take a look at the OWC U2 Shuttle. You can pop quad M.2 drives behind a PCIe bridge into a standard U.2 form factor. Four of those attached to a single x16 bifurcation card would get you 16 NVMe drives.

in my humble opinion the HP Z6 G4 would be a great upgrade path, and it's already really cheap.

I'd say keep what you have until it doesn't work and then see what else is out there.

what specific problem are you having with your storage and bifurcation?
 

Zedicus

Member
Jul 12, 2018
52
21
8
If you wanted to go the EPYC route, I would go with the Siena 8000 series. All the perks of the 9000 stuff but lower power (and a little less per-core performance). But in most edge server use cases, RAM and storage are more important, so the 8000 series would be the winner.

The HP Z6 G4 is honestly not a bad option for the cost, though.
 

louie1961

Active Member
May 15, 2023
171
69
28
I have a Z640 as a Proxmox node. I think you will be just fine with that setup. My only beef with it is power consumption; mine idles at around 70 watts. Here's what I did to max out storage:
1. I put in one of these Icy dock 4 bay racks that fits into one of the 5.25 external drive bays https://www.amazon.com/gp/product/B09NWL61TF/
2. I also put in an Asus Hyper M.2 NVMe expansion card https://www.amazon.com/ASUS-M-2-X16-Expansion-Card/dp/B084HMHGSP

The motherboard on my Z640 has six SATA connectors. I used two spinning drives in the internal drive bays and four SSDs in the Icy Dock, plus the M.2 NVMe drives. On top of that I have an external NAS which serves up an NFS share to this Proxmox host for backups and slow storage (ISOs and container templates). I mirrored two SATA SSDs as the boot drive, used the NVMe drives for VM storage, and passed the spinning drives through to a TrueNAS VM.

You should have 8 SATA/SAS ports on the Z840, so you could always use a different Icy Dock arrangement to add more 2.5-inch bays. Plus you have tons of PCIe slots on that machine, so adding a second HBA is always an option. Likewise, you can always add a 10GbE (or faster) NIC to that board as well.
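To see what you're already using, something like this on the Proxmox host will inventory your controllers and how each disk is attached (just a sketch; the output will obviously differ per machine):

```shell
# List storage controllers on the PCIe bus (if pciutils is installed).
command -v lspci >/dev/null && lspci | grep -iE 'sata|sas|raid|nvme'

# List block devices with their size and transport (sata, sas, nvme).
lsblk -d -o NAME,SIZE,TRAN,MODEL
```

That makes it easy to spot free SATA ports versus drives already hanging off the HBA before buying another card.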
 

understandingdata

New Member
Feb 12, 2024
3
0
1
Thanks for the replies.

I'm having speed issues; they might also be related to how I set things up on my node:
  • Proxmox as the hypervisor, with a PCIe enterprise SSD for the root FS
  • TrueNAS VM with:
    • slow pool: the internal HBA flashed to IT mode and passed through, with a 2x mirror vdev of large enterprise HDDs
    • fast pool: a PCIe NVMe bifurcation card passed through, with a 1x mirror vdev of Samsung 980s
  • Ubuntu VM with a bunch of containers that mount NFS shares from the TrueNAS VM; all of the container data resides on the fast pool over NFS.
I wanted to a) decouple compute from storage and b) get data safety with replication and snapshots, but it turns out to be very slow.
The system has an internal dual 10GbE NIC. iperf3 hits maximum targets bidirectionally (probably because it's all in memory?), but copying something from within the TrueNAS host is 10-25x faster than copying from the Ubuntu VM.
Any insight into how I could improve/debug this is highly appreciated.
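For reference, here is roughly the kind of test that makes the difference visible (a sketch; TARGET is a placeholder for the NFS mount or the local dataset path, defaulting to /tmp):

```shell
#!/bin/sh
# Compare sequential-write speed vs many-small-files speed on a directory.
# TARGET is an assumption: point it at the NFS-mounted fast pool, then at
# the same dataset locally on the TrueNAS side, and compare the numbers.
TARGET="${TARGET:-/tmp}"

# Sequential write: one large fsync'd file; NFS usually handles this well.
dd if=/dev/zero of="$TARGET/seqtest" bs=1M count=100 conv=fsync 2>&1 | tail -n 1

# Small-file write: each create/close is a synchronous round trip over NFS,
# which is the access pattern that collapses to a few MB/s.
start=$(date +%s)
i=1
while [ "$i" -le 500 ]; do
    head -c 4096 /dev/zero > "$TARGET/small_$i"
    i=$((i + 1))
done
sync
end=$(date +%s)
echo "500 x 4 KiB files in $((end - start))s"

rm -f "$TARGET/seqtest" "$TARGET"/small_*
```

If the sequential number is fine but the small-file loop is dramatically slower over NFS than locally, the bottleneck is per-operation latency rather than bandwidth.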

I'm also having capacity issues. I have filled up the 4 internal bays with SAS drives in a mirror configuration, giving me only 36 TB (2x2x18 TB) of slow storage. All of the container data (appdata too) lives on the fast pool (NVMe), and there's only room for 2 more NVMe drives. It's not a lot. Thanks for the suggestions for 4x 2.5" in a 5.25" drive bay; I also found a 3.5" variant that fits 3x 3.5" into the 2x 5.25" slots. I will consider both, especially with SSDs becoming sizeable. Is there an easy way to expand externally without going into the rack domain?

My use case will expand in the future, and I want to be ready. I was mainly considering the HxxSSL because it seems like a really scalable system in a rack-size form factor anyway, and this would mean I could get a server case with e.g. 12x 3.5" slots.
 

louie1961

Active Member
May 15, 2023
171
69
28
If you are having speed issues, I would look at networking first. Make sure you are not being limited by data going out to a slower switch and/or firewall/router. If your VMs are on the same VLAN as your TrueNAS, I would create a bridge and a virtual network switch for them inside of Proxmox, so the packets don't have to leave the physical box.
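As a sketch, such a host-internal bridge in /etc/network/interfaces on the Proxmox host could look like this (vmbr1 and the subnet are assumptions; adjust to your setup):

```
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
# No physical port attached: traffic between VMs on vmbr1 never leaves
# the host. Give both the TrueNAS VM and the Ubuntu VM a second virtio
# NIC on vmbr1, and mount the NFS share via the 10.10.10.x address.
```

With no physical bridge-port, inter-VM traffic runs at virtio speed regardless of what switch sits outside the box.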
 

louie1961

Active Member
May 15, 2023
171
69
28
My usecase will expand in the future, and I want to be ready. I was mainly considering the HxxSSL because it seems like a really scalable system is rack-size anyway, and this would mean I could take a server case with e.g. 12x3.5" slots.
Honestly I think you are limited by your case only, unless or until you know you are limited by the CPU speed. That motherboard has plenty of slots and ports. It may be cheaper for you to swap the board into a different case with more storage capacity.

The other thing to consider is switching from a mirror arrangement to RAID 5 or RAID-Z1. You will get more capacity and more sequential speed, especially if you can dedicate an SSD or NVMe drive as a cache drive.
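A quick sanity check on the capacity difference for four 18 TB drives (just arithmetic; ZFS metadata overhead ignored):

```shell
drives=4; size_tb=18
# Striped mirrors (2x2): half the raw space goes to redundancy.
echo "2x2 mirrors: $(( drives / 2 * size_tb )) TB usable"
# Single RAID-Z1 vdev: roughly one drive's worth goes to parity.
echo "RAID-Z1:     $(( (drives - 1) * size_tb )) TB usable"
```

So the same four drives go from 36 TB usable to roughly 54 TB usable, at the cost of slower resilvers and worse random-write behavior than mirrors.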
 

understandingdata

New Member
Feb 12, 2024
3
0
1
I will try to hoard less and figure out my current limitations :D
The VMs are on the same device: TrueNAS and Ubuntu Server run on the same node with a 1GbE internal NIC.

I have considered and tested the usual "slow NFS but SMB and local access are fast" advice --> it describes some tuning in TrueNAS, which had no real effect.

The speed of the NFS share is terrible. iperf3 shows 26 Gbit/s, which is out-of-control fast, but rsyncing a lot of small files (e.g. DBs) drops to 10 (!!) MB/s. For streaming large files the system is fast enough. I thought adding an NVMe PCIe card would give me excellent speed for databases and small files, but I think I am now bottlenecked by the NFS (1GbE) share.
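One thing still on my list is ruling out sync writes, since every small NFS write lands on the pool as a sync write. As a diagnostic only (the dataset name is a placeholder, and running like this is unsafe for real database data):

```
# Temporarily disable sync on the fast-pool dataset, then rerun the
# small-file rsync test:
zfs set sync=disabled fastpool/appdata
# ...test, then restore the default:
zfs set sync=standard fastpool/appdata
# If throughput jumps, the usual fix is a low-latency SLOG device
# (or an async NFS export, with its data-loss caveats).
```
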

I am not sure what to check configuration-wise to see if I made any silly mistakes; can you braindump what you think?