Unneeded home server upgrade I don't need but want to do


Octopuss

Active Member
Jun 30, 2019
Czech Republic
I'm starting a new thread to avoid unnecessary discussions that only make everything more complicated.
But first of all a disclaimer:
This is something I do not need AT ALL. It's purely a combination of "I'm an idiot and have an upgrade itch" and "I like to mess with crap and tweak it for no real gains", so please accept that and help me (if you want) achieve exactly what I want, without suggesting something completely different, unless what I want to do wouldn't work or there is a much more effective way (like pointing out that I do not need to use an HBA with NVMe drives, which I had no idea about).

There is a small home server sitting on a small rack in the corner of the living room.
It runs ESXi, which I'm going to replace with XCP-ng when I find the mental strength to mess around with Linux stuff I know absolutely friggin' zero about. (Installing the server back then took me a solid several weeks of trial and error, frustrated questions, bugging people and googling around until everything worked flawlessly, whereas it would be a matter of a few hours for someone who knows what he's doing.) The box originally served as an all-in-one that also ran virtualized pfSense, which eventually turned out to be a really bad idea under certain circumstances, so I replaced that with a standalone device. The server was also supposed to run more things, like a game server to host co-op sessions for various games, but that never materialized, so it still only runs TrueNAS and an Ubuntu server seedbox, barely doing anything.
The NAS is used to store illicit content (music, films) and regular daily backups from about three computers. It sees very few writes; I download a few hundred GB of stuff once in a blue moon, and the backups are a few tens of GB a month, so all in all I guess endurance is not really a concern for the storage.
The specs:
Supermicro X11SCH-F motherboard
Xeon E-2136 (the hexacore one, I believe)
Intel X710 10Gb NIC (mental note: I need to upgrade my PC in order to be able to put the 10Gb Mellanox card back so this thing makes sense again)
LSI HBA (9305-xx)
4x Samsung PM883 SATA SSDs connected to the HBA and passed through to the TrueNAS VM, in a RAIDZ1 configuration.

I had an upgrade itch of sorts recently, but it quickly morphed into something slightly different after I was corrected about one major oversight I wasn't aware of (the HBA thing). I'm not going to elaborate on that and will instead try to describe what I'd like to do right now.

Again, I preface this by saying there is absolutely nothing wrong with the server performance-wise; it's actually quite overkill for its current purpose.

Mainly I would like to replace the SSDs with NVMe ones for a slight speed boost when copying large files, because I am an extremely impatient person. Those full backups take too long at times and were the reason I put a 10Gb card in the server (and my PC, obviously) at some point. If I wasn't such an impatient idiot, I could have used a Mini-ITX board and put the thing in a much smaller case instead, which is something I've been thinking about for a long time, because the Fractal Node 804 case is just big.
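To illustrate the impatience factor, here's a toy transfer-time calculation (assuming the disks on both ends can keep up, and a made-up ~90% link efficiency):
```python
# Time to move a full backup over the network at different link speeds.
def transfer_minutes(gb: float, link_gbit: float, efficiency: float = 0.9) -> float:
    """Minutes to move `gb` gigabytes over a link running at ~90% efficiency."""
    return gb * 8 / (link_gbit * efficiency) / 60

for link_gbit in (1, 10):
    print(f"100 GB over {link_gbit}GbE: ~{transfer_minutes(100, link_gbit):.1f} min")
# ~14.8 min on gigabit vs ~1.5 min on 10GbE -- hence the 10Gb cards.
```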
Problem: the motherboard does not support bifurcation. Some people claimed it did, but there is no such option in the BIOS and not a word in the manual, and I am pretty certain this particular model simply cannot do it. I'd have to upgrade the server first, which I want to do anyway. The board also only supports PCIe 3.0, and with the X710 card in there I'm sure I wouldn't have enough lanes anyway.

I also want to lower the server's power consumption.
In its current form it idles at around 110W, and I'm sure a modern Ryzen 5-based platform (that's what I'm very preliminarily thinking about) would use a lot less.
Getting rid of the HBA is worth at least 10W by itself, but the platform upgrade (which will have to happen for the SSDs anyway) will do a lot more. Still, everything counts. Of course, there is the question of power consumption of SATA vs NVMe SSDs, but I don't think there should be a huge difference IF I don't buy enterprise-class stuff (I might if it makes sense for various reasons like price, endurance, blabla).
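For scale, a quick back-of-the-envelope on what continuous draw costs per year (the €0.30/kWh rate is a made-up placeholder, not an actual tariff):
```python
# Yearly cost of a device idling 24/7, at an assumed electricity rate.
HOURS_PER_YEAR = 24 * 365

def annual_cost_eur(idle_watts: float, eur_per_kwh: float = 0.30) -> float:
    """Cost of `idle_watts` of continuous draw for one year."""
    return idle_watts * HOURS_PER_YEAR / 1000 * eur_per_kwh

print(f"10W (the HBA): ~€{annual_cost_eur(10):.0f}/year")              # ~€26
print(f"70W (110W -> 40W platform): ~€{annual_cost_eur(70):.0f}/year")  # ~€184
```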

So for now, maybe let's focus on the SSDs?
Disregarding the question of how to physically connect them (assuming neither of the motherboards I have in mind for the upgrade supports bifurcation, and that I'd have to use some sort of lane-splitter adapter or something), I guess I could get away with any desktop junk given the little writing the NAS sees, but I don't want to buy complete crap either.
At the very least, the SSDs should have consistent sustained speeds. They can be PCIe 3.0 for all I care; the speed boost over SATA would be massive anyway.
I would prefer if they weren't too power hungry though.
I took a look at our biggest electronics e-shop and filtered the 2TB M.2 drives, and there's the WD Red SN700, which is supposedly designed for NAS usage. It has huge endurance too (irrelevant, but reassuring) and good sustained speeds according to the STH review. Not terribly expensive at ~€140 either; let's call that the maximum I'm willing to spend on one. It doesn't say what the power consumption is, unfortunately.
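Just to show how irrelevant endurance is here, a quick sanity check (the write volume is a generous guess based on the usage above, and the ~2,500 TBW rating for the 2TB SN700 is from memory, so verify it):
```python
# Years to exhaust a drive's rated endurance at this NAS's write volume.
GB_WRITTEN_PER_MONTH = 300   # generous guess: monthly backups + occasional downloads
RATED_TBW = 2500             # 2TB WD Red SN700 rating, from memory -- verify!

tb_per_year = GB_WRITTEN_PER_MONTH * 12 / 1000
years = RATED_TBW / tb_per_year
print(f"~{tb_per_year:.1f} TB/year -> ~{years:.0f} years to hit {RATED_TBW} TBW")
# Several centuries; even a fraction of that rating would be plenty.
```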
I don't mind buying used drives if the source is reputable though. I don't think enterprise drives are the way to go because I heard U.2 ones get really, really hot (probably use a lot of power too), and I would like to keep the server as quiet as possible (=I am apprehensive about adding extra fans inside).

So, thoughts?
 

986box

Active Member
Oct 14, 2017
Subscribed. Watching this, as I have an X11SCL-F with an E-2246 and an LSI 9300. Usage is 150W with 2 switches and cams.

I'm considering an H13SAE and EPYC 4004 plus an LSI 9500; Intel prices are just too high. Not sure if the ASRock Rack B650D4U issues have been resolved.
 

kapone

Well-Known Member
May 23, 2015
The X11SCH-F has:

- Two M.2 slots running at PCIe 3.0 x4, but... connected to the PCH. And the PCH uplink is DMI 3.0 (8 GT/s × 4 lanes), i.e. ~3.9 GB/s of bandwidth
- Two x8 slots running at PCIe 3.0 x8 each that can potentially be split into four PCIe 3.0 x4 (No native bifurcation support, so...AICs that implement bifurcation and then SSDs plugged into them.)

- Now... one of those x8 slots will be taken up by your NIC... or will it? I'd much rather get an M.2-to-PCIe converter and put the NIC there. Why? PCIe 3.0 x4 is plenty for 10G. Why waste an x8 slot?

So... without any HBA, you can have up to five PCIe 3.0 x4 NVMe (or whatever) SSDs in there plus a 10G NIC. I think that's plenty of slots?
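For reference, the arithmetic behind those figures (a rough sketch; PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, and real-world throughput lands a bit lower):
```python
# Back-of-the-envelope PCIe 3.0 / DMI 3.0 bandwidth math for the X11SCH-F.
GBPS_PER_LANE = 8 * 128 / 130 / 8   # 8 GT/s, 128b/130b encoding -> ~0.985 GB/s

def link_gbps(lanes: int) -> float:
    return lanes * GBPS_PER_LANE

print(f"DMI 3.0 uplink (x4): ~{link_gbps(4):.2f} GB/s shared by both PCH M.2 slots")
print(f"One x8 slot: ~{link_gbps(8):.2f} GB/s, i.e. two x4 SSDs at ~{link_gbps(4):.2f} GB/s each")
print("10GbE raw need: 1.25 GB/s, so an x4 link is plenty for the NIC")
```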

I'm not sure why your system is running at 110W though. That seems WAY too high for your hardware. Your four SSDs idle at <2W each, and the X710 is really not that power hungry. What's consuming all that power??
 

Octopuss

Active Member
Jun 30, 2019
Czech Republic
Nice math! This is outside of my knowledge.
Oh wait, I forgot to add that there are two M.2 SSDs plugged into the motherboard, where ESXi lives (one for the OS and one for the VMs; I liked them separated just in case), but I plan to reduce that to just one once I reinstall the thing with XCP-ng. Would that still work?

No native bifurcation support, so...AICs that implement bifurcation and then SSDs plugged into them.
I presume this would be in the form of an x8 card with four M.2 slots?

I'd much rather get an m.2 to pci-e convertor and put the NIC there.
Where would I plug the NIC though? It kind of has to slot into the MB, doesn't it?

I'm not sure why your system is running at 110w though. That seems WAY too high for your hardware.
I don't know. I'd have to pull out the power meter and try again; maybe I misremembered? I'll check it out later.
 

kapone

Well-Known Member
May 23, 2015
there are two M.2 SSDs plugged into the motherboard, where ESXi lives
Those two SSDs probably don't make a meaningful difference....but....ESXi....that may explain it. Unless you've tuned it, it uses performance power profiles out of the box.

I presume this would be in the form of an x8 card with four M.2 slots?
Two M.2 slots per AIC.

Where would I plug the NIC though? It kind of has to slot into the MB, doesn't it?
No. It all kinda depends on your case and/or DIY skills. It looks like this:

[image: M.2-to-PCIe slot adapter]
You'd have to figure out how to mount it...

maybe I misremembered
I sort of think so. That is WAY too high. My SAN nodes that use old-ass Xeon V2 processors (two E5-2667 v2 each), 256GB RAM and 8x SATA SSDs are ~90W. Yours should be quite a bit less.
 

Octopuss

Active Member
Jun 30, 2019
Czech Republic
I suspected something was off. The server idles (that is, after ESXi has fully booted up) at 48W. So much for lowering power consumption, I guess?
I think I must have mistaken it for the time when I was curious about the power consumption of everything connected to the rack UPS though. That sounds more like it: the server, the 24-port switch, the CWWK router, and a PoE injector feeding one or two Ruckus APs would add up to about 100W.
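Sanity-checking that tally (only the server's 48W is measured; the other figures are my guesses for that class of gear, not measurements):
```python
# Rough tally of everything hanging off the rack UPS.
loads_w = {
    "server (measured)": 48,
    "24-port switch (guess)": 25,
    "CWWK router (guess)": 10,
    "PoE injector + Ruckus AP(s) (guess)": 15,
}
print(f"Total: ~{sum(loads_w.values())}W")  # ~98W, right around the remembered 100W
```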

Anyway.
No. It all kinda depends on your case and/or DIY skills. It looks like this:

[image: M.2-to-PCIe slot adapter]
You'd have to figure out how to mount it...
That wouldn't work. The X710 card is x8... Look.
 

nexox

Well-Known Member
May 3, 2023
I don't think enterprise drives are the way to go because I heard U.2 ones get really, really hot
Some get hot, others aren't so bad; generally the sort you're looking for, which don't feature super-high write performance, run cooler. I have a set of four sitting in front of a quiet 140mm fan with a bit of space around each, and they're running nice and cool.

That wouldn't work. The X710 card is x8... Look.
That listing doesn't show the PCIe connector at the right angle to tell for sure, but these adapters often use an open-backed slot, so an x8 card will fit. If that one doesn't, you should be able to find one that does. A 10G NIC certainly doesn't need the bandwidth of a 3.0 x8 slot, even with two ports, but I have heard some Intel cards don't behave properly when they don't get the full set of lanes.
 

kapone

Well-Known Member
May 23, 2015
I was curious about the power consumption of everything connected to the rack UPS though
I don't even look at my Grafana console....Let's just say the power bill for my rack is....around four digits...per month....:confused:
 

Octopuss

Active Member
Jun 30, 2019
Czech Republic
I am not sure I would choose XCP-ng unless there was a very specific reason for it.
Why? At least it's free, unlike ESXi (where you even need an account to download patches). And I couldn't find any negative reviews for it.

C'mon dude.... :)

I still wouldn't be able to mount it anywhere, and I am not the kind of tinkerer to cut holes into a case and... basically do really funky stuff. I can't even imagine where the card could be.
 

TrevorH

Member
Oct 25, 2024
Why? At least it's free, unlike ESXi (where you even need an account to download patches). And I couldn't find any negative reviews for it.
There are two main Linux-based virtualization mechanisms, KVM and Xen. KVM is built into the Linux kernel, is enabled on most Linux distros, and is the default choice for most people who run VMs under Linux. I suspect KVM installs outnumber Xen installs by a factor of at least 10:1, probably more like 100:1. When I last used Xen, back in ~2015(?), we ran performance tests, and a KVM-based VM on the same hardware was roughly 15% faster. That's one, very old, data point.

Did you look at Proxmox, which is another Linux-based VM solution? It uses KVM behind the scenes, not Xen.
 

kapone

Well-Known Member
May 23, 2015
I still wouldn't be able to mount it anywhere, and I am not the kind of tinkerer to cut holes into a case and... basically do really funky stuff. I can't even imagine where the card could be.
lol. That’s fair.
 

Octopuss

Active Member
Jun 30, 2019
Czech Republic
There are two main Linux-based virtualization mechanisms, KVM and Xen. KVM is built into the Linux kernel, is enabled on most Linux distros, and is the default choice for most people who run VMs under Linux. I suspect KVM installs outnumber Xen installs by a factor of at least 10:1, probably more like 100:1. When I last used Xen, back in ~2015(?), we ran performance tests, and a KVM-based VM on the same hardware was roughly 15% faster. That's one, very old, data point.

Did you look at Proxmox, which is another Linux-based VM solution? It uses KVM behind the scenes, not Xen.
This is an area I really don't have the slightest idea about, so I'm just going off what I googled up last year when I was thinking about what to replace ESXi with (I got fed up with the way you have to go about getting the system updated if you pirate it, which I believe every standalone user does given the price tag). Fortunately I haven't had to do that in a long time, because the server is still on ESXi 6.7, lol. But I just got annoyed by the whole thing.
Anyway, I vaguely remember reading something about Proxmox vs XCP-ng, and I believe the argument for XCP-ng was that it is a bare-metal hypervisor, just like ESXi? I really don't remember most of what I read, but I had a note to look into XCP-ng once the new major version (8.4, I believe?) was out, which happened a few months ago IIRC.
 

TrevorH

Member
Oct 25, 2024
I believe the argument for XCP-ng was that it is a bare-metal hypervisor, just like ESXi?
I wouldn't concentrate on that; their performance is pretty much the same, so the type of hypervisor doesn't really affect what you want them to do; they just do it. My own opinion is that Xen is probably a dead end, but it might take a long time to die.
 

alaricljs

Active Member
Jun 16, 2023
+1 vote for Proxmox. At my employer we're switching from ESX to PVE after doing serious research; XCP-ng just wasn't good enough. And PVE has more users, meaning more community support.

On the M.2 -> PCIe adapters: there are versions with the slot directly on the M.2 board that might line up with the case's slot openings for the one on the bottom of your mobo. I'll dig up my AliExpress link tomorrow.
 

Octopuss

Active Member
Jun 30, 2019
Czech Republic
I wouldn't concentrate on that; their performance is pretty much the same, so the type of hypervisor doesn't really affect what you want them to do; they just do it. My own opinion is that Xen is probably a dead end, but it might take a long time to die.
Why would it be dead? It's in active development, the team is clearly big, the forums are very active... what am I missing?
I get that you prefer KVM, but for home use, is there even any difference? Sure, things might be different for enterprise use cases, but this is just a lousy home box, heh.
 

homeserver78

Active Member
Nov 7, 2023
Sweden
Mainly I would like to replace the SSDs with NVMe ones for a slight speed boost when copying large files, because I am an extremely impatient person.
Depending on how your zpools are set up, you might not see any speed boost with NVMe drives:

10GbE => 10 Gbit/s => 1.25 GB/s
SATA => 6 Gbit/s => 0.75 GB/s (say 0.5 GB/s per SATA SSD to be more realistic)

So with four drives in raidz1, which gives a streaming read and write speed of [number of data drives in the pool] times [speed of the slowest drive], you're likely already limited by the network. (Or at least not by the drive interface.)
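In numbers (a rough model; real ZFS throughput varies with recordsize and fragmentation, and the 2.5 GB/s per-NVMe figure is an assumed sequential speed):
```python
# Streaming throughput of a raidz pool vs. the 10GbE ceiling.
NET_LIMIT_GBPS = 10 / 8  # 10GbE -> 1.25 GB/s

def pool_streaming_gbps(drives: int, parity: int, per_drive_gbps: float) -> float:
    """Streaming speed ~= number of data drives x speed of the slowest drive."""
    return (drives - parity) * per_drive_gbps

sata = pool_streaming_gbps(4, 1, 0.5)  # four SATA SSDs in raidz1
nvme = pool_streaming_gbps(4, 1, 2.5)  # four PCIe 3.0 NVMe drives (assumed speed)
print(f"SATA pool ~{sata} GB/s, NVMe pool ~{nvme} GB/s, network cap {NET_LIMIT_GBPS} GB/s")
# Both already exceed 1.25 GB/s, so 10GbE is the bottleneck either way.
```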

No native bifurcation support, so...AICs that implement bifurcation and then SSDs plugged into them.
Two M.2 slots per AIC.
No such thing as an AIC that implements bifurcation. You either have UEFI support for bifurcation or you need to use a PCIe switch card -- and the latter can certainly be four M.2 slots on an x8 AIC (random example: NV9524-4I).