Opinions wanted: OS choice, and CPU power versus potential RAM?

Xeon E3-v6 or Xeon D 1521?

  • E3-v6: 2 votes (66.7%)
  • Xeon D 1521: 1 vote (33.3%)
  • Total voters: 3

jppowers

New Member
May 25, 2017
Hello STH community.

I'm currently spec'ing out a new NAS/home server, and I'm stuck deciding between more CPU power now versus higher potential RAM capacity later...

For some background, my current build is ZFS on Linux with four 4TB Seagate NAS drives in raidz1 (I know, I regret it, but I wanted maximum space), and it's been running strong for about 3 years now. Originally it was a Linux desktop/home server, but it has since been relegated to just a server. I'm down to barely 1TB of free space and will be pulling the trigger on a new build soon.

The NAS acts as network storage, a media server (Plex shared with ~10 friends, and likely to grow whenever I finally get gigabit fiber in the neighborhood), and a Blu-ray backup device (MakeMKV, then Handbrake). In the future I'd also like to set up a single VM as a testing environment before I push changes to my VPS, so CPU matters more to me than it might for others building similar NAS devices.

This time around I'm going far more "purpose-built server" than I did last time. I'm looking at Xeon options to get ECC, an LSI 9207 HBA, and a chassis with 8 hot-swap bays, which will be populated with eight 6TB drives in raidz2. I've got most of my options decided, but I'm currently debating between a Xeon D board and a C236 board with a Xeon E3 v6 (probably a 1240).

So, my question for all of you STH folks:
  • Hardware: a Xeon D-1521 based board or a Xeon E3-1240 v6?
    • The price delta between the motherboard/CPU combos is negligible, about $50. Really the decision comes down to CPU power (Xeon E3) versus higher potential RAM in the future (Xeon D). Initially I'm going with 64GB either way, but the potential to grow later is tempting.
    • My dilemma comes from the "tried and true" rule of thumb that you want 1GB of RAM for every 1TB of ZFS storage (rough numbers are sketched just after this list). My plan is that as free space evaporates I'll start replacing the 6TB drives with larger ones (and a butt load of waiting for resilvering). At 64GB of RAM I don't have to worry about the rule even with eight 8TB drives, but if by the time I'm replacing drives I want to go to 10TB drives... I'm looking at an all-new rig all over again if I go with the Xeon E3. That said, the Xeon D 1521 would net me a potential 128GB of RAM, but it's down on CPU power compared to the E3s by a good margin, and with seemingly no second-generation Xeon D announcements on the horizon I'm concerned it won't hold up. My current machine runs at 100% a good bit and the new one will be stressed even more, so I do want the power.
    • CPU power versus RAM also affects my VM hopes, as more total RAM means I won't be "stealing" from ZFS if I decide to roll a VM with a butt load of RAM. That said, the core counts are the same, but single-thread performance is far higher on the Xeon E3.
All that said, I'm starting with far more space this time around, so there's also the chance that by the time I'd need to upgrade capacity I'd have built an all-new rig anyway. ~12TB lasted me 3 years; ~32TB would probably last even longer...
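
To put rough numbers on that rule of thumb, here's a back-of-envelope sketch (Python just for the arithmetic; the drive sizes are the hypothetical upgrade paths above, not a recommendation):

```python
# Back-of-envelope check of the informal "1GB RAM per 1TB of ZFS" guideline
# for the raidz2 configurations being considered.

DRIVES = 8          # 8-bay chassis, all bays populated
RAIDZ2_PARITY = 2   # raidz2 gives up two drives' worth of space to parity

for drive_tb in (6, 8, 10):
    raw_tb = DRIVES * drive_tb
    usable_tb = (DRIVES - RAIDZ2_PARITY) * drive_tb
    rule_of_thumb_gb = raw_tb          # ~1GB RAM per TB of raw pool
    print(f"{DRIVES} x {drive_tb}TB raidz2: raw {raw_tb}TB, "
          f"usable ~{usable_tb}TB, rule-of-thumb RAM ~{rule_of_thumb_gb}GB")

# 8 x 6TB  -> 48TB raw  (~36TB usable), ~48GB  -> fine with 64GB
# 8 x 8TB  -> 64TB raw  (~48TB usable), ~64GB  -> right at the 64GB ceiling
# 8 x 10TB -> 80TB raw  (~60TB usable), ~80GB  -> only the 128GB platform clears it
```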

So, I turn to the combined decades of experience that's here. Thoughts on more CPU power now versus more RAM later?
 

Evan

Well-Known Member
Jan 6, 2016
If your current AMD box is at 100%, maybe you need an E5 CPU?
Some Xeon-D boards have an onboard LSI 2116 at a low price; I'm not sure about its performance.
Both options will be power efficient, and if you do go Xeon-D, the RDIMMs can be reused on an E5 if you need to upgrade later ;)
For ZFS, even with the biggest drives, 64GB is a lot of RAM!
 

jppowers

New Member
May 25, 2017
Evan said:
If your current AMD box is at 100%, maybe you need an E5 CPU?
Some Xeon-D boards have an onboard LSI 2116 at a low price; I'm not sure about its performance.
Both options will be power efficient, and if you do go Xeon-D, the RDIMMs can be reused on an E5 if you need to upgrade later ;)
For ZFS, even with the biggest drives, 64GB is a lot of RAM!
When the AMD hits 100%, it's for an "understandable" reason: multiple Plex transcodes, for instance. It's not bad, just enough that a bit more CPU oomph would be appreciated. I did consider going to an E5, but I'm being a bit picky about things: I really want all-new hardware and to stay away from anything used, and the price jump to an E5 combined with the TDP increase is keeping me away.

I did check out some of the Xeon-D options with various LSI chips, but the price delta for those is kind of ridiculous. Specifically, most of the manufacturers only include them with CPUs I don't want.

As for so much RAM... it's mostly just because I want to knock out any potential growth problems now. I'd prefer to leave it so that if/when I need to upgrade, it's just the HDDs, as they're about half the cost anyway.
 

gzorn

Member
Jan 10, 2017
I've got almost exactly the same setup (though still in testing): Xeon E3-1240, maxed-out RAM, LSI 9207, but smaller drives (3-4TB).
Just remember that with raidz you'll need to replace all of the drives with the larger variant before you actually see the increased available size.
I like the setup, though if I had to do it again I might have gone with an E5 for better virtualization features (PCIe passthrough for all expansion cards). For SR-IOV to work, my 10GbE card is plugged into a smaller PCIe slot that runs through the chipset.
An E5 supports more RAM, too, but the power consumption is higher. What's the most important factor for you? You're being pulled in several directions.
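
A quick illustration of why that is (just a sketch of the size accounting, not how ZFS actually allocates; a raidz vdev effectively treats every member as being the size of its smallest drive):

```python
# Sketch of why a raidz vdev doesn't grow until every drive is replaced:
# the smallest member caps the usable size of all of them.

def raidz_usable_tb(drive_sizes_tb, parity=2):
    """Approximate usable space of one raidz vdev (ignores metadata overhead)."""
    effective = min(drive_sizes_tb)              # smallest drive caps them all
    return (len(drive_sizes_tb) - parity) * effective

print(raidz_usable_tb([6] * 8))                      # 36 -> all eight 6TB drives
print(raidz_usable_tb([10, 10, 10, 6, 6, 6, 6, 6]))  # 36 -> partial upgrade, no gain yet
print(raidz_usable_tb([10] * 8))                     # 60 -> only after all eight are 10TB
```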
 

K D

Well-Known Member
Dec 24, 2016
I am currently running one server with a Xeon D-1518 and one with an E3 v6. Both are AIOs (all-in-ones). The Xeon D has 16 4TB drives, 16GB RAM, and 2 vCPUs; the E3 has 12 8TB drives, 6GB RAM, and 2 vCPUs.
Both saturate 1GbE networks and I don't see any bottlenecks with my current setup. They are pure media and file servers; I don't use them for any VM storage.
 

K D

Well-Known Member
Dec 24, 2016
I don't have a combined power utilization figure for the E3 system, as its drives are in a JBOD, but the Xeon D system averages around 130W.
 

gea

Well-Known Member
Dec 31, 2010
There is no strict relation in ZFS between required RAM and pool size.
Oracle, with Solaris (the origin of ZFS), states a minimum of 2 GB regardless of pool size; that is enough for stable operation of Solaris.

But because ZFS spreads its data blocks all over a pool, even with a sequential video stream your pool performance depends more on IOPS than on sequential disk performance. This is where RAM counts, as ZFS uses all available free RAM for caching metadata and small random reads, so more RAM means more performance. If you count roughly 1% of all data as metadata, you arrive at the 1GB-per-TB rule for active data if you want to cache at least all of the metadata. Some workloads, e.g. databases or mail, want even more than that, since you want to cache data as well. Additionally, ZFS by default uses up to 4 GB of RAM as write cache.

As even Open-ZFS is still based on Solaris regarding memory management, the RAM needs on Solaris or its free forks are a little lower than on other platforms, which often means faster. In general I would not go below 4GB, with 8GB as a good minimum; beyond that it depends on usage patterns. With a mainly video use case, I would add a fast NVMe or SSD as an L2ARC device, mainly because the L2ARC can cache/read ahead sequential data (you must enable this feature). The size of the L2ARC should not exceed 5-10x RAM, as you need RAM to organise the SSD cache.
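
Putting numbers on that last guideline (just arithmetic on the 5-10x-RAM cap above, using the 64GB this build is planned around):

```python
# Quick arithmetic on the L2ARC sizing guideline above (keep it within ~5-10x RAM).
# These are the rules of thumb from this post, not hard ZFS limits.

ram_gb = 64
l2arc_min_gb = 5 * ram_gb      # lower end of the guideline
l2arc_max_gb = 10 * ram_gb     # upper end; beyond this the cache headers eat too much ARC

print(f"With {ram_gb}GB RAM, size the L2ARC at roughly {l2arc_min_gb}-{l2arc_max_gb}GB")
# -> With 64GB RAM, size the L2ARC at roughly 320-640GB
```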

So yes, use as much RAM as you can use or afford if you want a fast system. With low RAM, ZFS is slow, or at least slower than older/non-copy-on-write filesystems.

Regarding the OS, my preference is Oracle Solaris and its free forks, as the integration of OS, ZFS, and services is best there. And RAM is more relevant than CPU power, where frequency matters more than the number of cores.
 

jppowers

New Member
May 25, 2017
The whole reason I'm asking this is a nagging fear that RAM might be more important to me in the long run than I'd otherwise expect, but it sounds more and more like that's incorrect and I'd be fine with 64GB for quite some time.

gzorn said:
I've got almost exactly the same setup (though still in testing): Xeon E3-1240, maxed-out RAM, LSI 9207, but smaller drives (3-4TB).
Just remember that with raidz you'll need to replace all of the drives with the larger variant before you actually see the increased available size.
I like the setup, though if I had to do it again I might have gone with an E5 for better virtualization features (PCIe passthrough for all expansion cards). For SR-IOV to work, my 10GbE card is plugged into a smaller PCIe slot that runs through the chipset.
An E5 supports more RAM, too, but the power consumption is higher. What's the most important factor for you? You're being pulled in several directions.
Oh, I'm aware of the need to replace all drives with raidz. It's the reason I've also waffled on maybe using an alternative to ZFS, such as unRAID. I've decided I'll stick with ZFS on Linux and just deal with that cost when I get there.

Ultimately I don't plan on doing much virtualization, probably just one VM that will somewhat mimic my DigitalOcean VPS so I can do some basic testing of things before pushing it live. I might experiment with Docker a bit but I don't expect much.

At the end of the day my real goal is maximizing performance in a low TDP envelope. I think I'm going Xeon E3v6.

gea said:
There is no strict relation in ZFS between required RAM and pool size.
Oracle, with Solaris (the origin of ZFS), states a minimum of 2 GB regardless of pool size; that is enough for stable operation of Solaris.

But because ZFS spreads its data blocks all over a pool, even with a sequential video stream your pool performance depends more on IOPS than on sequential disk performance. This is where RAM counts, as ZFS uses all available free RAM for caching metadata and small random reads, so more RAM means more performance. If you count roughly 1% of all data as metadata, you arrive at the 1GB-per-TB rule for active data if you want to cache at least all of the metadata. Some workloads, e.g. databases or mail, want even more than that, since you want to cache data as well. Additionally, ZFS by default uses up to 4 GB of RAM as write cache.

As even Open-ZFS is still based on Solaris regarding memory management, the RAM needs on Solaris or its free forks are a little lower than on other platforms, which often means faster. In general I would not go below 4GB, with 8GB as a good minimum; beyond that it depends on usage patterns. With a mainly video use case, I would add a fast NVMe or SSD as an L2ARC device, mainly because the L2ARC can cache/read ahead sequential data (you must enable this feature). The size of the L2ARC should not exceed 5-10x RAM, as you need RAM to organise the SSD cache.

So yes, use as much RAM as you can use or afford if you want a fast system. With low RAM, ZFS is slow, or at least slower than older/non-copy-on-write filesystems.

Regarding the OS, my preference is Oracle Solaris and its free forks, as the integration of OS, ZFS, and services is best there. And RAM is more relevant than CPU power, where frequency matters more than the number of cores.
This is really helpful, thanks. Part of the plan/reason for getting a separate LSI HBA is so I'll have the onboard SATA available for an L2ARC and the M.2 slot for the OS drive. I have a Samsung 950 Pro 512GB in my gaming desktop that will be replaced with a 960 Pro at some point in the near future. I've only done cursory research into it, but from what I can tell it's not difficult to set up an L2ARC after the fact, correct?
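
From the little reading I've done, the mechanics look simple enough. Here's what I think the after-the-fact setup would be (pool and device names are made up, and it's wrapped in Python only to keep this thread's examples in one language; please correct me if I've got it wrong):

```python
# What my cursory research suggests for adding an L2ARC to an existing pool
# (hypothetical pool/device names).
import subprocess

POOL = "tank"               # hypothetical pool name
CACHE_DEV = "/dev/nvme0n1"  # hypothetical NVMe device to use as the L2ARC

# Attach the cache device to the existing pool; no rebuild or downtime needed
subprocess.run(["zpool", "add", POOL, "cache", CACHE_DEV], check=True)

# Verify the cache device shows up under the pool
subprocess.run(["zpool", "status", POOL], check=True)

# It can be detached again just as easily if the SSD is needed elsewhere:
# subprocess.run(["zpool", "remove", POOL, CACHE_DEV], check=True)

# (The sequential/prefetch caching gea mentions is apparently the
#  l2arc_noprefetch=0 module parameter on ZFS on Linux; untested on my end.)
```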