Planned build: criticism welcome - server + workstation


lihp

Goal
  • A fast Windows workstation environment for graphics (full Adobe Suite + 3D rendering)
  • A Linux development and test environment for web work and services

Current system and issues
Status:
  • several servers spread across the internet (OpenBSD, CentOS)
  • a workstation (almost an old-timer by now) with local files
  • a far too old local server (unfit for the job and ready for the museum)
Issues:
  • no local test environment
  • no local staging environment
  • slow local file access for workstation
  • subpar backup
  • no proper scalability
Idea
Proper server environment, starting with one server as soon as possible. Tasks for the initial server:
  • File server - probably XFS locally (for performance), shared via NFS or Samba (haven't tested SMB Direct yet).
  • iSER target for the Windows workstation so I can run diskless workstations over EDR IB (I am fairly sure a current Epyc server can saturate 100G; the question is whether I sized it correctly)
  • easy multi-boot and multi-OS installation via iSER for the workstation
  • decent backups and daily snapshots of the workstation on the server (easy, since the workstation's drive lives on the server)
  • separate storage backup (encrypted Borg backup) to the spinning disks (see the sketch below)
  • archiving of backups to Glacier storage (so I can roll back even to very old data)
  • optionally Windows Server 2019 in KVM, which I occasionally need for projects
  • containers and maybe a KVM guest for testing and the like
The server will be connected to the workstation via Mellanox EDR IB. Later, if needed, another server can be added and some tasks moved to it.
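As a rough illustration of the Borg-plus-Glacier idea above, a minimal sketch; the repository path, source path, bucket name, passphrase handling and the DEEP_ARCHIVE storage class are placeholders and assumptions, not part of the actual plan:

```bash
# Encrypted Borg backup of the workstation data to the spinning-disk pool (placeholder paths)
export BORG_PASSPHRASE='change-me'                      # assumption: passphrase via env var for cron use
borg init --encryption=repokey /srv/backup/borg 2>/dev/null || true
borg create --stats --compression zstd \
    /srv/backup/borg::workstation-{now:%Y-%m-%d} /srv/fileserver/workstation

# Push the day's archive to Glacier-class S3 storage for long-term rollback
borg export-tar /srv/backup/borg::workstation-$(date +%F) - | \
    aws s3 cp - s3://example-archive-bucket/workstation-$(date +%F).tar \
    --storage-class DEEP_ARCHIVE
```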

Server
Unsure if this is the most cost-efficient build (see questions):
  • Epyc 7302P (high memory bandwidth); Epyc in general for the extreme I/O: PCIe 4.0, the EDR MCX card, NVMe/SSD RAID, ...
  • 128 GB ECC RAM
  • 4-8+ SSD/NVMe drives in RAID 1 for iSER
  • spinning disks in RAID 1 for backups and archive
  • the rest Supermicro server components (H12SSL-i or similar)
  • MCX454-ECAT
  • CentOS (hoping for Rocky Linux ;)), optionally Windows Server 2019 in KVM
Workstation
I am actually quite confident in this sizing, just unsure whether to wait or go ahead (see questions below):
  • TR 3960X or TR Pro 3945WX - 8-12 cores are fine since there is no video editing, only large graphics work
  • 64-128 GB ECC
  • GPU
  • MCX454-ECAT
  • Windows 10 Pro Workstation or Enterprise
Questions/Thoughts
Overall I can wait until February or March, especially considering current pricing.
  • Basically the server is a SPOF; then again, with the backup archive on Glacier I am covered in the worst case, with at most ~1.5 days of downtime.
  • Normally, without worrying about cost, I would build a second server for containers and virtual machines. But that also needs CPU resources, just like iSER does, so putting everything on one server actually makes sense.
  • For the server I could go with a 7282 on a 2P board, so there would be an upgrade path. But the 7302P is chosen for its single-core performance so it can saturate 100G, and instead of adding a second processor, a second server would make more sense...
  • Alternatively I could wait for the next Epyc and TR releases by AMD, hoping for a good 8-core Epyc or, even better, an inexpensive 12-core TR Pro for both server and workstation. (When might the "new" TR Pro appear, given the old ones aren't even available yet?)
  • The workstation is preferably TR Pro for single-core performance and decent I/O, e.g. for the MCX card and several GPUs. I don't see much alternative to TR Pro (or TR) there.
  • There are no refurbished PCIe 4.0 servers available yet. Any refurbished parts I see are actually more expensive than new parts from some good, reliable sources (applies to Epyc Rome CPUs, RAM, boards, ...).
  • I am unsure whether the server RAM is too much or not enough for these tasks...
  • I am not sure about achievable iSER performance over EDR - 12.5 GB/s theoretical (so roughly 10+ GB/s net) is no small feat (see the fio sketch after this list).
  • ...
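On the EDR question: once the iSER disk is attached, a large-block fio run against the block device would be the quickest way to see how close the link gets to line rate, e.g. from a Linux test client (the workstation itself will run Windows). A minimal sketch; the device path is a placeholder, and a read test is non-destructive:

```bash
# Sequential 1 MiB reads at a reasonably deep queue against the iSER-attached device (placeholder path)
fio --name=iser-seq-read --filename=/dev/sdX --rw=read --bs=1M \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based --group_reporting
```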
Any input or thoughts welcome.

TIA
 

NPS

File server - probably XFS locally (for performance), shared via NFS or Samba (haven't tested SMB Direct yet).
Is SMB Direct available on the server side already?

I played with NFSoRDMA using BCM57414 NICs with RoCEv2 (direct connection, no switch) on Ubuntu with ZFS and found it to be slightly slower than NFS over TCP :( I also saw very high CPU usage with a single client thread, even at 25GbE. A Xeon D-1518 was too slow to serve at line speed from ARC; my Xeon W-2135 is much faster. So I am really excited about the results you might get from proper RDMA on IB EDR with a CPU that has less single-thread performance than my W-2135.
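For anyone wanting to reproduce that comparison, a minimal NFS-over-RDMA sketch following the in-kernel NFS/RDMA setup; the export path, subnet and mount point are placeholders, and the NFS server is assumed to be running already:

```bash
# --- Server: export a directory and enable the NFS/RDMA transport ---
echo "/tank/projects 192.168.100.0/24(rw,async,no_subtree_check)" >> /etc/exports
exportfs -ra
modprobe svcrdma
echo "rdma 20049" > /proc/fs/nfsd/portlist      # listen for NFS/RDMA on port 20049

# --- Client: mount with the RDMA transport instead of TCP ---
modprobe xprtrdma
mount -t nfs -o rdma,port=20049,vers=4.2 192.168.100.1:/tank/projects /mnt/projects
```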

I know nothing about iSER, but are writes sync or async? You probably already know that sync write speeds are really slow in general, even on Optane/NVRAM, and especially over the network. Do you have any write speed requirements?
 

lihp

Is SMB Direct available on the server side already?
Yes, except that documentation for SMB Direct on Linux is super slim. It also ideally needs kernel 5.14+ (it's experimental until then). From hearsay, SMB Direct should be fast, but it needs testing (probably not any time soon, since I need the server in production).

I played with NFSoRDMA using BCM57414 NICs with RoCEv2 (direct connection, no switch) on Ubuntu with ZFS and found it to be slightly slower than NFS over TCP :( I also saw very high CPU usage with a single client thread, even at 25GbE.
Thanks for the heads-up. My first tests showed that file serving in general puts immense stress on the CPU at 100G. Only a few CPUs and filesystems can handle single-threaded sync loads at 100G/EDR.

A Xeon D-1518 was too slow to serve at line speed from ARC; my Xeon W-2135 is much faster. So I am really excited about the results you might get from proper RDMA on IB EDR with a CPU that has less single-thread performance than my W-2135.
Sure thing, curious myself.

I know nothing about iSER, but are writes sync or async?
Sync. iSER is just iSCSI Extensions for RDMA. It sounds like a lot, but it is pretty much standard fare coming from HPC environments. Basically you have three ways to attach SAN storage "like a local disk": iSER, SRP and plain iSCSI. In the past SRP was the way to go, but Mellanox has optimized iSER a lot, so it is currently superior performance-wise. And plain iSCSI is just iSER without RDMA - much slower, like 90% slower.
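For reference, a minimal LIO/targetcli sketch of what such an iSER export can look like on the Linux side; the backstore device, IQN and IP are placeholders, ACL setup is omitted, and the Windows initiator side is a separate topic:

```bash
# Export a block device (e.g. an md RAID array) as an iSCSI target (placeholder names)
targetcli /backstores/block create name=ws_disk dev=/dev/md/fast
targetcli /iscsi create iqn.2021-02.local.server:ws-disk
targetcli /iscsi/iqn.2021-02.local.server:ws-disk/tpg1/luns create /backstores/block/ws_disk
targetcli /iscsi/iqn.2021-02.local.server:ws-disk/tpg1/portals create 192.168.100.1
# Flip the portal from plain iSCSI (TCP) to iSER (RDMA)
targetcli /iscsi/iqn.2021-02.local.server:ws-disk/tpg1/portals/192.168.100.1:3260 enable_iser boolean=true
targetcli saveconfig
```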

Yet I am unsure how I will fare at home with iSER or SRP. At work - when we do this - we have perfect equipment; here... I will have to see. My hope is to never again care about local disks in workstations and instead manage everything server-side, including auto-archiving of old projects, backups of PST files, the documents folder, ... while still being able to install any additional OS should I want or need to.

You probably already know that sync write speeds are really slow in general, even on Optane/NVRAM, and especially over the network. Do you have any write speed requirements?
There was a Reddit thread which "ignited" this idea. The original plan was to go ZFS, but it turned out that ZFS requires too much CPU. Yet the idea of iSER on an NVMe RAID 1 array of Samsung 980 Pros (like 8x 980 Pro) is IMHO gold. I give it a 75:25 chance of working out. That is where I am still unsure and hope for input from anyone with more experience.

Worst case: move half of the NVMe drives from the server back into the workstation and do the rest via server shares.
 

NPS

Yet the idea of iSER on an NVMe RAID 1 array of Samsung 980 Pros (like 8x 980 Pro) is IMHO gold.
For sync writes this will be slow. Like really slow! Local sync write performance of a single 970 Evo Plus 500GB formatted as ext4, tested with fio at 1T1Q with bs=4K, is about 1-2MB/s. A pair of Optane P4801X 375GB in a ZFS mirror does ~100MB/s locally and ~50MB/s via NFS/25GbE. That drop should be smaller with iSER, I guess, but you need the local performance to start with. I am not sure you had this in mind. You do not necessarily need Optane, but you want at least some kind of power-loss protection in your NVMe. Your hardware at work will have that; the Samsung 980 Pro doesn't.
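For anyone who wants to reproduce those numbers, a sketch of that kind of fio run (single thread, queue depth 1, 4 KiB writes with an fsync after each one; the target path is a placeholder and absolute results vary by drive and filesystem):

```bash
# 1 thread, QD1, 4 KiB synchronous writes - the worst case for drives without power-loss protection
fio --name=sync-write-4k --filename=/mnt/test/fio.dat --size=1G \
    --rw=write --bs=4k --ioengine=psync --numjobs=1 --iodepth=1 \
    --fsync=1 --runtime=60 --time_based --group_reporting
```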
 

lihp

For sync writes this will be slow. Like really slow! Local sync write performance of a single 970 Evo Plus 500GB
<and more>
Bummer. I didn't think much about small files, since the 980 Pro offers lower latency. Yet of course it is still far from Optane.

One way: drop the NVMe drives and invest in two Optane 905Ps. The other way: go ZFS, accept reduced bandwidth and increase server RAM.

Anyone got a different idea?
 

NPS

I think what could really fit at least part of your expectations is the upcoming Optane P5800X. Maybe a third tier in your storage hierarchy could help, too. You would only need a very small part of your Optanes for the SLOG: create two partitions per drive and you have a small one for the SLOG and the rest for the really fast storage. This helps when the drives that actually make up the pool are too slow to keep up with the Optanes' write performance. I have a much smaller setup than you plan to build. For power (and money) saving purposes my main drives are 3x 2TB SATA QLC SSDs in RAID-Z1, so my main pool is limited to ~450MB/s writes at larger block sizes, while my Optane pool reaches ~900MB/s. These numbers are over NFS/25GbE, but locally they wouldn't be much higher. Read speed is hard to judge because pretty much everything is in ARC, so I reach line speed most of the time.
So I guess it all depends on your specific usage patterns, if you don't want to massively overbuild. How much fast storage do you need? How much do you move per day? How much will you read that is likely not in ARC? Maybe a setup of 2x P5800X, 2x NVMe with PLP and some HDDs is your personal sweet spot? Depending on your usage patterns you could even add a simple 500GB NVMe as L2ARC, especially for the HDD pool.
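A rough zpool sketch of that kind of layout; device names, pool names and the 20 GiB SLOG slice are placeholders, not a recommendation:

```bash
# Partition each Optane: a small SLOG slice plus the rest as fast storage (illustrative sizes)
parted -s /dev/nvme0n1 mklabel gpt mkpart slog 1MiB 20GiB mkpart fast 20GiB 100%
parted -s /dev/nvme1n1 mklabel gpt mkpart slog 1MiB 20GiB mkpart fast 20GiB 100%

# Bulk pool on the slower drives, with the Optane SLOG slices mirrored as log devices
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1

# Fast pool on the remaining Optane capacity
zpool create fastpool mirror /dev/nvme0n1p2 /dev/nvme1n1p2

# Optional: a cheap NVMe as L2ARC for the bulk pool
zpool add tank cache /dev/nvme2n1
```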
 

lihp

I think what could really fit at least part of your expectations is the upcoming Optane P5800X.
<snip>
Just had a look at work ;). It seems they use the RAIDIX solution with DC SN640 drives (U.2). Not sure whether I was looking at a test server, though. Still, it looks like a plan. Also, the SN640s aren't expensive at all. Just one thing:

HOW THE F**** do I connect 4 U.2 drives to a Supermicro H12SSL-i or H12SSL-C board (without buying $300 controllers for 4 ports)?
 

BlueFox

A SlimSAS to U.2 cable gets you the first 2 drives. You would then need 2x M.2 to U.2 adapters for the other 2 drives.

If you get an H12SSL-NT, you could go to 6 drives without using the PCIe slots, since it has 2x SlimSAS ports (each good for 2 NVMe drives).
 

lihp

A SlimSAS to U.2 cable gets you the first 2 drives. You would then need 2x M.2 to U.2 adapters for the other 2 drives.

If you get an H12SSL-NT, you could go to 6 drives without using the PCIe slots, since it has 2x SlimSAS ports (each good for 2 NVMe drives).
Thank you so much!
 

lihp

If you get an H12SSL-NT, you could go to 6 drives without using the PCIe slots, since it has 2x SlimSAS ports (each good for 2 NVMe drives).
I figure you mean the H12SSL-CT?

Ah, sorry, now I see - it's the NT.
 

NPS

I have been watching the H12SSL-i for about a year now (undecided whether I really want to invest in an EPYC compute server) and found hints of a cable named CBL-SAST-0953, but I really don't know where you can buy one... But beware, NVMe cabling is quite an expensive business: loads of connectors to choose from, hard to find the perfect combination, most of the stuff is hard to source, and not at the price I expected for a "simple cable".
Another interesting solution is a simple PCIe card without a switch (so it requires bifurcation, but most modern server boards have that), like this one: Delock Produkte 89030 Delock PCI Express x16 Karte zu 4 x intern SFF-8654 4i NVMe - Bifurcation. Same problem: finding the cables at a good price.
These cable problems mean that M.2 is much easier and usually substantially less expensive. But U.2 drives have their advantages in performance, capacity, ease of cooling... :/
 

lihp

@BlueFox and @NPS, thank you for the input. I did my homework; here is my final build so far, with comments:

Actually I am quite content with the outcome for a high-performance file + backup + [options] server. With 4x WD SN640 drives I should hit 2 million IOPS (4K), up to 4 million IOPS with 8 drives, which roughly translates to 8 GB/s (4 drives) up to 16 GB/s (8 drives) total single-threaded file server throughput for the fast storage (of course less once network overhead is counted). :D
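A quick sanity check of that IOPS-to-throughput conversion (pure arithmetic, assuming 4 KiB per I/O and decimal GB):

```bash
# 2M IOPS x 4096 B ≈ 8 GB/s, 4M IOPS x 4096 B ≈ 16 GB/s
echo "4 drives: $(( 2000000 * 4096 / 1000000000 )) GB/s, 8 drives: $(( 4000000 * 4096 / 1000000000 )) GB/s"
```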

Epyc Server:
  • CPU: AMD Epyc 7262 or 7302P. Alternatively the TR Pro "WEPYC" 3955WX (hesitant; maybe wait a bit longer for Milan CPUs).
    The main reason for Epyc is the number of PCIe lanes, which ensures future upgrades; the 7262 or 7302P because of CPU architecture and price. Both offer 204.8 GB/s memory bandwidth per socket, whereas CPUs like the 7232P, 7252, 7282 and similar only get 85.3 GB/s. For high-performance file serving this makes a difference.
  • Mainboard: SuMi H12SSL-i with 1x CBL-SAST-0953 (verified with support myself; can be bought from any SuMi distributor)
  • RAM: 64 or 128 GB, 3200 ECC
    If I deploy spinning disks, I will probably use ZFS for the spinning disks only - ZFS loves RAM. I'll also test ZFS on the NVMe array, but most likely ditch it for production. The rest of the RAM is for containers, VMs, ...
  • OS storage: 2x KIOXIA EXCERIA SSD 1000GB, M.2
    RAID 1; the Kioxia simply for its low price per GB paired with its low-power standby feature in case of outages.
  • Fast data storage: 4, 6 or 8 WD DC SN640 1.6TB 2DWPD drives
    All together in RAID 5 for 2-4+ million IOPS (see the sketch after this list). The WD drives simply for reliability and low latency (85 μs) compared to consumer NVMe (300+ μs). 2 drives (maybe 4 with the NT board) connected to the NVM Express connectors, both in PCIe mode.
  • Backup storage: 4x WD Red+ 4TB
    Simply for hot restores, for collecting overnight backups from other machines as well, and as the platform for pushing daily diff backups to Glacier or a similar archive store.
  • NIC (besides the onboard ones): MCX455A-ECAT (IB/EDR)
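A minimal sketch of the fast-storage RAID mentioned above, using plain mdadm + XFS rather than the RAIDIX stack mentioned earlier (device names are placeholders; chunk size and mkfs options would still need tuning and testing):

```bash
# Software RAID 5 across four U.2 NVMe drives (placeholder device names)
mdadm --create /dev/md/fast --level=5 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
mkfs.xfs /dev/md/fast                      # mkfs.xfs picks up the md stripe geometry automatically
mkdir -p /srv/fast && mount /dev/md/fast /srv/fast
mdadm --detail --scan >> /etc/mdadm.conf   # persist the array configuration
```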

Any additional remarks or criticism welcome.

Edit: changed the mainboard from H12SSL-NT to -i after NPS's remark.
 

lihp

I have been watching the H12SSL-i for about a year now (undecided whether I really want to invest in an EPYC compute server) and found hints of a cable named CBL-SAST-0953, but I really don't know where you can buy one...
Listed it above and had the cable confirmed by SuMi. In my experience SuMi boards are good - they do what you expect and deliver performance without too many bells and whistles. The Epyc platform with PCIe 4.0 is actually a reason for me to professionalize my server environment. Endless PCIe lanes (I/O) coupled with great computing power (IPC + cores) enable HPC machines, which are to some extent even cheaper and far more reliable than building one insane workstation to cover all requirements.

The H12 SuMi boards are also all ready for Milan Epyc - so that offers a decent upgrade perspective.

But beware, NVMe cabling is quite an expensive business: loads of connectors to choose from, hard to find the perfect combination, most of the stuff is hard to source, and not at the price I expected for a "simple cable".
Worst case: order it from Ingram Micro as an original SuMi part. It's all good now - though I almost went crazy tracking it down.

Another interesting solution is a simple PCIe card without a switch (so it requires bifurcation, but most modern server boards have that), like this one: Delock Produkte 89030 Delock PCI Express x16 Karte zu 4 x intern SFF-8654 4i NVMe - Bifurcation. Same problem: finding the cables at a good price.
You actually need double bifurcation: bifurcation is x16 to x8/x8, double bifurcation is x16 to x4/x4/x4/x4. All H12 SuMi boards have been able to do that for a while now. I am quite confident there will be a SuMi add-on card too, soon (TM). After all, it's a really simple card, and the Delock price is kind of crazy.

These cable problems mean that M.2 is much easier and usually substantially less expensive. But U.2 drives have their advantages in performance, capacity, ease of cooling... :/
With the H12SSL-NT you are good for 4 U.2 drives (just 2 cables at 20 bucks each), optionally 6 drives if you also put adapters in the M.2 slots (about 9 bucks each - thanks @NPS for the hint). You just have to watch cable and adapter pricing once you add more drives.

Apart from that, there may be some great M.2 drives incoming which could make the U.2 drives obsolete, most notably the Sabrent Rocket 4 Plus M.2 and a Gigabyte model, probably in February or March.

EDIT - ADD: on the other hand, the DC SN640 drives come with a RAIDIX license for up to 8 drives, and the cost : performance : reliability ratio is great on those. It feels much better to plan with those than with consumer NVMe.
 

NPS

I think there is one big problem with your new configuration: where do you want to connect your HDDs? The H12SSL-NT has no dedicated SATA ports; all you have for SATA are the two SlimSAS x8 ports. So if you want to connect SATA devices and don't need the 10GBase-T ports, I think you're better off with the H12SSL-i and some other solution for NVMe devices no. 3 & 4. One solution is to use an AOC-SLG3-2M2 for your boot devices. I don't know what's really available for connecting PCIe Gen4 U.2 to M.2 or x8/x16 slots.
 

NPS

CPU: AMD Epyc 7262 or 7302P. Alternatively the TR Pro "WEPYC" 3955WX (hesitant; maybe wait a bit longer for Milan CPUs).
The main reason for Epyc is the number of PCIe lanes, which ensures future upgrades; the 7262 or 7302P because of CPU architecture and price. Both offer 204.8 GB/s memory bandwidth per socket, whereas CPUs like the 7232P, 7252, 7282 and similar only get 85.3 GB/s. For high-performance file serving this makes a difference.
I think it's a really interesting question whether EPYC with its high memory bandwidth or a Xeon W-22xx with higher clock speeds is faster for the "single-user file server" use case. For sure, Xeon W is really tight on PCIe lanes and really uncool, but power efficient. WEPYC should be the performance king, but the boards I have seen so far are really uninspiring for building a file server.
 

lihp

Where do you want to connect your HDDs? The H12SSL-NT has no dedicated SATA ports.
The NVM Express ports on the board can run as either 8x SATA or 2x NVMe - you can switch the port function by jumper. I am actually not sure yet which way I will go; it solely depends on the cost of connecting the 6x U.2 drives. I guess I'll simply let my SuMi distributor decide - they have the SuMi experts. But you are right, I somehow missed the SATA connectors ;).

Thinking about it, you are probably right: get the H12SSL-i, use the 8 SATA ports for spinning disks, attach 2x U.2 drives via the NVMe ports and 4 more via a PCIe card.

One solution is to use an AOC-SLG3-2M2 for your boot devices. I don't know what's really available for connecting PCIe Gen4 U.2 to M.2 or x8/x16 slots.
Yep, I agree. Connecting U.2 drives without a switched backplane in particular is a mess, or so it seems - mostly expensive RAID controllers and the like. The only halfway sane solution: the above-mentioned Delock card, even though I consider it a rip-off.
 

lihp

I think it's a really interesting question whether EPYC with its high memory bandwidth or a Xeon W-22xx with higher clock speeds is faster for the "single-user file server" use case. For sure, Xeon W is really tight on PCIe lanes and really uncool, but power efficient. WEPYC should be the performance king, but the boards I have seen so far are really uninspiring for building a file server.
Intel is PCIe 3.0, and the W's have "only" 64 PCIe lanes. Compared with Epyc's PCIe 4.0 and 128 lanes, the Xeon W's are IMHO castrated on I/O. The Epycs are comparable when it comes to single-core performance (which is what single-threaded loads need). Bottom line: for me it all comes down to PCIe lanes. I need lanes for M.2 and U.2 drives: 6x U.2 (maybe even 8x) plus 2x M.2 is already 32 (40) lanes. With Xeon W I am done at that point (x16 lanes for the MCX454A-ECAT, and x8 remain for chipset/SATA), meaning the Xeon W server would already be full at deployment (a quick lane-budget check is below).
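A quick lane-budget check of that claim (pure arithmetic; assumes x4 per U.2/M.2 drive and x16 for the NIC):

```bash
# 8x U.2 + 2x M.2 + x16 NIC = 56 lanes -> more than a Xeon W-22xx offers (48), comfortable on Epyc (128)
echo "lanes needed: $(( 8*4 + 2*4 + 16 ))"
```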

In fact, if that (EPYC) server is as fast as I hope, I'd prefer to make all future servers diskless too (more servers = more NVMe drives in this server = more IOPS, aka maybe turning this into a SAN server in the future).

I did some testing too:
  • On first-gen Epyc (7001), single-threaded file loads (which run on only one core) cap at approx. 8 GB/s (XFS), 7.8 GB/s (ext4) or 5.2 GB/s (ZFS). On my test system it could probably be pushed a bit further (different CPU, all memory channels populated, ...), but 9 GB/s is the absolute maximum one Epyc core can dish out on that generation.
  • On 7002 I figure anything from 10-12 GB/s per core is feasible (close to 14 GB/s on same-generation consumer CPUs). I am quite sure performance on first-gen Epyc is limited by IPC and core speed, but also by memory latency and bandwidth. So maybe the result on 7002 Epyc is close to 12 GB/s. Who knows, maybe I can squeeze 1-2 more GB/s out of it with tuning... ;)
  • Milan should hit 14+ GB/s for single-threaded loads easily.
  • TR Pro, starting with the 3945WX, should also get close to the 14 GB/s mark. As a WEPYC board I'd choose the M12SWA-TF (which is actually also my future workstation board ;)). That board is IMHO made to be a workstation but also a file server: SATA for spinning disks, 4 NVMe slots and 2 U.2 ports. It almost looks as if it was designed for my goals...
  • The new TR, aka Genesis Peak, may also be an option.

So... go Epyc 7002? Wait for Milan? Wait for the TR Pro release? Wait for next-gen TR Genesis Peak? All of those releases should land in Q1 this year, Q2 at the latest... and prices may drop by then... yada yada yada. These last thoughts bother me the most at the moment ;) - a luxury problem.
 

NPS

Intel is PCIe 3.0, and the W's have "only" 64 PCIe lanes. Compared with Epyc's PCIe 4.0 and 128 lanes, the Xeon W's are IMHO castrated on I/O. The Epycs are comparable when it comes to single-core performance (which is what single-threaded loads need).
I had the Xeon W-22xx in mind, not the W-32xx. They are much cheaper, have only 48 lanes (sufficient for your current plan, see below) and 4 memory channels (benchmarks are needed here, see below), really low idle power consumption, and high turbo clocks >= 4.5GHz. I have no way to benchmark this, but I cannot imagine that EPYC single-thread performance at ~3.4GHz is comparable to a Xeon W-22xx at 4.6GHz. Multi-threaded is the polar opposite, for sure!
PCIe 4.0 is a topic in itself. It's a big advantage for EPYC, but not a single device you plan on using supports it! So either you should at least buy different NVMe drives, or this is no advantage for EPYC at the moment.
I need lanes for M.2 and U.2 drives: 6x U.2 (maybe even 8x) plus 2x M.2 is already 32 (40) lanes.
Personally I would use SATA for the boot drives; that frees 8 lanes. So we have the NIC (16) + 8x NVMe (32) -> 48 lanes in total.
Meaning the Xeon W server would already be full at deployment.
This is especially true for the W-22xx.
In fact, if that (EPYC) server is as fast as I hope, I'd prefer to make all future servers diskless too (more servers = more NVMe drives in this server = more IOPS, aka maybe turning this into a SAN server in the future).
Depending on how many NVMe you'll need then, I totally agree EPYC is the way to go.
I am quite sure performance on first-gen Epyc is limited by IPC and core speed, but also by memory latency and bandwidth.
I would be really interested in the bandwidth part. Since you have access to all these machines, could you maybe run some tests with fewer than 8 memory channels?
Milan should hit 14+ GB/s for single-threaded loads easily
We will see, but I think this will mainly depend on clock-speed and IPC.


So to underline that: I really like EPYC! Like I said before, I want one myself! I just hesitate to buy one because I may regret spending that much money on something I do not really "need". The only reason I keep mentioning Xeon (sorry for being annoying) is that I am really not sure which one is best for your needs. I see the Xeon's advantages in price, idle power and single-thread performance, and EPYC's in everything else. So it all depends on your priorities, I guess.
I wish I could do the benchmarking, but I don't have access to the hardware without buying it, which I won't do. ;)
 

lihp

I just hesitate to buy one because I may regret spending that much money on something I do not really "need". The only reason I keep mentioning Xeon (sorry for being annoying) is that I am really not sure which one is best for your needs.
While planning a new workstation I had some wild ideas for NVMe RAIDs and the like, yet I was always limited by performance, reliability, PCIe lanes, ... Then I planned around a Threadripper instead; there the RAID setup, other components and the OS limited I/O and further options. After all that, I read up on Mellanox cards (which we actually use at work) and was in contact with @BLinux. So my idea became: an InfiniBand-based server as "NVMe" (SAN) over EDR/100G. This idea makes a lot of sense for small files, huge workloads and I/O-intensive operations - areas where consumer NVMe can't deliver.

So in the end, the storage for my workstation plan alone was around 2-2.5K €. The server base costs about 2K €; with upgrades for more uses it is now at ~3.7K €, but it now covers backup, hot backup (maybe even close to real-time), remote backup of several remote servers, virtual machines, containers, a test environment and backup archive management. Additionally it can be upgraded and may well last more than 3 years, most likely even 8+. Future workstations only need a slot for the GPU and the Mellanox card - that's it.

I actually save money this way.

The keys to making it work are the insane number of PCIe lanes, the Epyc core counts and the Mellanox cards. Of course I could go smaller and be fine with 40G or FDR, but I already have the EDR/100G cards (laughably cheap) and the cable. It seems that for just 150-300 € more I may even be able to saturate EDR/100G, i.e. 12+ GB/s.

The Mellanox card is PCIe x16 anyway, plus 8x U.2 NVMe = 48 lanes taken, so Intel is a no-go once the chipset and the rest are counted in. Going SATA for the boot drives is also a no-go, since in the long run I will need all 8 SATA ports for backups (even though I archive, more servers will be backed up). And some options I am planning would benefit from running on those boot NVMe drives.

Thanks for raising the alternative Intel option, but having thought it through, it's a no-go.

Options now:
  1. Build the server now with a cheap 7232P (which can still be sold for a few bucks after 6 months or so - see 2.)
  2. Swap the CPU for a Milan CPU once it is released, or build a Milan server in late February or early March.
  3. Hope for a soon publicly available TR Pro like the 3945WX, or a soon-to-be-released next-gen TR/TR Pro.
 

NPS

Options now:
  1. Build the server now with a cheap 7232P (which can still be sold for a few bucks after 6 months or so - see 2.)
  2. Swap the CPU for a Milan CPU once it is released, or build a Milan server in late February or early March.
  3. Hope for a soon publicly available TR Pro like the 3945WX, or a soon-to-be-released next-gen TR/TR Pro.
1. If you buy a 7232P now and find speeds lower than expected, you will ask yourself whether that's down to the 4-channel memory or something else. So if you want to buy now, I think the 7302P is the way to go. The 7262 is not cheap enough and will be harder to sell, I guess.
2. If you buy Rome, I personally would buy Milan used 1-2 years from now. If you hold your build for Milan, I have the feeling you will have to wait a few months longer than you expect, unless you have better sources via work.
3. TR Pro is interesting, but everything depends on the clock speeds and prices of Milan. A Milan successor to the 7Fx2 series will rival TR Pro I guess, but could be much more expensive. I personally would prefer it nonetheless, because I do not like the TR Pro boards: too big, too much stuff I do not need, a little too "unprofessional". For next-gen TR Pro you will have to wait a very long time, so that is out. Next-gen TR (non-Pro) would be out for me personally for a real server build; I think it will be more interesting for a workstation build.

So I personally will wait for broad Milan availability and then decide whether Rome prices (used or new) have fallen enough, or whether Milan is so much better that I will pay the premium.