planned build: criticism welcome - server + workstation


lihp

Active Member
Jan 2, 2021
1. If you buy a 7232P now and find speeds to be lower than expected, you will ask yourself whether that's because of the 4-channel memory or something else. So if you want to buy now, I think the 7302P is the way to go. The 7262 is not cheap enough and will be harder to sell, I guess.
2. If you buy Rome, I personally would buy Milan used in 1-2 years from now. If you wait with your build for Milan, I have the feeling you will have to wait a few months more than you expect, unless you have better sources via work.
3. TR Pro is interesting, but everything depends on the clock speeds and prices of Milan. A Milan successor to the 7Fx2 series will rival TR Pro, I guess, but could be much more expensive. I personally would prefer it nonetheless, because I do not like the TR Pro boards. Too big, too much stuff I do not need, a little too "unprofessional". For next-gen TR Pro you will have to wait a very long time, so that is out. Next-gen TR (non-Pro) would be out for me personally for a real server build; I think that will be more interesting for a workstation build.

So I personally will wait for broad Milan availability and then decide whether Rome prices (either used or new) have fallen enough, or whether Milan is so much better that I will pay the premium.
Waiting for next week for now. Except I just saw the Samsung OEM Enterprise SSD PM1733 3.84TB, U.2 hit the floor. It's itching ;) Don't look at the price per unit but at the price per gigabyte - it's awesome - dirt cheap. It beats any viable current consumer NVMe right now in pricing and performance.
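To make the price-per-gigabyte point concrete, here is a tiny sketch of the comparison; the prices below are hypothetical placeholders, not actual quotes - only the calculation itself is the point:

```python
# Price-per-GB comparison - the prices are hypothetical placeholders.
drives = {
    "Samsung PM1733 3.84TB (enterprise U.2)": (700.0, 3840),  # (price, capacity in GB) - assumed
    "typical 1TB consumer NVMe":              (180.0, 1000),  # assumed
}

for name, (price, capacity_gb) in drives.items():
    print(f"{name}: {price / capacity_gb:.3f} per GB")
```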
 

NPS

Active Member
Jan 14, 2021
They have been available for months now, and that's the reason why I said: "If you go for EPYC PCIe Gen4, buy disks that make use of it!" As far as I know, in a single-user setup, faster disks are way more important than more disks.
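For a rough sense of why Gen4 disks matter here, a back-of-the-envelope sketch of raw PCIe link bandwidth (per-lane line rates from the spec, before protocol overhead):

```python
# Back-of-the-envelope PCIe link bandwidth: 8 GT/s (Gen3) vs. 16 GT/s (Gen4)
# per lane with 128b/130b encoding, before protocol overhead.
ENCODING = 128 / 130

def lane_gbytes_per_s(gt_per_s: float) -> float:
    return gt_per_s * ENCODING / 8   # 8 bits per byte

for lanes in (4, 8):
    gen3 = lane_gbytes_per_s(8.0) * lanes
    gen4 = lane_gbytes_per_s(16.0) * lanes
    print(f"x{lanes}: Gen3 ~{gen3:.1f} GB/s, Gen4 ~{gen4:.1f} GB/s")
```

So a single Gen4 x4 drive already has roughly the raw link bandwidth of two Gen3 x4 drives.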
 

lihp

Active Member
Jan 2, 2021
1. If you buy a 7232P now and find speeds to be lower than expected, you will ask yourself whether that's because of the 4-channel memory or something else. So if you want to buy now, I think the 7302P is the way to go. The 7262 is not cheap enough and will be harder to sell, I guess.
Agreed.

2. If you buy Rome, I personally would buy Milan used in 1-2 years from now. If you wait with your build for Milan, I have the feeling you will have to wait a few months more than you expect, unless you have better sources via work.
I actually did hardware wholesale sales (especially CPUs and RAM) when I was young. So, work or privately, I know my way around that area.

3. TR Pro is interesting, but everything depends on the clock speeds and prices of Milan. A Milan successor to the 7Fx2 series will rival TR Pro, I guess, but could be much more expensive. I personally would prefer it nonetheless, because I do not like the TR Pro boards
<snip>
All agreed, except the board(s). I love the Supermicro board (well yeah, I am a SuMi fanboy ;) ). Since everything else I get these days is SuMi, I look forward to having a WKS board from SuMi too. Will wait a few more days (work is calling) and then see pricing and whether AMD hits some releases...

They "are starring" at me - "starry nights":
BHDC9531.jpg
 

lihp

Active Member
Jan 2, 2021
They have been available for months now, and that's the reason why I said: "If you go for EPYC PCIe Gen4, buy disks that make use of it!" As far as I know, in a single-user setup, faster disks are way more important than more disks.
On disks it depends. For mostly large files (I do graphics a lot), bandwidth is most important. For normal work, like booting, app startup, ..., low latency is important. Those Samsung enterprise NVMes are actually my kink there...
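A minimal sketch of how one might measure both sides of that trade-off with fio (assuming fio is installed; the test-file path is a placeholder, and the JSON field names may differ between fio versions):

```python
# Sketch: compare sequential bandwidth (large files) vs. 4K random-read
# latency (boot/app-startup-like access) with fio. Assumes fio is
# installed; /mnt/scratch/fio.test is a placeholder test file path.
import json
import subprocess

def run_fio(name: str, rw: str, bs: str) -> dict:
    out_file = f"/tmp/{name}.json"
    subprocess.run([
        "fio", f"--name={name}", f"--rw={rw}", f"--bs={bs}",
        "--ioengine=libaio", "--direct=1", "--iodepth=16",
        "--size=2G", "--runtime=30", "--time_based",
        "--filename=/mnt/scratch/fio.test",
        "--output-format=json", f"--output={out_file}",
    ], check=True)
    with open(out_file) as f:
        return json.load(f)["jobs"][0]["read"]

seq = run_fio("seq-read", "read", "1M")       # bandwidth-bound workload
rnd = run_fio("rand-read", "randread", "4k")  # latency-bound workload

# JSON field names as of recent fio releases; older versions may differ.
print(f"sequential read: ~{seq['bw_bytes'] / 1e9:.1f} GB/s")
print(f"4k random read latency: ~{rnd['lat_ns']['mean'] / 1000:.0f} µs")
```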
 

NPS

Active Member
Jan 14, 2021
All agreed, except the board(s). I love the Supermicro board (well yeah, I am a SuMi fanboy ;) ). Since everything else I get these days is SuMi, I look forward to having a WKS board from SuMi too.
I really like my three Supermicro boards as well, but I don't like this new TR Pro board. My workstation is based on a Fujitsu D3643H - a really great no-nonsense board. Sadly they sold that business to Kontron. I would love a TR Pro design from Fujitsu. For server/EPYC, Supermicro is perfect.
 

lihp

Active Member
Jan 2, 2021
but I don't like this new TR Pro board.
Hmm, why? The only difference between the M12SWA-TF and a server board is the audio chip and a lot of USB. The USB imho makes sense, and the audio is actually good - that depends mostly on how well SuMi implemented it. Both USB and audio imho belong on a workstation board.

I agree that another board, a "WEPYC" board aka a TR Pro server board, would be cool.
 

NPS

Active Member
Jan 14, 2021
These are just my personal preferences, but:
  1. The board is way too big! The selection of cases is quite limited. Size is a (minor) cost factor, too.
  2. I don't get the workstation/server mix in many of these boards. I do not need and do not want to pay (in parts and power consumption -> environmental impact) for the inclusion of IPMI in a workstation. In a server, on the other hand, I do not need the WRX80 and all the stuff that is connected to it. For me personally the situation is even more extreme: I think I do not need the WRX80 at all, since an H12SSL-i has everything I need in a workstation board. I use a USB interface for sound. An H12SSL-i without IPMI that POSTs fast would be quite perfect for me, but I see and totally accept that people want sound on their workstation board. ;) Inclusion of maybe 2 fast USB ports is also OK, but the Asrock ROMED8-2T, for example, shows that you do not need a fat WRX80 for that.
  3. The fan on the WRX80 makes me sad for many reasons, including noise.
  4. 10GBase-T via the Marvell AQC113C is in my eyes a cheap "solution" that does not quite fit such an expensive board. I totally get that there are many people who want fast Ethernet onboard because they do not want to waste slots they need for GPUs, but in that case I would want a "proper" NIC. Me personally, I don't want this at all (same reasons as with IPMI). 2.5GbE on board would be OK for me, because it doesn't waste so much energy.
So to sum it up, this board in my eyes is tailored to the 4x double-slot-GPU use case. Most other use cases would be better served by a much smaller board, I think.
 

lihp

Active Member
Jan 2, 2021
  1. The board is way too big! The selection of cases is quite limited. Size is a (minor) cost factor, too.
  2. I don't get the workstation/server mix in many of these boards. I do not need and do not want to pay (in parts and power consumption -> environmental impact) for the inclusion of IPMI in a workstation. In a server, on the other hand, I do not need the WRX80 and all the stuff that is connected to it. For me personally the situation is even more extreme: I think I do not need the WRX80 at all, since an H12SSL-i has everything I need in a workstation board. I use a USB interface for sound. An H12SSL-i without IPMI that POSTs fast would be quite perfect for me, but I see and totally accept that people want sound on their workstation board. ;) Inclusion of maybe 2 fast USB ports is also OK, but the Asrock ROMED8-2T, for example, shows that you do not need a fat WRX80 for that.
  3. The fan on the WRX80 makes me sad for many reasons, including noise.
  4. 10GBase-T via the Marvell AQC113C is in my eyes a cheap "solution" that does not quite fit such an expensive board. I totally get that there are many people who want fast Ethernet onboard because they do not want to waste slots they need for GPUs, but in that case I would want a "proper" NIC. Me personally, I don't want this at all (same reasons as with IPMI). 2.5GbE on board would be OK for me, because it doesn't waste so much energy.
  1. Size: I am with you. In that case, imho, the Asrock Epyc microATX is an option. Then again, on a workstation you usually want many PCIe slots... choices, choices. For Epyc 7000 they have the embedded boards for the embedded Epyc CPUs - those are cool, I use them here as well. I actually hope for something similar soon (TM) for Rome and Milan.
  2. I consider IPMI quite important. It's actually what I expect from SuMi: the possibility to have the hardware of all machines (server + WKS) in one management console. On the sound chip we are probably talking about 3-5 bucks - fine with me.
  3. I actually plan to replace the fan. If I am in a good mood, I will turn my workstation into a water-cooled one.
  4. Yeah, on the network: I am fine with 2-4 1GbE NICs teamed up for Internet and DMZ. Internally it's EDR anyway for me. That NIC gets on my nerves too - yes.
Bottom line: I am with you on the 10G NIC. On sound, I understand you pretty well.
 

NPS

Active Member
Jan 14, 2021
microATX is really nice! At the moment I use nothing bigger. But for EPYC 7xxx I would prefer ATX; the Asrock board shows why. I think EPYC 7xxx is not that interesting if it is castrated by board size in terms of memory channels and PCIe lanes. By EPYC 7000 you meant EPYC 3xxx, I guess? By EPYC 7xxx I mean Naples, Rome, Milan. Yes, EPYC 3xxx is quite interesting, but you named it: Zen 1 does not impress me for non-embedded use cases, and for things like a 10GbE NAS the Supermicro boards are quite strange. The Xeon-D boards are much more interesting. I am hesitant about Asrock Rack. Too many people with problems, but maybe that's only the loud people screaming around. Don't know. They make really interesting boards feature-wise!

Hmm, I can not imagine what you use IPMI in your workstation for. But it's only important that you know. ;)

I had water cooling about 20 years ago for about 5 years on Socket A. Was cool. Not interested anymore. Most important to me is low idle noise. I think that is easier to achieve with air cooling. As air cooling is quite calm at 100% CPU usage these days, too, I am fine.

In a workstation I need one 1GbE NIC for the connection to my "IPMI network" and maybe a second one for Internet, depending on how the fast network is connected to everything else. So that makes 2-3 in total: slow (IPMI), medium (LAN), plus an optional fast one (NAS/SAN).
 

lihp

Active Member
Jan 2, 2021
Hmm, I can not imagine what you use IPMI in your workstation for. But it's only important that you know. ;)
That's easy. You either don't have the software tools or (like me) don't like to install hardware monitoring tools on your desktop workstation. So with that said, IPMI is for:
  • BIOS and screen remote control (which must be secured, of course) - great when on holiday
    I actually sat on the beach in Mexico and had to do some customer stuff over my iPad - too cool to have that BIOS control too.
  • IPMI watchdog is also cool, even on a workstation
  • central hardware monitoring independent of the OS
  • remote automated installation
  • ...
I love IPMI with SuMi.
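To illustrate the "central hardware monitoring independent of the OS" point above, a minimal sketch that polls a few BMCs over the network with ipmitool (hostnames and credentials are placeholders; assumes ipmitool is installed on the monitoring box):

```python
# Sketch: poll the BMCs of several machines (server + workstation) from one
# place over IPMI-on-LAN with ipmitool. Hostnames and credentials below are
# placeholders.
import subprocess

BMCS = {
    "epyc-server": "10.0.10.11",        # placeholder BMC addresses
    "tr-pro-workstation": "10.0.10.12",
}
USER, PASSWORD = "admin", "secret"      # placeholder credentials

def ipmi(host: str, *args: str) -> str:
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", USER, "-P", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

for name, host in BMCS.items():
    print(f"== {name} ==")
    print(ipmi(host, "chassis", "power", "status").strip())   # power state
    print(ipmi(host, "sdr", "type", "Temperature").strip())   # temperature sensors
```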

I had water cooling about 20 years ago for about 5 years on Socket A. Was cool. Not interested anymore. Most important to me is low idle noise. I think that is easier to achieve with air cooling. As air cooling is quite calm at 100% CPU usage these days, too, I am fine.
I am not yet willing to go for a server rack, nor for a server case. I am "afraid" heat might be my worst enemy. And yes, I believe I am skilled at setting up airflow and stuff, but the case I want might not be that good for "air cooling only" of that machine...


In a workstation I need one 1GbE NIC for the connection to my "IPMI network" and maybe a second one for Internet, depending on how the fast network is connected to everything else. So that makes 2-3 in total: slow (IPMI), medium (LAN), plus an optional fast one (NAS/SAN).
I prefer 4-5 network ports onboard:
  • IPMI: 1 NIC
  • easy way: 4x bond - adaptive load balancing, failover (a small config sketch follows below)
  • secure 1: 1 external, 2x bond internal adaptive load balancing + failover, 2x DMZ 1/2
  • secure 2: 2x bond external, 2x bond internal network, 1 DMZ
And from now on, 1 additional Mellanox EDR/100G internally (see previous pictures) for iSER.
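For the "easy way" option above, a minimal sketch of what such a balance-alb bond could look like with iproute2 (NIC names and the address are placeholders; needs root, and a real setup would be made persistent via the distro's network config instead of ad-hoc commands):

```python
# Sketch of the "easy way" layout: bond four 1GbE ports with balance-alb
# (adaptive load balancing + failover) via iproute2. NIC names and the
# address below are placeholders.
import subprocess

SLAVES = ["enp65s0f0", "enp65s0f1", "enp65s0f2", "enp65s0f3"]  # placeholder NIC names

def ip(*args: str) -> None:
    subprocess.run(["ip", *args], check=True)

ip("link", "add", "bond0", "type", "bond", "mode", "balance-alb")
for nic in SLAVES:
    ip("link", "set", nic, "down")             # slaves must be down before enslaving
    ip("link", "set", nic, "master", "bond0")
ip("link", "set", "bond0", "up")
ip("addr", "add", "192.168.10.2/24", "dev", "bond0")  # placeholder address
```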
 

lihp

Active Member
Jan 2, 2021
Slight update on sizing...

By now it's a few weeks to the Milan release. So I will wait for Milan EPYCs, or I hit the road in case I find some crazy perfect Rome CPU deal. E.g. right now here in Germany there is a 7502P on eBay at 1250+ bucks, which is still over my limit for CPUs, yet... It still seems I prefer to wait, considering the performance gain of the new lineup and that those will most likely be available and at recommended prices.

The server layout changed "drastically" - the reasons are in the bullets.

Epyc Server:
  • CPU: AMD Epyc Milan, except if I find a deal I can't say no to for a Rome CPU.
  • Mainboard: SuMi H12SSL-I
  • RAM 128 GB, 3200 (or more depending on Milan recommendation) ECC 2R HD
    Considering virtual machines, containers, caching and performance, I prefer to fill all 8 banks with 16 GB each. Less doesn't really make sense. With 16 cores, that's 8 GB/core, which is imho a minimum anyway.
  • OS storage: 2x KIOXIA EXCERIA SSD 1000GB, M.2
    I prefer to keep the OS separate from everything else.
  • Fast data storage: 4x Samsung OEM Enterprise SSD PM1735 3.2TB, PCIe 4.0 x8
    Performance is crazy and they are dirt cheap - actually cheaper per GB than consumer NVMes. 4 drives is the max for a free RAIDIX license, so that's actually the maximum storage performance possible. Lower latency and way higher IOPS than planned. Still a fundamental budget increase, but I will make good use of them.
  • Backup storage: 4x WD RED + 4TB
    Plainly for hot restores, for collecting backups from other machines overnight as well, and as a platform to drop diff backups daily to Glacier or a similar archive store (a small upload sketch is at the end of this post).
  • NIC (besides from MB): MCX455A-ECAT (IB/EDR)
Bottom line: a drastic budget increase (2-3x) for a sick performance gain, which should carry me a long way - hopefully 6+ years, where 10-12+ GB/s may still be fast.
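As referenced in the backup bullet above, a minimal sketch of the daily diff-backup drop to Glacier, assuming boto3 and configured AWS credentials (bucket name and local archive path are placeholders):

```python
# Sketch: drop a daily diff-backup archive into S3 with a Glacier-class
# storage tier. Assumes boto3 is installed and AWS credentials are
# configured; bucket name and local archive path are placeholders.
import datetime
import boto3

s3 = boto3.client("s3")
today = datetime.date.today().isoformat()
archive = f"/backup/diff/diff-{today}.tar.zst"   # placeholder local path
key = f"diff-backups/{today}.tar.zst"

s3.upload_file(
    archive,
    "my-backup-bucket",                          # placeholder bucket name
    key,
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},  # or "GLACIER" / "GLACIER_IR"
)
print(f"uploaded {archive} -> s3://my-backup-bucket/{key}")
```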