And so it begins... First AMD Ryzen AM4 server motherboard.


ReturnedSword

Active Member
Jun 15, 2018
The PCIe 4.0 part isn't that relevant; however, the increased number of PCIe lanes and the extra connectivity are, for some use cases. X570 also officially supports 128GB of total memory. The chipset itself does use quite a bit more power than the AM4 300/400-series chipsets, though.
 
Jul 16, 2019
I'm really interested in this board. I had read somewhere that PBO wasn't present on it, so it's a relief to read that it is (and that you disabled it). If you wouldn't mind, I'd love to see what kind of performance you're getting for Cinebench or the V-Ray benchmark.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
A couple more minor points:
  • Yesterday's power measurements were taken with an old 650W PSU I had lying around (which I'd retired from my workstation in favour of a lower-power, more efficient model); today I took readings with one of my usual high-efficiency PSUs, a Seasonic SS-350M1U (80+ Gold). Power at idle has dropped considerably - even though I've made no power tweaks, and I've added the extra DIMM, idle power usage is a pretty damned impressive 23W. Just goes to show what a difference an oversized PSU can make to your efficiency figures*.
  • Power usage when turned off (i.e. soft-off, with only the IPMI running) is 2.8W
  • Even with two dual-rank DIMMs**, memory speed remains at 2666MHz rather than dropping down to 2400. This is a nice bonus and contrary to ASRock's spec (although that was of course written for earlier Ryzens).
  • Several hours were wasted trying to debug a network issue in my test/build VLAN; as it turned out, I'd evidently forgotten to save the switch config the last time I set my IP helpers, as they were blank. :facepalm:

I'd love to see what kind of performance you're getting for Cinebench or the V-Ray benchmark.
IIRC those are Windows benchmarks which won't work on Linux, but I'm pretty sure world+dog ran those benches in the more mainstream 3700X reviews.

* Bear in mind I'm in the UK and thus on 230-240V mains, which might be more efficient than NA 110V
** At least I'm fairly sure the 18ASF2G72AZ-2G6D1 are dual-rank DIMMs, dmidecode certainly seems to think so
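(For anyone wanting to check their own modules: dmidecode's type 17 "Memory Device" entries report rank directly, assuming your board's SMBIOS tables fill the field in. A minimal sketch:)
Code:
# List each DIMM slot alongside its rank count; dual-rank modules show "Rank: 2"
sudo dmidecode -t 17 | grep -E 'Locator:|Rank:'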
 
Jul 16, 2019
IIRC those are Windows benchmarks which won't work on Linux, but I'm pretty sure world+dog ran those benches in the more mainstream 3700X reviews.
V-Ray has a Linux version (maybe Cinebench doesn't, I don't know). If you don't want to, that's fine. I am just interested in benchmarks on this particular mobo.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Some more numbers, this time from a real-world benchmark of interest to me - video encoding/transcoding, something the 3700X seemed to be pretty good at in the published benches. I took a sample 12-minute 1920x1080 M2TS file and converted it, including audio. Both setups are running the same version of Debian and the same (relatively old) version of ffmpeg. No fancy filters or multi-pass or anything, just a quick and easy CRF 23 encode with reasonably high quality using the following command line:
Code:
/usr/bin/time -v ffmpeg -y -v warning -i 00009.m2ts -vf 'null' -acodec libopus -b:a 128k -vbr on -vcodec libx264 -preset slower -tune film -threads 2 00009.mkv
As per another thread I commented in recently, I usually don't run an x264 encode with more than two threads in order to preserve quality.

Here's how my old machine did (not really used for encoding tasks other than as overspill from my main workstation):
Code:
User time (seconds): 5074.92
System time (seconds): 4.02
Percent of CPU this job got: 219%
Elapsed (wall clock) time (h:mm:ss or m:ss): 38:38.97
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 800828
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 180488
Voluntary context switches: 220939
Involuntary context switches: 25449
Swaps: 0
File system inputs: 0
File system outputs: 575616
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
...and the 3700X (total system power draw at 70W for two maxed-out threads):
Code:
User time (seconds): 3829.05
System time (seconds): 1.50
Percent of CPU this job got: 221%
Elapsed (wall clock) time (h:mm:ss or m:ss): 28:49.01
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 990344
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 35
Minor (reclaiming a frame) page faults: 95809
Voluntary context switches: 218482
Involuntary context switches: 349821
Swaps: 0
File system inputs: 10816
File system outputs: 576272
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
Pretty sizeable improvements all around - wall clock drops from 38:39 to 28:49, roughly a 1.34x speedup. Here's an x265 encode I'm running against the same file now for giggles; all standard config with no limit on threads:
Code:
shoe@frogstar:~$ /usr/bin/time -v ffmpeg -y -i 00009.m2ts -vf 'null' -acodec libopus -b:a 128k -vbr on -vcodec libx265 00009.mkv
ffmpeg version 3.2.14-1~deb9u1 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
  configuration: --prefix=/usr --extra-version='1~deb9u1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 34.101 / 55. 34.101
  libavcodec     57. 64.101 / 57. 64.101
  libavformat    57. 56.101 / 57. 56.101
  libavdevice    57.  1.100 / 57.  1.100
  libavfilter     6. 65.100 /  6. 65.100
  libavresample   3.  1.  0 /  3.  1.  0
  libswscale      4.  2.100 /  4.  2.100
  libswresample   2.  3.100 /  2.  3.100
  libpostproc    54.  1.100 / 54.  1.100
Input #0, mpegts, from '00009.m2ts':
  Duration: 00:12:22.58, start: 600.000000, bitrate: 27545 kb/s
  Program 1
    Stream #0:0[0x1011]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc
    Stream #0:1[0x1100]: Audio: pcm_bluray (HDMV / 0x564D4448), 48000 Hz, stereo, s32 (24 bit), 2304 kb/s
x265 [info]: HEVC encoder version 0.0
x265 [info]: build info [Linux][GCC 6.3.0][64 bit] 8bit+10bit+12bit
x265 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX AVX2 FMA3 LZCNT BMI2
x265 [info]: Main profile, Level-4 (Main tier)
x265 [info]: Thread pool created using 16 threads
x265 [info]: Slices                              : 1
x265 [info]: frame threads / pool features       : 5 / wpp(17 rows)
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge         : hex / 57 / 2 / 2
x265 [info]: Keyframe min / max / scenecut       : 23 / 250 / 40
x265 [info]: Lookahead / bframes / badapt        : 20 / 4 / 2
x265 [info]: b-pyramid / weightp / weightb       : 1 / 1 / 0
x265 [info]: References / ref-limit  cu / depth  : 3 / on / on
x265 [info]: AQ: mode / str / qg-size / cu-tree  : 1 / 1.0 / 32 / 1
x265 [info]: Rate Control / qCompress            : CRF-28.0 / 0.60
x265 [info]: tools: rd=3 psy-rd=2.00 rskip signhide tmvp strong-intra-smoothing
x265 [info]: tools: lslices=6 deblock sao
Output #0, matroska, to '00009.mkv':
  Metadata:
    encoder         : Lavf57.56.101
    Stream #0:0: Video: hevc (libx265), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 23.98 fps, 1k tbn, 23.98 tbc
    Metadata:
      encoder         : Lavc57.64.101 libx265
    Stream #0:1: Audio: opus (libopus) ([255][255][255][255] / 0xFFFFFFFF), 48000 Hz, stereo, flt (24 bit), 128 kb/s
    Metadata:
      encoder         : Lavc57.64.101 libopus
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> hevc (libx265))
  Stream #0:1 -> #0:1 (pcm_bluray (native) -> opus (libopus))
Press [q] to stop, [?] for help
frame= 5700 fps= 47 q=-0.0 size=   26007kB time=00:03:58.35 bitrate= 893.8kbits/s speed=1.96x
top is currently showing ffmpeg using a relatively comical 1187% CPU, so obviously SMT is doing well :) The Noctua HSF is getting pretty toasty, however, and the fan hasn't ramped up past 1100rpm yet, so I think either temperatures need to be checked better (see the monitoring sketch at the end of this post) or I need to tweak the fan profile some. Total system power draw under this load with all cores maxed is hovering at around the 120W mark. Here's a list of the individual clocks the cores are at currently:
Code:
root@frogstar:~# grep -i 'cpu mhz' /proc/cpuinfo
cpu MHz         : 3856.203
cpu MHz         : 3852.138
cpu MHz         : 3853.375
cpu MHz         : 3852.614
cpu MHz         : 3859.706
cpu MHz         : 3859.781
cpu MHz         : 3859.033
cpu MHz         : 3855.892
cpu MHz         : 3851.378
cpu MHz         : 3854.114
cpu MHz         : 3852.444
cpu MHz         : 3853.317
cpu MHz         : 3858.544
cpu MHz         : 3857.410
cpu MHz         : 3855.566
cpu MHz         : 3856.758
I'm pretty chuffed with all 8 cores at 3.8GHz for 120W.

For a single-threaded test like those for openssl, clocks are in the 4.2GHz range.
Code:
root@frogstar:~# grep -i 'cpu mhz' /proc/cpuinfo|sort -r -k 4,4
cpu MHz         : 4271.210
cpu MHz         : 4235.807
cpu MHz         : 2197.082
cpu MHz         : 2196.324
cpu MHz         : 2195.826
cpu MHz         : 2195.346
cpu MHz         : 2195.303
cpu MHz         : 2194.935
cpu MHz         : 2192.017
cpu MHz         : 2191.305
cpu MHz         : 2129.532
cpu MHz         : 2129.038
cpu MHz         : 2117.303
cpu MHz         : 2115.407
cpu MHz         : 2112.431
cpu MHz         : 2089.462
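Since I mentioned keeping a better eye on temperatures, here's a minimal monitoring sketch, assuming lm-sensors is installed and the k10temp driver is loaded for the Ryzen's thermal sensor (the chip's PCI suffix varies per board, hence the wildcard):
Code:
# Refresh per-core clocks plus the k10temp die temperature every 2 seconds
watch -n2 "grep -i 'cpu mhz' /proc/cpuinfo; sensors 'k10temp-*'"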
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Just to round off the evening, I dug a brand new D3-S4510 SSD out of the bits box and gave the SATA controllers a spin. Here are the edited highlights of running ssd-test.fio against an ext4-formatted drive.
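For reference, ssd-test.fio is the stock job file shipped in fio's examples directory; it looks roughly like this (the directory is a placeholder for wherever the drive under test is mounted):
Code:
[global]
bs=4k                   # 4KiB blocks for every job
ioengine=libaio         # Linux async IO
iodepth=4
size=10g                # 10GiB working set per job
direct=1                # bypass the page cache
runtime=60              # cap each job at 60 seconds
directory=/mnt/ssd      # placeholder: mountpoint of the drive under test
filename=ssd.test.file

[seq-read]
rw=read
stonewall               # don't start until the previous job finishes

[rand-read]
rw=randread
stonewall

[seq-write]
rw=write
stonewall

[rand-write]
rw=randwrite
stonewall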

AMD SATA:
Code:
seq-read: (groupid=0, jobs=1): err= 0: pid=4990: Wed Jul 17 22:29:33 2019
  read : io=7095.7MB, bw=121093KB/s, iops=30273, runt= 60003msec

rand-read: (groupid=1, jobs=1): err= 0: pid=4991: Wed Jul 17 22:29:33 2019
  read : io=7710.2MB, bw=131585KB/s, iops=32896, runt= 60001msec

seq-write: (groupid=2, jobs=1): err= 0: pid=4992: Wed Jul 17 22:29:33 2019
  write: io=10240MB, bw=237132KB/s, iops=59283, runt= 44219msec

rand-write: (groupid=3, jobs=1): err= 0: pid=5002: Wed Jul 17 22:29:33 2019
  write: io=10240MB, bw=234026KB/s, iops=58506, runt= 44806msec
ASMedia SATA:
Code:
seq-read: (groupid=0, jobs=1): err= 0: pid=5067: Wed Jul 17 22:38:57 2019
  read : io=4020.2MB, bw=68607KB/s, iops=17151, runt= 60001msec

rand-read: (groupid=1, jobs=1): err= 0: pid=5068: Wed Jul 17 22:38:57 2019
  read : io=5526.3MB, bw=94312KB/s, iops=23578, runt= 60001msec

seq-write: (groupid=2, jobs=1): err= 0: pid=5069: Wed Jul 17 22:38:57 2019
  write: io=8956.9MB, bw=152861KB/s, iops=38215, runt= 60001msec

rand-write: (groupid=3, jobs=1): err= 0: pid=5079: Wed Jul 17 22:38:57 2019
  write: io=8863.4MB, bw=151265KB/s, iops=37816, runt= 60001msec
LSI SATA (ubiquitous M1015 reflashed to 9211-8i):
Code:
seq-read: (groupid=0, jobs=1): err= 0: pid=20126: Wed Jul 17 23:12:50 2019
  read : io=7101.3MB, bw=121193KB/s, iops=30298, runt= 60001msec

rand-read: (groupid=1, jobs=1): err= 0: pid=20382: Wed Jul 17 23:12:50 2019
  read : io=7552.7MB, bw=128895KB/s, iops=32223, runt= 60001msec

seq-write: (groupid=2, jobs=1): err= 0: pid=20810: Wed Jul 17 23:12:50 2019
  write: io=10240MB, bw=191332KB/s, iops=47833, runt= 54804msec

rand-write: (groupid=3, jobs=1): err= 0: pid=20952: Wed Jul 17 23:12:50 2019
  write: io=10240MB, bw=190228KB/s, iops=47557, runt= 55122msec
Intel SATA (from a different motherboard with a C226 chipset):
Code:
seq-read: (groupid=0, jobs=1): err= 0: pid=21330: Sat Jul 20 14:54:42 2019
  read : io=5932.6MB, bw=101248KB/s, iops=25311, runt= 60001msec

rand-read: (groupid=1, jobs=1): err= 0: pid=21547: Sat Jul 20 14:54:42 2019
  read : io=7759.6MB, bw=132428KB/s, iops=33107, runt= 60001msec

seq-write: (groupid=2, jobs=1): err= 0: pid=21688: Sat Jul 20 14:54:42 2019
  write: io=10240MB, bw=225607KB/s, iops=56401, runt= 46478msec

rand-write: (groupid=3, jobs=1): err= 0: pid=21693: Sat Jul 20 14:54:42 2019
  write: io=10240MB, bw=233645KB/s, iops=58411, runt= 44879msec
The AMD SATA controller is significantly faster here in almost every respect than the ASMedia one and - here I was very surprised - apparently faster than the LSI one too, and pretty much on par with the Intel one. AMD's SATA controllers have certainly come a long way. If anyone wants more detailed numbers please let me know (although it probably needs a more thorough test suite TBH).

V-Ray has a Linux version (maybe Cinebench doesn't, I don't know). If you don't want to, that's fine. I am just interested in benchmarks on this particular mobo.
It doesn't show up in the Debian repos, and from the looks of their site it's a proprietary GUI app anyway (no X installed here).

For what it's worth though, I don't expect Ryzen 3000 performance to vary much on this motherboard compared to any other unless something is drastically wrong - all of the testing I've seen of people running Ryzen 3000s on Windows shows performance to be nearly identical when comparing X570 boards to the older B350/B450 boards. From the way it's architected, only I/O will be substantially different, so if you're happy with the CPU performance I don't see any reason why this board wouldn't work for you - although if you're planning on using one of the high-powered 12- or 16-core Ryzens it might be worth asking ASRock about support status, as power delivery might become an issue.

Edit: added stats from the Intel SATA controller
 
Jul 16, 2019
if you're planning on using one of the high-powered 12- or 16-core Ryzens it might be worth asking ASRock about support status, as power delivery might become an issue.
Yeah, I'd be looking at rocking a 3900X or 3950X in there. Good to see the RAM stayed at 2666 with a 2nd DIMM, that's promising. Any plans to fill out all 4 DIMMs?

Also I just noticed no M.2....
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Yeah, I'd be looking at rocking a 3900X or 3950X in there. Good to see the RAM stayed at 2666 with a 2nd DIMM, that's promising. Any plans to fill out all 4 DIMMs?
Memory upgrades are somewhat up in the air. As previously noted, my use of the Noctua HSF makes the first memory slot almost (but not quite) inaccessible, so I could conceivably use all four DIMM slots. However, as I didn't think the Ryzen 3000's memory controller would be so much improved, I had originally envisaged going to 2x32GB DIMMs if they became available. Time will tell how much those might end up costing, or how much future AGESA patches help with the memory.

Power-wise I think you'll be fine for the 3900X at least, given that this was rated for the 105W 2700X - I imagine the 3950X might be north of 140W.

Also I just noticed no M.2....
It definitely does...! There are two M.2 slots to the east of the socket; I'm running my test install off a P4101 currently and I plan to use them both in a RAID1. They're "only" PCIe 3.0 x2 and PCIe 2.0 x4 due to Ryzen's relatively anaemic PCIe lanes, but I don't see that ever becoming a limitation apart from e-peen.
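The mirror itself will be nothing exotic - a minimal mdadm sketch of the plan (the /dev/nvme* names are assumptions; they'll be whatever the two drives actually enumerate as):
Code:
# Create a RAID1 across the two M.2 drives and format it;
# /dev/nvme0n1 and /dev/nvme1n1 are assumed device names
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.ext4 /dev/md0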
 

elag

Member
Dec 1, 2018
Memory upgrades are somewhat up in the air. As previously noted, my use of the Noctua HSF makes the first memory slot almost (but not quite) inaccessible, so I could conceivably use all four DIMM slots. However, as I didn't think the Ryzen 3000's memory controller would be so much improved, I had originally envisaged going to 2x32GB DIMMs if they became available. Time will tell how much those might end up costing, or how much future AGESA patches help with the memory.
How big is the Noctua compared to the stock cooler? I am looking to use this board with a Ryzen 5 3600, so I would guess the stock Wraith cooler should be OK? I am looking to deploy Linux ZFS on bare metal with some VMs, so 64GB seems appropriate....

It definitely does...! There are two M.2 slots to the east of the socket; I'm running my test install off a P4101 currently and I plan to use them both in a RAID1. They're "only" PCIe 3.0 x2 and PCIe 2.0 x4 due to Ryzen's relatively anaemic PCIe lanes, but I don't see that ever becoming a limitation apart from e-peen.
Yeah, I want to use the M.2 slots with some 32GB Optane M10 SSDs as a mirrored SLOG. That should be doable, right?
System SSDs will be on SATA ports, with the 6 WD-REDs on the remaining SATA ports. I will just have to be careful with the SSDs, as these need to be on the X470 ports; the WD-REDs should be OK on the ASMedia 1061, even though that thing is slow....

One thing I have not been able to figure out is whether there are any limitations on PCIe lane usage: there is of course the split from x16 to 2x x8 for the slots. But I often see with other X470 boards that using the second M.2 slot disables the x4 slot. Is that the case here? The documentation does not seem to mention such limitations.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
At some point I'll do some close-up pics, but although the Noctua is a much smaller HSF overall, it's slightly wider than the bundled AMD cooler at its base due to the heatpipes splaying out slightly at the sides. If you used the bundled cooler I'm reasonably certain you'd have room to populate all DIMM slots (including heatspreaders) without issue.

I've only tested the other M.2 slot with a "toy" 16GB Optane, but it works fine. I'm not a huge ZFS user but I don't think it's going to be bandwidth limited.
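That said, as I understand it the mirrored SLOG you describe is a one-liner along these lines (pool name and device paths hypothetical):
Code:
# Attach the two Optane M10s to an existing pool as a mirrored log (SLOG) device;
# 'tank' is a placeholder pool name, the /dev/nvme* paths are assumed
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1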

I'll still be using my M1015 when I migrate the build, but yeah the inbuilt SATA ports are considerably better than I expected - if you can get away with six HDDs and live without SGPIO I wouldn't hesitate to use them.

From what I read of the documentation, yeah, you're able to use all the PCIe slots as well as the M.2s. On page 13 of the manual there's a block diagram showing the PCIe lane allocation, and the only mux is the bifurcation between the x16/2x x8 slots. It's a really good physical and logical layout of all the available lanes IMHO and I wish this board received a bit more love...!
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Some pics as promised; the motherboard, CPU, Noctua NH-L9x65 SE-AM4, Intel P4101 and two 16GB Crucial ECC UDIMMs. As can be seen from the stickers, the board is at hardware revision 1.01, the BIOS came with v1.50 and the BMC with v1.50. Both use replaceable chips.

motherboard.jpeg

A closer view of the memory slots showing the extremely tight fit between the heatpipes of the NH-L9x65 SE-AM4 and the first DIMM slot.

memory_slots.jpeg

Close-up after I've placed a DIMM in the first slot; I haven't pushed it in all the way, but I would be able to. Note however the curve on the DIMM itself; pushing the DIMM in all the way would alleviate this somewhat, since the heatpipe is right up against the top of one of the memory chips and seating it fully would give a mm or two of extra wiggle room (although there'd still be a risk of shorts). Memory with heat-spreaders wouldn't have a chance of fitting. I think it's safe to say the NH-L9x65 SE-AM4 might not be the best choice for those looking to populate all four DIMMs.

tight_fit.jpeg

I didn't take any pictures at the time as I was just eyeballing it, but I can confirm that the cooler bundled with the CPU clears the memory without issue.
 

Ojref1

New Member
Oct 8, 2018
I have a suggestion: you may want to try Dynatron's solutions for 2U+ with socket AM4: https://www.dynatron.co/product-page/a24
They also have some 1U solutions that might fit. I know they are rated for 95W TDP, but in my experience, in a 1U or 2U chassis where you have push/pull fans and a flow chamber, they tend to perform way better than their maximum rated TDP would otherwise suggest. I've used this vendor's solutions in the past with a lot of Supermicro/Tyan and other third-tier servers with great success.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
The PCIe 4.0 part isn't that relevant; however, the increased number of PCIe lanes and the extra connectivity are, for some use cases. X570 also officially supports 128GB of total memory. The chipset itself does use quite a bit more power than the AM4 300/400-series chipsets, though.
Isn't the bottleneck still the x4 lanes between the chipset and the CPU?

 

acquacow

Well-Known Member
Feb 15, 2017
As @i386 said, it's 64Gbps, not bytes. And two NVMe SSDs can saturate that.
Sorry, I did my default math on an x16, not an x4 for some reason.

Still, 8GB/sec is quite a bit (PCIe 4.0 runs at 16GT/s per lane, so x4 is roughly 64Gbps raw, or about 8GB/s).

There's already a dedicated NVMe slot off the CPU, so put your heavy-hitting stuff there and time-share the rest. If you need more, drop one or more drives in a regular PCIe slot.
 

elag

Member
Dec 1, 2018
The first PCIe slot does support bifurcation of its PCIe 3.0 x16 when the second slot is not used, so put in a quad-M.2 carrier card and you can have 4 NVMe drives at PCIe 3.0 x4 each.... PCIe 4.0 would of course have been nice between the processor and the chipset, so you could add even more PCIe 3.0 devices without saturating the CPU-chipset link...
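If you go that route, it's worth verifying each drive actually trained at PCIe 3.0 x4; a quick sketch (the bus address is hypothetical):
Code:
# LnkCap is what the device supports, LnkSta is what it actually negotiated
# (Speed 8GT/s = PCIe 3.0; Width x4 = four lanes); 01:00.0 is a placeholder address
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'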
 

elag

Member
Dec 1, 2018
There's already a dedicated NVMe slot off the CPU, so put your heavy-hitting stuff there and time-share the rest. If you need more, drop one or more drives in a regular PCIe slot.
This is not true; both NVMe slots are served by the X470 - see the block diagram on p. 13 of the manual.
 

ReturnedSword

Active Member
Jun 15, 2018
Isn't the bottleneck still the x4 lanes between the chipset and the CPU?
Yes, but at double the bandwidth for X570 vs X470 due to PCIe 4.0. Once X570 server-lite motherboards come out from ASRock, I'd probably put a quad-bifurcation M.2 adapter card on the x16 slot coming off the CPU, then use the downstream (chipset) slots for other stuff such as a 10G NIC, HBA, etc.

I forgot to add that a point of interest for me is how Ryzen 3000's IMC handles fully populated memory slots. Ryzen 2000 was a big improvement over original Ryzen, but Ryzen 2000's IMC still struggles with all memory slots populated.