NETWORKING
The Zen-based Zeppelin die integrates four 10G MACs (confirmed to support 10GBASE-KR, and possibly an alternate 1000BASE-KX mode for 1G), which are exposed only in the EPYC Embedded series. I would love to see them actually being used, but there are a lot of considerations:
DRIVER SUPPORT: Because the integrated 10G MACs are so rarely seen in use, OS Driver support for them is hard to pin down. The Linux Kernel has a builtin Driver for them (amd-xgbe), so they work there, but there is no information regarding Windows support. If AMD doesn't provide Windows Drivers for these 10G MACs, they are automatically discarded from any consumer oriented Motherboard. Period. In comparison, Intel NICs seem to be widely supported almost everywhere.
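If you want to check the Linux side yourself, this small Python sketch (assuming a standard sysfs layout; interface names vary per board) lists which Kernel Driver is bound to each network interface. On a board that actually wires up the Zeppelin 10G MACs, those ports should show up bound to amd-xgbe:

```python
#!/usr/bin/env python3
# Minimal sketch: report which kernel driver is bound to each network
# interface by following the sysfs symlinks. On a board that exposes
# the Zeppelin 10G MACs, the ports should appear bound to "amd-xgbe".
import os

SYSFS_NET = "/sys/class/net"

for iface in sorted(os.listdir(SYSFS_NET)):
    driver_link = os.path.join(SYSFS_NET, iface, "device", "driver")
    if os.path.islink(driver_link):
        driver = os.path.basename(os.readlink(driver_link))
    else:
        driver = "(virtual or no driver)"  # e.g. lo has no backing device
    print(f"{iface}: {driver}")
```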
FEATURES, CAPABILITIES AND PERFORMANCE: Due to the lack of public documentation, the feature set and other capabilities of the Zeppelin integrated 10G MACs are unknown. While I would expect them to have lower latency overall than any PCIe NIC, since the path is shorter (Integrated MAC -> 10GBASE-KR -> PHY vs Integrated PCIe Controller -> PCIe Bus -> PCIe NIC with integrated MAC and PHY), there are other features to consider, like SR-IOV for PCI Passthrough in virtualization scenarios, network processing offloading, overall CPU usage, and anything else that is relevant for someone spending money on 10G networking gear. So far, everything suggests that the Zeppelin integrated 10G MACs offer only basic connectivity. When it comes to features, Intel NICs seem to be the premier solution (except in some Remote DMA scenarios, which only its highest end NICs support).
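Until someone with actual hardware dumps the feature set, the closest thing to an answer is asking the Driver itself. This Python sketch shells out to ethtool -k to show which offloads the Driver advertises (the default interface name and my short list of features to look for are assumptions, not anything confirmed about the hardware):

```python
#!/usr/bin/env python3
# Minimal sketch: print the offload feature flags that "ethtool -k"
# reports for an interface, to compare what the integrated MACs expose
# against a discrete NIC. Interface name is a placeholder assumption.
import subprocess
import sys

iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"  # hypothetical name
out = subprocess.run(["ethtool", "-k", iface],
                     capture_output=True, text=True, check=True).stdout

# Features someone paying for 10G gear typically cares about (my pick).
interesting = ("tcp-segmentation-offload", "generic-receive-offload",
               "rx-checksumming", "tx-checksumming", "ntuple-filters")
for line in out.splitlines():
    if line.strip().startswith(interesting):
        print(line.strip())
```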
LANE COST: In some of the publicly available presentations, AMD supposedly said that each 10GBASE-KR lane requires teaming up two PCIe Lanes from the Zeppelin SoC, so that enabling all 4 10GBASE-KR lanes would cost 8 PCIe Lanes. However, based on the Block Diagrams of the available EPYC Embedded Motherboards that use the 10G MACs, like those based on the COM Express Type 7 Form Factor, each 10GBASE-KR lane takes only one PCIe Lane instead of two, since otherwise the totals would exceed Zeppelin's known amount of PCIe Lanes, which is 32. This discovery makes the integrated 10G MACs better than expected.
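The arithmetic is easy to sanity check. This Python sketch uses a hypothetical lane allocation (the x16 + x8 + x4 split is my assumption for illustration, not taken from any real Block Diagram) to show why the alleged 2:1 cost doesn't fit in the budget while a 1:1 cost does:

```python
# Lane budget check against a hypothetical board layout. The exact
# allocation below is an assumption, not a real block diagram.
ZEPPELIN_PCIE_LANES = 32

# Lanes the board exposes besides the four 10GBASE-KR links:
# e.g. one x16 slot, one x8 slot and one x4 link.
other_lanes = 16 + 8 + 4  # = 28

for cost_per_kr in (2, 1):  # AMD's alleged 2:1 cost vs the observed 1:1
    total = other_lanes + 4 * cost_per_kr
    verdict = "fits" if total <= ZEPPELIN_PCIE_LANES else "EXCEEDS budget"
    print(f"{cost_per_kr} PCIe lane(s) per KR link -> {total} total: {verdict}")
```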
LANE MULTIPLEXING: Again, due to the lack of public documentation, I have no idea whether the four 10G MACs' 10GBASE-KR lanes are multiplexed in a SERDES controller that otherwise does only PCIe, or whether they are entangled with the 8 lanes that are known to do both PCIe and SATA. Basically, the difference is whether the Zeppelin die's second integrated 16x PCIe Controller can be configured as 4x 10GBASE-KR + 8x SATA/8x PCIe + 4x pure PCIe, or only as 4x 10GBASE-KR + 4x SATA/4x PCIe + 8x pure PCIe. If the first option is not possible, I would consider that Intel NICs hold an advantage, because being able to use two OCuLink Ports to drive two 4x NVMe Drives, or 8 SATA Drives via breakout cables, is better than having just one OCuLink Port. Since the Zeppelin SoC integrated SATA Controller should have nothing to envy in discrete HBA SATA Controllers (as anything SATA is considered low end and has few optional features that aren't already baseline, there is no way that the Zeppelin integrated SATA is significantly worse than any discrete one), if the integrated 10G MACs feature set ends up being mediocre, it could be preferable to use these 4 multiplexed lanes as SATA and use the pure PCIe Lanes to throw in a PCIe NIC instead.
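To make the two possibilities concrete, this Python sketch simply enumerates both splits of the second 16x controller and checks that each one accounts for all 16 lanes:

```python
# The two candidate splits of the second integrated 16x controller,
# as described above. Which one the silicon supports is unknown.
configs = {
    "option A": {"10GBASE-KR": 4, "SATA/PCIe mux": 8, "pure PCIe": 4},
    "option B": {"10GBASE-KR": 4, "SATA/PCIe mux": 4, "pure PCIe": 8},
}

for name, lanes in configs.items():
    total = sum(lanes.values())
    assert total == 16, "each split must account for all 16 lanes"
    print(f"{name}: {lanes} -> {total} lanes")
```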
BANDWIDTH BOTTLENECK: Assuming that the 4 10GBASE-KR lanes are fully capable of working at the expected 1.25 GB/s rate each, so that they can saturate all four 10G links, they provide a higher effective bandwidth than going through a PCIe NIC, since each PCIe 3.0 lane provides only about 1 GB/s (8 GT/s with 128b/130b encoding, or roughly 0.985 GB/s usable). This means that with a Quad Port PCIe 3.0 4x NIC like the Intel X710-TM4, only up to three 10G links would simultaneously work at full speed, whereas four would face a bottleneck of around 1 GB/s less than required (four 10G links require 5 GB/s, while 4x PCIe 3.0 Lanes provide only around 4 GB/s). Going with a NIC with 8x PCIe Lane connectivity is absolutely overkill, and for a mere 10G Quad Port it even looks bad, since 8 PCIe 3.0 Lanes provide nearly 8 GB/s of bandwidth, which can comfortably feed six 10G links.
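Here are the same numbers worked out in a Python sketch, using the PCIe 3.0 effective per-lane rate after 128b/130b encoding:

```python
# Worked numbers for the bottleneck argument. PCIe 3.0 runs at 8 GT/s
# with 128b/130b encoding, so each lane moves ~0.985 GB/s of payload;
# a single 10G Ethernet link needs 1.25 GB/s.
PCIE3_LANE_GBPS = 8 * (128 / 130) / 8   # ~0.985 GB/s per lane
LINK_10G_GBPS = 10 / 8                  # 1.25 GB/s per 10G link

for lanes in (4, 8):
    bw = lanes * PCIE3_LANE_GBPS
    full_speed_links = int(bw // LINK_10G_GBPS)
    print(f"x{lanes} PCIe 3.0: {bw:.2f} GB/s -> "
          f"{full_speed_links} saturated 10G links")
```

Running it gives 3 saturated links for a x4 NIC and 6 for a x8 one, which is exactly why a x4 Quad Port is one link short and a x8 Quad Port is overkill.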