I'd imagine the chiplets will be the same, as that's one of the biggest benefits to NOT using a monolithic die for Ryzen. By sharing the same chiplet, the sellable yields will be higher. Chips not suitable for Epyc can be binned down the stack and sold as Ryzen parts.
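The yield argument can be made concrete with a simple Poisson defect model (Y = exp(-D0 * A)): smaller dies catch fewer defects each, so a larger fraction of them are sellable. All the numbers below are hypothetical, chosen only to illustrate the effect, not actual TSMC/AMD figures.

```python
import math

# Illustrative Poisson yield model: Y = exp(-D0 * A).
# Defect density and die areas are assumptions for illustration only.
D0 = 0.1          # defects per cm^2 (assumed)
A_MONO = 3.0      # cm^2, hypothetical large monolithic die
A_CHIPLET = 0.75  # cm^2, hypothetical chiplet at 1/4 the area

def yield_rate(area_cm2: float, d0: float = D0) -> float:
    """Fraction of defect-free dies under a Poisson defect model."""
    return math.exp(-d0 * area_cm2)

y_mono = yield_rate(A_MONO)      # ~0.74
y_chip = yield_rate(A_CHIPLET)   # ~0.93
print(f"monolithic yield: {y_mono:.2%}, per-chiplet yield: {y_chip:.2%}")
```

Even before binning, the smaller die wins on defect-free fraction; binning partial dies into lower SKUs widens the gap further.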
It's also effectively...
Here's some data. I have a SuperMicro server with 2x Intel Xeon Gold 6142 (16c @ 2.6 GHz) with 384GB of memory. I have a 24-port LSI HBA with 8x Read Intensive SATA drives and 8x Write Intensive SATA drives (no expander). I also have 4x Read Intensive NVMe drives. (All drives are enterprise SSDs...
Just to be clear, this is storage spaces direct--the distributed storage solution that Microsoft is doing these days. It's a 4-node configuration (2-socket Intel 6142 or 6148 with 384GB memory).
Each node is using 8x read-intensive Enterprise SATA SSDs.
Networking is 100Gb Mellanox for RDMA...
32 drives (8 per node in a 4-node configuration) is a reasonably common configuration. Most customers I talk to are deploying hybrid configurations with 2-4 SSD cache drives (NVMe, SAS, or SATA) and 8-12 7.2k HDDs for capacity (per node).
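To put rough numbers on a hybrid layout like that, here is a quick capacity sketch. The drive size, drive count, and the 1/3 efficiency of a three-way mirror are assumptions for illustration; real S2D deployments also set aside reserve capacity, and cache drives don't contribute usable capacity.

```python
# Rough capacity math for a hypothetical 4-node hybrid S2D cluster.
# All sizes are assumptions for illustration.
NODES = 4
HDDS_PER_NODE = 10          # within the 8-12 capacity-drive range mentioned
HDD_TB = 8                  # hypothetical 7.2k HDD size
MIRROR_EFFICIENCY = 1 / 3   # three-way mirror: one usable copy of three

raw_tb = NODES * HDDS_PER_NODE * HDD_TB
usable_tb = raw_tb * MIRROR_EFFICIENCY
print(f"raw: {raw_tb} TB, usable (3-way mirror): {usable_tb:.1f} TB")
```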
Micron does their 5100 Enterprise SATA drive in an M.2 form factor, with full power-loss protection and the other enterprise features:
Micron 5100 PRO 1.88 TB Internal Solid State Drive - SATA - M.2 2280 (MTFDDAV1T9TCB-1AR1ZA) - PCNation.com
I do performance testing on all-flash SDS solutions on Microsoft platforms. Mirrors in Storage Spaces with flash scale to higher performance than hardware RAID options using the same devices (SAS or SATA -- I haven't tested NVMe hardware RAID, but I would expect it to limit performance compared to what...
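One intuition for why mirrored Storage Spaces can outrun a hardware RAID card with the same SSDs: the RAID controller is a single choke point in the data path, while the software mirror talks to every device directly. A toy bottleneck model, with every figure invented for illustration rather than measured:

```python
# Toy bottleneck model: aggregate read IOPS is capped either by the sum
# of the drives or by a single controller. All figures are invented
# for illustration, not measurements.
DRIVE_READ_IOPS = 90_000       # hypothetical enterprise SATA SSD
N_DRIVES = 8
CONTROLLER_CAP_IOPS = 400_000  # hypothetical RAID-card ceiling

drives_total = DRIVE_READ_IOPS * N_DRIVES          # what the media can deliver
hw_raid = min(drives_total, CONTROLLER_CAP_IOPS)   # capped at the controller
sds_mirror = drives_total                          # no single controller in the path
print(f"hardware RAID: {hw_raid:,} IOPS, mirrored spaces: {sds_mirror:,} IOPS")
```

The real gap depends on the specific controller, queue depths, and CPU overhead, but the shape of the argument is the same: once the drives can out-deliver the card, the card is the limit.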