Very shortly (and VERY luckily) I'll be taking delivery of 4 custom-built AMD EPYC 9334 (SP5) systems, each with 256GB DDR5-4800, a Tyan S8050GM4NE-2T board, and 10x Samsung 990 Pro 2TB (2 on board, 8 on an ASUS PCIe 5.0 card via motherboard bifurcation). With that hardware, I'd like to set up a vSAN (or similar) environment to maximize the performance of these 4 servers for a medium-size business. The workloads are an Exchange VM, a DC, a file server, several proprietary application servers (database heavy), and an RDS collection.
As of this moment, my bottleneck will be the interconnect switch and the NICs (both 10Gbit), though I could easily aggregate links and add more ports via the PCIe slots.
My primary concern is performance: I'd really like to take full advantage of the hardware, and from what I've read, getting the setup wrong can crush throughput and pretty much render an all-flash solution like this useless. The thing is, I've seen a lot of conflicting "do this, do that" advice, and most of the threads deal with enterprise-grade storage. While I'm fairly certain the 990s will be more than adequate, I'd like to make sure that whichever direction I choose is optimal.
So, I'm open to any and all recommendations/guidance.
Thanks in advance!