I'm looking for a sanity check on a plan to upgrade my server. This is what I have:
E3-1270 v2 3.5 GHz
Supermicro X9SCM-F
2 x 8 GB DDR3
850 Evo 500GB (OS)
Adaptec 78165 RAID HBA
6 x 6 TB 7200 RPM Hitachi He6 in RAID-6
Intel X540-T2 10Gb/s
Server 2019
I'm frequently moving tens of thousands of files in 100GB+ chunks between this and my desktop machine, which is directly connected to it with the same X540 NIC. Ideally, I'd be able to run things from the server at performance levels more akin to my local SSDs.
This desire is constrained by two problems:
1. Sustained write speeds suck. For transfers under a certain size, it'll do 550 MB/s or so. Long transfers that saturate the write cache drop to 100-150 MB/s, or sometimes 350 MB/s; that inconsistency is a related problem. (A rough probe for this is sketched after this list.)
2. Network IOPS/latency sucks. If I share the 850 Evo and benchmark it from my desktop, it's probably 15x slower than native.
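A quick way to make the cliff in problem 1 visible, if anyone wants numbers beyond my hand-waving: a probe along these lines (path and size are placeholders for my setup; diskspd or fio would be the more rigorous tools):

```python
# Rough sustained-write probe: writes incompressible data in 64 MiB chunks and
# prints throughput per GiB written, which makes it obvious where the
# controller's write cache fills and the rate falls off.
# Placeholders to adjust: TARGET (a path on the RAID-6 array, or the UNC path
# of the share) and SIZE_GIB (keep it well above the server's 16 GB of RAM so
# the OS file cache isn't what gets measured).
import os
import time

TARGET = r"D:\write_probe.bin"   # placeholder path on the array
CHUNK = 64 * 1024 * 1024         # 64 MiB per write call
SIZE_GIB = 64                    # total data to write

buf = os.urandom(CHUNK)          # incompressible, so nothing flatters the numbers
written = 0
interval_bytes = 0
interval_start = time.perf_counter()

with open(TARGET, "wb", buffering=0) as f:
    while written < SIZE_GIB * 1024**3:
        n = f.write(buf)
        written += n
        interval_bytes += n
        if interval_bytes >= 1024**3:            # report once per GiB
            elapsed = time.perf_counter() - interval_start
            print(f"{written / 1024**3:4.0f} GiB: "
                  f"{interval_bytes / elapsed / 1024**2:7.1f} MiB/s")
            interval_bytes = 0
            interval_start = time.perf_counter()

os.remove(TARGET)
```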
My plan to address these is:
1. Upgrade the RAID controller to an 8885Q with MaxCache 4.0.
2. Add 4 x 200GB SAS 12Gb/s SSDs in RAID-0 for cache.
3. Upgrade the network cards to something that supports SMB Direct.
4. If necessary for PCIe lanes/bandwidth, swap the Intel platform for a Ryzen 3600 / ASRock X470D4U.
I'll consider this project a success if I can write to the array at 1 GB/s consistently and, for anything cached by the SSDs, read/write at 10K IOPS or better.
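To keep myself honest against that 10K IOPS number, a single-threaded random-read probe along these lines should be good enough for a ballpark check (path is a placeholder; the test file has to be much larger than RAM on both machines or it just benchmarks cache, and at queue depth 1 the 10K IOPS target works out to roughly 100 µs per 4 KiB read):

```python
# Rough QD1 random-read IOPS probe against a large pre-made file on the share
# (or directly on the array). Single-threaded, so it's really a latency test:
# hitting 10K IOPS here means averaging ~100 microseconds per 4 KiB read.
import os
import random
import time

TARGET = r"\\server\share\iops_probe.bin"   # placeholder: large file created beforehand
BLOCK = 4096
DURATION = 10                                # seconds to run

blocks = os.path.getsize(TARGET) // BLOCK
ops = 0
start = time.perf_counter()

with open(TARGET, "rb", buffering=0) as f:
    while time.perf_counter() - start < DURATION:
        f.seek(random.randrange(blocks) * BLOCK)
        f.read(BLOCK)
        ops += 1

elapsed = time.perf_counter() - start
print(f"{ops / elapsed:,.0f} IOPS at QD1, {elapsed / ops * 1e6:.0f} us average latency")
```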
Would my plan achieve this? If so, what are some economical 10Gb+ NICs that satisfy the SMB Direct requirement and don't have inordinate CPU overhead or configuration requirements? While I'm using Cat 6 now, SFP+ or InfiniBand cables would be fine too. No switches here to care about.
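Whatever RDMA NICs I end up with, my plan for confirming SMB Direct actually engages (rather than silently falling back to plain SMB Multichannel) is to check the standard RDMA/SMB cmdlets from the desktop while a transfer is running, e.g.:

```python
# Run on the desktop during an active transfer to confirm the NIC reports RDMA
# and that the live SMB connection is actually using it.
import subprocess

for cmdlet in (
    "Get-NetAdapterRdma",              # is RDMA enabled on the adapter?
    "Get-SmbClientNetworkInterface",   # does the SMB client see it as RDMA capable?
    "Get-SmbMultichannelConnection",   # is the current connection using RDMA?
):
    print(f"### {cmdlet}")
    subprocess.run(["powershell", "-NoProfile", "-Command", cmdlet], check=False)
```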
Thanks for any thoughts.