Workstation: 1× 2TB NVMe drive, 10Gbit NIC (Aquantia AQC107)
NAS: 8× 2TB SATA SSDs in RAID 5, 2× 10Gbit NIC (Intel), EPYC 7282 with 32GB RAM
Network: the NAS is connected directly to a Unifi 16XG switch on 2 bonded 10Gbit ports (so 20Gbit in theory); that switch uplinks to an XS508M 10Gbit switch, which connects to the workstation over a single link
I'm trying to figure out why transfers from our NAS to a workstation top out at ~200MB/s. I've already ruled out local disk speed on the workstation: copying the same files from one local directory to another runs at roughly the NVMe's theoretical max, which sounds reasonable to me. So the question now is whether the NAS can't read faster from its filesystem, or the network can't transfer faster.
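To separate the two suspects, one easy first check is raw TCP throughput between the two machines, independent of any filesystem. `iperf3` is the usual tool for this; if installing it is a hassle, a rough stand-in can be sketched in plain Python (the host and port below are placeholders, and sending zeros over a socket is only an approximation of what iperf3 measures):

```python
import socket
import time

CHUNK = 1 << 20  # 1 MiB send/receive buffer


def serve(port: int) -> None:
    """Accept one connection and discard everything it sends (run on the NAS)."""
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass


def blast(host: str, port: int, seconds: float = 5.0) -> float:
    """Send zeros as fast as possible for `seconds`; return observed MB/s.

    Run on the workstation, pointed at the NAS.
    """
    buf = b"\0" * CHUNK
    sent = 0
    with socket.create_connection((host, port)) as s:
        start = time.monotonic()
        while time.monotonic() - start < seconds:
            s.sendall(buf)
            sent += len(buf)
        elapsed = time.monotonic() - start
    return sent / elapsed / 1_000_000
```

If this already caps out near 200MB/s the problem is in the network path (bonding mode, flow control, a negotiated-down link); if it runs near line rate, the bottleneck is on the NAS side.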
The next step should be easy, but I couldn't find any decent tools for it: measuring filesystem performance on the NAS itself. Does anyone know of a tool that reads a bunch of real files into memory, /dev/null, or wherever? All I can find are tools that benchmark block devices, and I'm trying to gauge real-world performance on actual files.
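For the "read a bunch of real files" measurement, a dedicated tool may not even be needed: a short script that streams every file under a directory and reports MB/s does the job. A minimal sketch (the directory path is yours to supply; for a cold-cache number, run `sync; echo 3 > /proc/sys/vm/drop_caches` as root on the NAS first, otherwise you're benchmarking RAM):

```python
import os
import sys
import time

CHUNK = 1 << 20  # read in 1 MiB chunks


def read_tree(root: str) -> tuple[int, float]:
    """Stream every regular file under root; return (bytes_read, seconds)."""
    total = 0
    start = time.monotonic()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while chunk := f.read(CHUNK):
                        total += len(chunk)
            except OSError:
                pass  # skip unreadable files rather than abort the run
    return total, time.monotonic() - start


if __name__ == "__main__" and len(sys.argv) > 1:
    n, secs = read_tree(sys.argv[1])
    print(f"{n} bytes in {secs:.1f}s ({n / secs / 1_000_000:.0f} MB/s)")
```

Run it directly on the NAS against the same directory you copy over the network; that takes the network and SMB/NFS out of the picture entirely.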
Advice on a better solution than the dmraid RAID 5 is welcome, but first I'd like to understand why it isn't hitting the theoretical maximums before rearranging everything.
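For reference on those "theoretical maximums": with 8 SSDs in RAID 5, large sequential reads are striped over the data portion of the array, so even a conservative per-disk figure puts the array ceiling well above a single 10Gbit link, and the observed ~200MB/s is well below both. A back-of-envelope check (the ~500 MB/s per-disk number is an assumed typical SATA SSD sequential read, not a measured one):

```python
# Back-of-envelope ceilings; per-disk throughput is an assumption.
DISKS = 8
PER_DISK_MB_S = 500                        # assumed sequential read per SATA SSD
RAID5_READ = (DISKS - 1) * PER_DISK_MB_S   # data striped over n-1 disks' worth
LINK_10G = 10_000 / 8                      # 10 Gbit/s -> MB/s, before overhead

print(f"RAID 5 array read ceiling: ~{RAID5_READ} MB/s")
print(f"Single 10Gbit link ceiling: ~{LINK_10G:.0f} MB/s")
print(f"Observed ~200 MB/s is {200 / LINK_10G:.0%} of the single link")
```

So whatever is limiting things, it isn't the raw hardware on either end; 200MB/s is suspiciously close to what a well-tuned 2.5Gbit link (or a single saturated SATA channel) would deliver, which is why isolating disk vs. network is the right next step.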