Maximize Data Transfers Between Two NASes (with 10GbE)

eptesicus

Member
Jun 25, 2017
95
13
8
33
I have two NAS builds, each with an H310 HBA (IT mode), an ASRock E3C224 motherboard, a Xeon E3-1241 v3 CPU, 32GB RAM, a Chenbro NR40700 chassis, a Mellanox ConnectX-3 CX312A 10GbE NIC, and 24x HGST NAS 4TB drives (software RAID6). I just migrated from a Supermicro SC846 chassis with the SAS2 backplane, which exhibited the same problem I'm having.

When transferring data between the servers (10GbE networking through a Ubiquiti ES-16-XG switch), SMB transfers on OpenMediaVault top out at about 480 MB/s, when, given the setup, I'd expect closer to 700 MB/s. iperf between the servers pretty much saturates the 10GbE link, and even a local copy between the RAID6 array and a ramdisk on each server doesn't get much faster than 480 MB/s.
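For context, here's a back-of-the-envelope on where 480 MB/s sits relative to the link; the overhead factor is a ballpark assumption, not a measurement:

```python
# Rough ceiling estimates for SMB over 10GbE. The efficiency factor is an
# assumed value for TCP/IP framing overhead, not a measured one.
line_rate_MBps = 10_000 / 8          # 10 Gbit/s = 1250 MB/s raw
tcp_efficiency = 0.94                # assumed framing/overhead factor
tcp_ceiling = line_rate_MBps * tcp_efficiency
observed = 480                       # MB/s, the plateau described above

print(f"TCP payload ceiling : ~{tcp_ceiling:.0f} MB/s")
print(f"Observed SMB rate   : {observed} MB/s (~{observed * 8 / 1000:.1f} Gbit/s on the wire)")
print(f"Headroom            : ~{tcp_ceiling - observed:.0f} MB/s")
```

Since the local array-to-ramdisk copy also stops around 480 MB/s, the shortfall looks like it's below the network layer.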

Since iperf saturates the 10GbE link, I doubt it's the Mellanox NICs, but could they still need tuning?
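For what it's worth, the usual first-pass TCP tuning for 10GbE is larger socket buffers. These sysctl values are common starting points people suggest, not tested recommendations for this hardware, and since iperf already fills the pipe they are unlikely to be the limiter here:

```ini
# /etc/sysctl.d/10gbe.conf -- example buffer sizes often suggested for 10GbE
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Load them with `sysctl --system` (or `sysctl -p` against the file).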

I also doubt it's the backplanes, since I saw the same speeds on both my SM SAS2 backplane and my new Chenbro backplanes. I should note that each SAS port on my H310 connects to a CB3 port on the Chenbro backplanes.
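Since both boxes plateau at the same figure, one thing that may be worth ruling out is the H310's negotiated PCIe link. It's a PCIe 2.0 x8 card, and the rough per-lane math (about 500 MB/s effective per Gen2 lane after 8b/10b encoding) puts a link stuck at x1 suspiciously close to the ~480 MB/s observed. A quick sketch of the ceilings:

```python
# Effective PCIe 2.0 bandwidth is roughly 500 MB/s per lane
# (5 GT/s with 8b/10b encoding, before protocol overhead).
PER_LANE_MBPS = 500

for lanes in (1, 2, 4, 8):
    print(f"PCIe 2.0 x{lanes}: ~{PER_LANE_MBPS * lanes} MB/s")
```

`lspci -vv` on the HBA shows the negotiated width on its `LnkSta:` line if you want to check.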

I'd like to figure out if I can optimize data speeds, but I'm also curious if I'm going to encounter the same issue when I move to FreeNAS/ZFS in a couple months with 10TB drives.

Ideas?
 

TeeJayHoward

Active Member
Feb 12, 2013
374
109
43
If copies from a local RAM disk to the array (or vice versa) are capping out at 480MB/s, then the network has nothing to do with it. (If I'm misunderstanding, and the RAM disk was on a different machine, sorry!) How many and what size files are you moving? 100x 100MB files might not saturate the interface while a single 100GB file would. Have you done any SMB tuning?

My next step when troubleshooting something like this would be to pop open iostat and top while a transfer is going and see what's maxing out.
 

eptesicus

Member
Jun 25, 2017
95
13
8
33
If copies from a local RAM disk to the array (or vice versa) are capping out at 480MB/s, then the network has nothing to do with it. (If I'm misunderstanding, and the RAM disk was on a different machine, sorry!) How many and what size files are you moving? 100x 100MB files might not saturate the interface while a single 100GB file would. Have you done any SMB tuning?

My next step when troubleshooting something like this would be to pop open iostat and top while a transfer is going and see what's maxing out.
Right now, I'm transferring thousands of files of varying sizes, averaging between 10GB and 25GB, and still maxing out around 480 MB/s.

No SMB tuning has taken place.
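If you do want to try SMB tuning, these are common smb.conf knobs people adjust for 10GbE; treat the values as starting points to test, not known-good settings for this setup:

```ini
# /etc/samba/smb.conf -- [global] additions; example starting points only
[global]
   use sendfile = yes
   aio read size = 16384
   aio write size = 16384
   socket options = TCP_NODELAY SO_RCVBUF=524288 SO_SNDBUF=524288
   # Needs Samba 4.4+ and was long marked experimental on Linux:
   # server multi channel support = yes
```

That said, since the local array-to-ramdisk copy also caps at ~480 MB/s, SMB tuning alone probably won't move the number.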

Top during large file transfer: attached

Output of iostat during large file transfer:
# iostat
Linux 4.19.0-0.bpo.4-amd64 (NAS01) 06/13/2019 _x86_64_ (8 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
1.55 0.00 1.84 0.39 0.00 96.23

Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sdy 3.83 31.70 20.90 69363277 45742740
sda 7.88 3710.98 90.94 8120780929 199000403
sdc 7.89 3711.12 91.57 8121084519 200384891
sdb 7.88 3710.98 91.30 8120779068 199784779
sdi 7.89 3711.23 91.61 8121343946 200474491
sdg 7.89 3710.88 91.63 8120573813 200512487
sdm 7.90 3711.07 92.81 8120973977 203106503
sdh 7.89 3711.28 91.82 8121436156 200926579
sdo 7.90 3710.90 92.44 8120608197 202284879
sds 7.90 3711.20 92.39 8121276383 202173391
sdn 7.90 3711.17 93.06 8121205479 203644699
sdp 7.90 3711.25 91.72 8121385958 200722975
sdt 7.90 3711.16 91.51 8121174431 200262611
sdv 7.90 3711.15 92.28 8121167086 201933235
sdu 7.90 3711.17 92.45 8121196908 202303299
md0 125.83 77801.50 1945.63 170254076796 4257642700
sdk 7.89 3710.99 92.57 8120801105 202581331
sdl 7.89 3711.03 92.04 8120905896 201401919
sdw 7.88 3710.83 90.60 8120468283 198257603
sde 7.88 3711.12 91.27 8121098517 199719023
sdr 7.90 3711.23 92.36 8121326380 202120787
sdd 7.88 3711.07 91.86 8120978700 201029243
sdf 7.89 3711.10 91.74 8121049644 200760907
sdx 7.91 3711.16 91.34 8121188508 199884931
sdj 7.90 3711.11 92.70 8121078444 202847207
sdq 7.89 3711.07 90.66 8120974198 198394687
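One caveat on that output: `iostat` with no interval argument reports averages since boot, not the rate during the transfer, so it mostly confirms the md0 members are being read evenly rather than showing the live bottleneck (`iostat -xm 2` during a copy gives live per-device rates and %util). A quick read of the posted averages:

```python
# Sanity-check the since-boot averages from the iostat listing above.
md0_read_kBps = 77801.50        # aggregate read rate reported for md0
member_read_kBps = 3711.0       # typical per-member rate (sda..sdx)

ratio = md0_read_kBps / member_read_kBps
print(f"md0 average read ~= {md0_read_kBps / 1024:.0f} MB/s "
      f"(~{ratio:.0f}x a single member)")
```

The ~76 MB/s figure is just the long-run average, which is why a live sample during a transfer is the more useful view.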
 

Attachments