Yessir, vSAN loves it some 10G data planes to chew on :-D 1GigE will scrape ya by, but it's gotta be dedicated in that case; too much other competing traffic otherwise, unless ya use SIOC to shape/limit the other traffic types so they don't crowd out the vSAN traffic.
Can your current disk I/O config crank out enough IOPS/throughput to saturate a 1 GbE connection? Looks like it, if you get everything tuned and playing nice, which leads me to my final question: do you have 10G in this environment? If not, the max theoretical throughput I believe you will see is 125 MB/s (take some off for overhead) across any disk subsystem, since you'll be network limited testing system to system across GigE.
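To put a number on that ceiling, here's the arithmetic behind the 125 MB/s figure; the ~6% overhead factor is a rough assumption for standard 1500-byte MTU framing, not a measured value:

```python
# Back-of-the-envelope ceiling for a 1 GbE link.
link_bits_per_sec = 1_000_000_000            # 1 Gbit/s line rate
raw_mb_per_sec = link_bits_per_sec / 8 / 1_000_000
print(raw_mb_per_sec)                        # 125.0 MB/s raw

# Assume roughly 6% lost to Ethernet/IP/TCP framing at a 1500-byte
# MTU -- a ballpark figure, not a measured one.
usable_mb_per_sec = raw_mb_per_sec * 0.94
print(round(usable_mb_per_sec, 1))           # ~117.5 MB/s usable
```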
Well, I found out something interesting. Within the ESXi 6.0 host, when I run the iperf test, I can't break 5 Gbit/s.
Here is the loopback test:
[root@ESXi-2:/vmfs/volumes/530b77a8-eb4863d0-5cbb-0010185a2572/iperf/iperf_2.0.2-4_amd64] time ./iperf -c127.0.0.1
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 47.8 KByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 13170 connected with 127.0.0.1 port 5001
[ 3] 0.0-10.0 sec 4.92 GBytes 4.22 Gbits/sec
real 0m 10.03s
user 0m 0.00s
sys 0m 0.00s
[root@ESXi-2:/vmfs/volumes/530b77a8-eb4863d0-5cbb-0010185a2572/iperf/iperf_2.0.2-4_amd64]
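A small aside on reading that output: iperf 2 reports the transfer in binary GBytes but the rate in decimal Gbits, so the two numbers above are consistent even though 4.92 × 8 / 10 looks like only ~3.9:

```python
# iperf 2 counts bytes in binary units (1 GByte = 2**30 bytes) but
# bits in decimal units (1 Gbit = 10**9 bits).
transferred_bytes = 4.92 * 2**30      # the reported "4.92 GBytes"
elapsed_secs = 10.0
gbits_per_sec = transferred_bytes * 8 / elapsed_secs / 1e9
print(round(gbits_per_sec, 2))        # ~4.23, matching the reported 4.22 Gbits/sec
```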
Now the same test with a 1 MB window:
[root@ESXi-2:/vmfs/volumes/530b77a8-eb4863d0-5cbb-0010185a2572/iperf/iperf_2.0.2-4_amd64] time ./iperf -c127.0.0.1 -w 1M
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 1.01 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 37843 connected with 127.0.0.1 port 5001
[ 3] 0.0-10.0 sec 5.04 GBytes 4.33 Gbits/sec
real 0m 10.03s
user 0m 0.00s
sys 0m 0.00s
[root@ESXi-2:/vmfs/volumes/530b77a8-eb4863d0-5cbb-0010185a2572/iperf/iperf_2.0.2-4_amd64]
Same 1 MB test against the non-loopback address via the virtual switch:
[root@ESXi-2:/vmfs/volumes/530b77a8-eb4863d0-5cbb-0010185a2572/iperf/iperf_2.0.2-4_amd64] time ./iperf -c172.100.1.11 -w 1M
------------------------------------------------------------
Client connecting to 172.100.1.11, TCP port 5001
TCP window size: 1.01 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ 3] local 172.100.1.11 port 40461 connected with 172.100.1.11 port 5001
[ 3] 0.0-10.0 sec 5.04 GBytes 4.33 Gbits/sec
real 0m 10.04s
user 0m 0.00s
sys 0m 0.00s
[root@ESXi-2:/vmfs/volumes/530b77a8-eb4863d0-5cbb-0010185a2572/iperf/iperf_2.0.2-4_amd64]
Now I repeat the same three tests above inside the VM that will run the ZFS file system.
Loopback tests with 64 KB and 1 MB windows:
root@napp-it-15b:/usr/local/bin# time ./iperf -c127.0.0.1 -w 64k
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 64.0 KByte
------------------------------------------------------------
[ 3] local 127.0.0.1 port 42478 connected with 127.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 16.7 GBytes 14.3 Gbits/sec
real 0m10.027s
user 0m0.764s
sys 0m9.032s
root@napp-it-15b:/usr/local/bin# time ./iperf -c127.0.0.1 -w 1M
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[ 3] local 127.0.0.1 port 40472 connected with 127.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 29.8 GBytes 25.6 Gbits/sec
real 0m10.259s
user 0m0.804s
sys 0m9.143s
root@napp-it-15b:/usr/local/bin#
Same 1 MB test against the non-loopback address via the virtual switch:
root@napp-it-15b:/usr/local/bin# time ./iperf -c172.100.1.25 -w 1M
------------------------------------------------------------
Client connecting to 172.100.1.25, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[ 3] local 172.100.1.25 port 62920 connected with 172.100.1.25 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 27.9 GBytes 23.9 Gbits/sec
real 0m10.921s
user 0m0.807s
sys 0m9.130s
root@napp-it-15b:/usr/local/bin#
And this test is from the ESXi host to the ZFS VM via the virtual switch, with no physical NIC attached:
------------------------------------------------------------
Client connecting to 172.100.1.25, TCP port 5001
TCP window size: 1.01 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ 3] local 172.100.1.11 port 51338 connected with 172.100.1.25 port 5001
[ 3] 0.0-10.0 sec 4.92 GBytes 4.23 Gbits/sec
real 0m 10.03s
user 0m 0.00s
sys 0m 0.00s
[root@ESXi-2:/vmfs/volumes/530b77a8-eb4863d0-5cbb-0010185a2572/iperf/iperf_2.0.2-4_amd64]
I can't run it from the ZFS VM as client to the ESXi host as server; it keeps failing. The part that puzzles me is why the iperf results for the ESXi host are so low.
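Pulling the numbers from the transcripts above side by side shows how consistent the gap is (just a summary of the results already posted, nothing new measured):

```python
# iperf results copied from the transcripts above, in Gbit/s.
esxi_gbits = [4.22, 4.33, 4.33, 4.23]   # ESXi userworld: all four tests
zfs_vm_gbits = [14.3, 25.6, 23.9]       # same iperf binary inside the ZFS VM
print(max(esxi_gbits))                   # 4.33 -- ESXi never clears ~4.3 Gbit/s
print(round(max(zfs_vm_gbits) / max(esxi_gbits), 1))  # ~5.9x faster in the guest
```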
As far as maxing out my NIC, I don't think it was ever maxed out; when I ran the vSAN test there was 1 SSD and three HDDs in each host.
I just got hold of some InfiniBand cards and am waiting for them to arrive so I can test again. In the meantime, this is what my dd bench looks like in the ZFS VM:
Memory size: 65536 Megabytes
write 204.8 GB via dd, please wait...
time dd if=/dev/zero of=/XCCESSdp/dd.tst bs=32768000 count=6250
6250+0 records in
6250+0 records out
204800000000 bytes transferred in 130.698642 secs (1566963489 bytes/sec)
real 2:10.7
user 0.0
sys 2:10.4
204.8 GB in 130.7s = 1566.95 MB/s Write
wait 40 s
read 204.8 GB via dd, please wait...
time dd if=/XCCESSdp/dd.tst of=/dev/null bs=32768000
6250+0 records in
6250+0 records out
204800000000 bytes transferred in 58.179951 secs (3520112946 bytes/sec)
real 58.2
user 0.0
sys 58.1
204.8 GB in 58.2s = 3518.90 MB/s Read
So, bottom line: I don't think my disks are the bottleneck; something in the ESXi host is the issue. I tried multiple different ESXi hosts and they all produced pretty much the same results. I even went as far as dropping down to ESXi 5.5... Unless I did something wrong, the iperf results for the ESXi hosts pretty much disappoint me.