These are first-generation ConnectX devices, from 2009. Those have CX3-level connectors, not the smoother CX4/5-level ones, which is why I don't think they are CX4. Also very, very cheap for CX4.
No, it's the weirdness of the problem - which might be one of my weird problems.
Individually or all? I think it was just one which was not working properly.
Hm, yeah, it does not look as easy as it should.
Got it, no worries. Just a case of me being a know-it-all and commenting on obvious stuff.
Lots of (relatively) cheap 40G options out there nowadays
Glad you are getting there
Many, I assume. Most will not be measuring raw network performance, though, unless they are troubleshooting or dabbling with new network cards; I agree the ones who do run it should be aware.
- There is a bit of an interesting point here, though.
- How many people may just run iperf (which doesn't show packet loss directly with the most often referenced commands), get a good enough speed, and call it a day?
- I think there are three pieces to this puzzle: (a) iperf3 (as it shows retransmissions and iperf does not); (b) the iperf/iperf3 -u switch (UDP), which directly shows packet loss; and (c) monitoring esxtop for actual packet loss (as packet loss shown in iperf could simply be a reporting issue). Specific to iperf3, there is a bug with how -u loss is shown (as an example). Example commands below.
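To make that concrete, these are the kinds of invocations I mean (the server address and duration are just placeholders):

iperf3 -s                              # on the receiving end
iperf3 -c 10.0.0.2 -t 30               # TCP; the Retr column shows retransmissions
iperf3 -c 10.0.0.2 -u -b 10G -t 30     # UDP; reports lost/total datagrams directly
                                       # (-b matters: iperf3 UDP defaults to ~1 Mbit/s)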
You mean absolute hard facts on the advantages of 40/100G vs 10G for my personal setup?
- May I ask the dumb question: What does 40G or 100G actually get you? You can saturate 10G on ZFS without having to sell a kidney; however, I think making it to 20G would be quite impressive, difficult, and expensive.
- For me, it seems that 2 x 10G is more than enough: 10G for storage and 10G for vMotion / VMs / Management is plenty, with room to spare. But I don't have the most amazing lab in the world, so unless you have a cluster beyond 3 hosts, I really don't see a need for a 40G fabric. (But just because I don't see it doesn't mean it isn't there.)
- Thanks for the kind words.
Absolutely nothing, as it is running now.
What it should get me, though: if at some point RDMA etc. is running, and if all my weird issues were resolved, it would allow the jump from ~1,000 MB/s to ~3,000 MB/s, the theoretical maximum of the underlying hardware (NVMe).
Do I really *need* that? No.
Well, I assume that if I were to reach 3 GB/s (if tech like NVMe RAID allowed it), I'd probably be looking for more... and more... and maybe more.
And you shouldn't forget that DCs are not used by single persons the way many homelabs are. So it's all about density/aggregation...
Additionally, recently I think I've realized that you can only optimize for one objective: (a) cost efficiency or (b) stability / maximum uptime.
It just takes a lot of drives. Once you give up on caring about power draw and noise, a lot of fun stuff becomes possible. The ZFS array I built last year can sustain ~25 Gbps no problem, all spinning drives (other than the SLOG). Don't think I spent more than $2k total: an R720 with 2x MD1200s full of drives. Massive and fast, just how I like it.
fio randwrite:
Run status group 0 (all jobs):
WRITE: io=41121MB, aggrb=2466.8MB/s, minb=2466.8MB/s, maxb=2466.8MB/s, mint=16673msec, maxt=16673msec
fio randread:
Run status group 0 (all jobs):
READ: io=41121MB, aggrb=2664.1MB/s, minb=2664.1MB/s, maxb=2664.1MB/s, mint=15437msec, maxt=15437msec
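The exact fio invocation isn't shown here, but a command along these lines (job parameters and the test-file path are my guesses, not necessarily the original ones) produces that summary format; the ~41 GB of total io suggests a similar working set:

fio --name=randwrite --rw=randwrite --bs=1M --size=40G \
    --ioengine=libaio --direct=1 --iodepth=16 --filename=/tank/testfile

Swap --rw=randwrite for --rw=randread for the read run.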
Very interesting - but that is async speed, is it not? Sync speed should be limited by the Optane drive (limited as in ~600 MB/s vs ~2,500 MB/s).
Write Speed - MB/s (1MB recordsize, compression=off)

                      No slog   1x Optane 900p   2x Optane 900p   2x Optane 900p
                                (20G vDisk)      (20G vDisks)     (20G vDisks, mirrored)
                      async     sync             sync             sync
RaidZ2  6x2x10.0 TB   1043      734              806              594
RaidZ   3x4x10.0 TB   1149      747              856              611
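For context, the dataset settings and SLOG layouts tested above map onto commands like these (the pool name "tank" and device paths are placeholders, not the actual setup):

zfs set recordsize=1M tank
zfs set compression=off tank
zpool add tank log /dev/disk/by-id/optane0                                  # 1 Optane 900p
zpool add tank log /dev/disk/by-id/optane0 /dev/disk/by-id/optane1          # 2x, striped
zpool add tank log mirror /dev/disk/by-id/optane0 /dev/disk/by-id/optane1   # 2x, mirrored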
The intuition that striping across mirrored pairs is faster than raidz1 or raidz2 does not always hold - especially not for read bandwidth and, more surprisingly, not even for rewrite bandwidth.
- Sure, I get that 16 drives in RAID 0 = 3,200 MB/s, but that is RAID 0.
- As soon as you add fault tolerance, your pool takes a massive hit: in RAID 5, those same 16 drives = 1,280 MB/s.
- It isn't until you add 24 more drives (40 total) that you hit 3,200 MB/s again (the implied arithmetic is spelled out below).
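Spelling out the arithmetic those figures imply (the ~200 MB/s per-drive number is my assumption; the ~40% RAID 5 factor simply falls out of the quoted values):

RAID 0:  16 drives x 200 MB/s        = 3,200 MB/s
RAID 5:  16 drives x 200 MB/s x 0.4  = 1,280 MB/s
RAID 5:  40 drives x 200 MB/s x 0.4  = 3,200 MB/s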
                               capacity   write       rewrite     read
24x 4TB, 12 striped mirrors    45.2 TB    696 MB/s    144 MB/s    898 MB/s
24x 4TB, raidz                 86.4 TB    567 MB/s    198 MB/s    1304 MB/s
24x 4TB, raidz2                82.0 TB    434 MB/s    189 MB/s    1063 MB/s