Just installed the Commell M2-225 2.5GbE adapter in my 5070s today, works great, and they're able to max out the entire 2.5g interface when vMotion-ing within ESXi.
I forget, how are people buying these?
I purchased them from Global American. Requested a quote and they got back to me pretty quickly.

M2-225 - 2.5 Gigabit Ethernet LAN M.2 Module - Global American
A-E Key 2230 M.2 PCIe Interface LAN Module - globalamericaninc.com
Hi, how much did they charge for 1? Thx much.

Well... I don't know what I expected from this experiment, but I put an Intel 900P 280GB in the PCIe slot and it seems to be maxing out FIO 4k RANDOM READ / WRITE at 70K IOPS. I guess the CPU is too slow to keep the drive fed, so I'll probably swap it with a NIC and use this as an appliance. I really wish there was a Zen 3 or 12th-gen mini PC with a HHFL PCIe slot.

Pls post the (full) fio command(s) you used, and their IOPS number(s).
> you're probably aware, but just in case you missed it, the pcie slot is pcie gen2 x4.

I figured that would only affect total bandwidth, not latency for random 4K reads and writes.
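(Rough numbers: PCIe Gen2 x4 is good for roughly 2 GB/s per direction, while 70K IOPS at 4K is only about 280 MB/s, so the link's bandwidth shouldn't be what limits these random 4K results; per-IO latency and CPU cost are the more likely culprits.)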
fio is a very well-engineered program. In fact, it is so flexible, with so many variables/options, that it is easy to (mistakenly) distort one's results. In anticipation of setting up a 5070ext similar to what you describe (but with a P1600X), I am anxious, and motivated, to identify (and hopefully resolve) any (suspected) bottlenecks in its performance with Optane drives.

> Pls post the (full) fio command(s) you used, and their IOPS number(s).

I'm not 100% sure what fio command PTS uses for its backend, but here is what I have:
nota@e5070-900p:/900p$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --rw=randwrite --numjobs=4 --group_reporting
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
...
fio-3.28
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][w=583MiB/s][w=149k IOPS][eta 00m:00s]
test: (groupid=0, jobs=4): err= 0: pid=14574: Mon Jul 3 13:48:20 2023
write: IOPS=159k, BW=620MiB/s (650MB/s)(16.0GiB/26437msec); 0 zone resets
bw ( KiB/s): min=595048, max=732488, per=100.00%, avg=636345.54, stdev=14824.24, samples=208
iops : min=148762, max=183122, avg=159086.42, stdev=3706.08, samples=208
cpu : usr=15.79%, sys=56.46%, ctx=4163537, majf=0, minf=44
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=0,4194304,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=620MiB/s (650MB/s), 620MiB/s-620MiB/s (650MB/s-650MB/s), io=16.0GiB (17.2GB), run=26437-26437msec
Disk stats (read/write):
nvme0n1: ios=0/4192182, merge=0/5, ticks=0/75796, in_queue=75796, util=99.66%
nota@e5070-900p:/900p$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --rw=randread --numjobs=4 --group_reporting
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
...
fio-3.28
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=847MiB/s][r=217k IOPS][eta 00m:00s]
test: (groupid=0, jobs=4): err= 0: pid=14584: Mon Jul 3 13:48:45 2023
read: IOPS=233k, BW=909MiB/s (953MB/s)(16.0GiB/18023msec)
bw ( KiB/s): min=864768, max=1065488, per=100.00%, avg=933031.89, stdev=22370.48, samples=142
iops : min=216192, max=266372, avg=233257.97, stdev=5592.62, samples=142
cpu : usr=25.67%, sys=74.18%, ctx=3131, majf=0, minf=298
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=4194304,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=909MiB/s (953MB/s), 909MiB/s-909MiB/s (953MB/s-953MB/s), io=16.0GiB (17.2GB), run=18023-18023msec
Disk stats (read/write):
nvme0n1: ios=4169667/3, merge=0/1, ticks=60592/1, in_queue=60592, util=99.48%
From my recent testing the best I've been able to get is still that 233k/159k read/write. I think the problem is that the CPU is bottlenecking the benchmark; you get better performance for the first few seconds while it's boosting. Performance tops out at 4 jobs and iodepth 3+; anything beyond that doesn't improve it. If you want to provide an fio command to test, I'd be happy to try it.

> seems to be maxing out FIO 4k RANDOM READ / WRITE at 70K IOPS

My, my ... [very :-J (tongue-in-cheek)]
First, you belittle your cute little box, saying it is weak ... (publicly, no less)
Then, you beat it, without mercy (all 4 cores, 64 threads each)
[achieving 230k/160k r/w 4c64t/4c64t]
You really need to apologize to it; they're very sensitive, you know.
(it might run away from home ... or worse)
[/:-J]

> If you want to provide an fio command to test, I'd be happy to try it.

Thanks, but I plan to unbox mine in a few days. I expect to get about
330k/200k [2c3t/1c3t]
We'll see ...
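If anyone wants to double-check the "tops out at 4 jobs and iodepth 3" observation on their own box, a quick sweep along these lines should do it. This is only a sketch in the same style as the onetst.sh/onetstw.sh scripts further down the thread; the script name (sweep.sh), the Test file name, the 20g size, the 10-second runtime, and the particular job/depth values are all just placeholders to adjust.

sweep.sh:
#!/bin/bash
# Quick-and-dirty numjobs/iodepth sweep for 4k random reads.
# Adjust the job/depth lists, file name, size, and runtime to taste.
for jobs in 1 2 4 8; do
  for depth in 1 3 8 64; do
    echo "=== numjobs=$jobs iodepth=$depth ==="
    fio --name=sweep --filename=Test --filesize=20g --rw=randread --bs=4k \
        --direct=1 --overwrite=0 --numjobs=$jobs --iodepth=$depth \
        --time_based=1 --runtime=10 --ioengine=libaio --gtod_reduce=1 \
        --group_reporting | grep 'IOPS='
  done
done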
> using a H1110 (LSI SAS9211-4I IT Mode, LSI SAS2004 controller) on the 5070 (extended) to connect additional drives. The drives will be powered (and mounted) externally but I'm trying to see what I may be missing from a compatibility perspective

From a digital/electronic perspective, it should be A-OK.
How will you get a SAS2 signal cable out of the box (whether miniSAS-to-miniSAS or mini-SAS-to-4x breakout)?
Will you commit the Option-port (the punchout below the PCIe slot bracket opening)?
And, even if so, CAN you get there (from the sff-8087 connector location on the card)?
I believe the 4e model would be more appropriate.
====
"Measure twice, cut/buy once."

> How will you get a SAS2 signal cable out of the box?

mini-SAS to 4x SATA.

> Will you commit the Option-port (the punchout below the PCIe slot bracket opening)?

If I keep the wyse & drives in separate cases, I was actually thinking of removing the parallel port and using that opening/exit instead, so the cable doesn't need a "sharp" turn under the card to reach the Option-port, which may make the run slightly longer.

> And, even if so, CAN you get there?

Good reminder.

> I believe the 4e model would be more appropriate.

Do you say that mostly because of the cable routing or something else? Do you know of a specific 4-external card that is PCIe x4?
> I was actually thinking of removing the parallel port and using that opening/exit instead ...

Another option: you can remove the Dell logo, which will leave a round hole in the front.

> Do you know of a specific 4-external card that is PCIe x4?

4i = internal, so any model with 4e or 8e is external.

Thanks!
> If I keep the wyse & drives in separate cases, ...

Hold that thought for a minute.

> I was actually thinking of removing the parallel port and using that ...

Yes, much better (but moot).

> If I end up moving everything to a single case, ...

Gack! That dichotomy of aesthetic is painful.

The clear winner is the Dell H200e, an LSI 9200-8e equivalent[**]; it uses the SAS2008. E.g., [Link]. It uses just 1-2 W more than the -4i, but gives easy growth to 8 drives. Now, here's why external is a win: instead of some "ghetto claptrap duopoly" ...

[**] Fret not the x8 card; the WyseExt slot is open-ended.

> [**] Fret not the x8 card; the WyseExt slot is open-ended

WOW! Oh my... I've been looking at the board and checking routing options and... completely missed that.

> The clear winner is the Dell H200e, an LSI 9200-8e equivalent; it uses the SAS2008.

That makes a lot of sense - thank you for the insight!
> From my recent testing the best I've been able to get is still that 233k/159k read/write ...

To which, I had replied:

> I plan to unbox mine in a few days. I expect to get about 330k/200k [2c3t/1c3t]. We'll see ...

I made this estimate based on tests in my workstation (HP Z4G4 w/W-2145 CPU**) with the SSD's PCIe slot (bios-)limited to Gen2, and factoring those results relative to the Passmark (single-thread) numbers (W2145:J5005) of 2609:1205. [Yes, Passmark isn't gospel; but "close enuf fer guvmnt work"?]
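(Back-of-envelope: that single-thread ratio is 1205/2609 ≈ 0.46, so the 330k/200k guess is just the Gen2-limited W-2145 results scaled down by a factor of about 0.46 — which, working backwards, would put that Z4G4 run somewhere in the ballpark of 700k/430k read/write IOPS.)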
onetst.sh:
#!/bin/bash
[ $# -ne 3 ] && echo Usage $0 numjobs iodepth BLOCKSIZE && exit 1
fio --name=onetst \
--filename=Test \
--filesize=20g --rw=randread --bs=$3 --direct=1 --overwrite=0 \
--numjobs=$1 --iodepth=$2 --time_based=1 --runtime=10 \
--ioengine=libaio --gtod_reduce=1 --group_reporting
onetstw.sh:
#!/bin/bash
[ $# -ne 3 ] && echo Usage $0 numjobs iodepth BLOCKSIZE && exit 1
fio --name=onetstw \
--filename=Test \
--filesize=20g --rw=randwrite --bs=$3 --direct=1 --overwrite=0 \
--numjobs=$1 --iodepth=$2 --time_based=1 --runtime=10 \
--ioengine=libaio --gtod_reduce=1 --group_reporting
@tiny:/mnk [ 1284 ] # onetst.sh 4 3 4k
onetst: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=3
...
fio-3.30
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=1088MiB/s][r=279k IOPS][eta 00m:00s]
onetst: (groupid=0, jobs=4): err= 0: pid=9918: Mon Jul 17 13:59:06 2023
read: IOPS=278k, BW=1087MiB/s (1140MB/s)(10.6GiB/10001msec)
bw ( MiB/s): min= 1086, max= 1090, per=100.00%, avg=1088.22, stdev= 0.45, samples=76
iops : min=278072, max=279150, avg=278584.74, stdev=115.00, samples=76
cpu : usr=31.44%, sys=68.18%, ctx=18199, majf=0, minf=50
IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=2782328,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=3
Run status group 0 (all jobs):
READ: bw=1087MiB/s (1140MB/s), 1087MiB/s-1087MiB/s (1140MB/s-1140MB/s), io=10.6GiB (11.4GB), run=10001-10001msec
Disk stats (read/write):
nvme0n1: ios=2754733/0, merge=0/0, ticks=41651/0, in_queue=41651, util=99.01%
@tiny:/mnk [ 1326 ] # onetstw.sh 3 3 4k
onetstw: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=3
...
fio-3.30
Starting 3 processes
Jobs: 3 (f=3): [w(3)][100.0%][w=475MiB/s][w=122k IOPS][eta 00m:00s]
onetstw: (groupid=0, jobs=3): err= 0: pid=10311: Mon Jul 17 15:06:53 2023
write: IOPS=121k, BW=474MiB/s (498MB/s)(4746MiB/10002msec); 0 zone resets
bw ( KiB/s): min=481344, max=489144, per=100.00%, avg=486326.32, stdev=717.49, samples=57
iops : min=120336, max=122286, avg=121581.58, stdev=179.37, samples=57
cpu : usr=13.81%, sys=61.09%, ctx=1286106, majf=0, minf=26
IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,1214906,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=3
Run status group 0 (all jobs):
WRITE: bw=474MiB/s (498MB/s), 474MiB/s-474MiB/s (498MB/s-498MB/s), io=4746MiB (4976MB), run=10002-10002msec
Disk stats (read/write):
nvme0n1: ios=0/1202028, merge=0/0, ticks=0/19113, in_queue=19113, util=99.01%
** very similar to YOUR Dell_5820 w/W-2140b. You could test the 900P there, bios-limiting the slot to Gen2 ... but, why?

I have a dual E5-2670v4 blade with a U.2 900P drive in it and I'm able to push 596k/454k with 4 jobs, iodepth 3, and 4k blocks. So we are nowhere near the drive limit, lol.