(US) 90 dollar Wyse 5070 Thin client/mini-server?


SourceQuality

New Member
Jun 30, 2023
2
5
3
  • Like
Reactions: Samir and Fritz

applepi

Member
Jun 15, 2013
86
67
18
Well... I don't know what I expected from this experiment, but I put an Intel 900P 280GB in the PCIe slot and it seems to max out fio 4k random read/write at about 70K IOPS. I guess the CPU is too slow to keep the drive fed, so I'll probably swap it for a NIC and use this as an appliance. I really wish there were a Zen 3 or 12th-gen mini PC with a half-height PCIe slot.
 
  • Like
Reactions: Samir

UhClem

just another Bozo on the bus
Jun 26, 2012
439
253
63
NH, USA
Well... I don't know what I expected from this experiment, but I put an Intel 900P 280GB in the PCIe slot and it seems to max out fio 4k random read/write at about 70K IOPS. I guess the CPU is too slow to keep the drive fed, so I'll probably swap it for a NIC and use this as an appliance. I really wish there were a Zen 3 or 12th-gen mini PC with a half-height PCIe slot.
Pls post the (full) fio command(s) you used, and their IOPS number(s).

fio is a very well-engineered program. In fact, it is so flexible, with so many variables/options, that it is easy to (mistakenly) distort one's results.

In anticipation of setting up a 5070ext similar to what you describe (but with a P1600X), I am anxious, and motivated, to identify (and hopefully resolve) any (suspected) bottlenecks in its performance with Optane drives.
 
  • Like
Reactions: Samir

abq

Active Member
May 23, 2015
675
204
43
  • Like
Reactions: Samir

applepi

Member
Jun 15, 2013
86
67
18
Pls post the (full) fio command(s) you used, and their IOPS number(s).

fio is a very well-engineered program. In fact, it is so flexible, with so many variables/options, that it is easy to (mistakenly) distort one's results.

In anticipation of setting up a 5070ext similar to what you describe (but with a P1600X), I am anxious, and motivated, to identify (and hopefully resolve) any (suspected) bottlenecks in its performance with Optane drives.
I'm not 100% sure what fio command PTS uses for its backend, but here is what I have:


If you want to provide an FIO command to test I'd be happy to try it.

At PCIe 2.0 x4 (4 lanes × 5 GT/s with 8b/10b encoding ≈ 2.0 GB/s) I'd expect we should be closer to 2.0 GB/s theoretical and around 1.7 GB/s real-world.

Edit: it looks like it's a CPU-bound issue, since adding more jobs keeps improving performance until we run out of threads.
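A quick sweep along these lines shows the scaling (just a sketch; the filename, size, and runtime are arbitrary):

Bash:
# Sweep numjobs at a fixed queue depth and keep only the IOPS summary lines
for nj in 1 2 3 4; do
    echo "numjobs=$nj"
    fio --name=sweep --filename=test --size=4G --bs=4k --rw=randread \
        --direct=1 --ioengine=libaio --iodepth=64 --numjobs=$nj \
        --time_based=1 --runtime=15 --group_reporting | grep -E '^ *read'
done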

Bash:
nota@e5070-900p:/900p$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --rw=randwrite --numjobs=4 --group_reporting
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
...
fio-3.28
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][w=583MiB/s][w=149k IOPS][eta 00m:00s]
test: (groupid=0, jobs=4): err= 0: pid=14574: Mon Jul  3 13:48:20 2023
  write: IOPS=159k, BW=620MiB/s (650MB/s)(16.0GiB/26437msec); 0 zone resets
   bw (  KiB/s): min=595048, max=732488, per=100.00%, avg=636345.54, stdev=14824.24, samples=208
   iops        : min=148762, max=183122, avg=159086.42, stdev=3706.08, samples=208
  cpu          : usr=15.79%, sys=56.46%, ctx=4163537, majf=0, minf=44
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,4194304,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=620MiB/s (650MB/s), 620MiB/s-620MiB/s (650MB/s-650MB/s), io=16.0GiB (17.2GB), run=26437-26437msec

Disk stats (read/write):
  nvme0n1: ios=0/4192182, merge=0/5, ticks=0/75796, in_queue=75796, util=99.66%
nota@e5070-900p:/900p$ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --rw=randread --numjobs=4 --group_reporting
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
...
fio-3.28
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=847MiB/s][r=217k IOPS][eta 00m:00s]
test: (groupid=0, jobs=4): err= 0: pid=14584: Mon Jul  3 13:48:45 2023
  read: IOPS=233k, BW=909MiB/s (953MB/s)(16.0GiB/18023msec)
   bw (  KiB/s): min=864768, max=1065488, per=100.00%, avg=933031.89, stdev=22370.48, samples=142
   iops        : min=216192, max=266372, avg=233257.97, stdev=5592.62, samples=142
  cpu          : usr=25.67%, sys=74.18%, ctx=3131, majf=0, minf=298
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=4194304,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=909MiB/s (953MB/s), 909MiB/s-909MiB/s (953MB/s-953MB/s), io=16.0GiB (17.2GB), run=18023-18023msec

Disk stats (read/write):
  nvme0n1: ios=4169667/3, merge=0/1, ticks=60592/1, in_queue=60592, util=99.48%
 
Last edited:
  • Like
Reactions: Samir

UhClem

just another Bozo on the bus
Jun 26, 2012
439
253
63
NH, USA
My, my ... [very :-J (tongue-in-cheek)]
First, you belittle your cute little box, saying it is weak ... (publicly, no less)
seems to be maxing out FIO 4k RANDOM READ / WRITE at 70K IOPS
Then, you beat it, without mercy (all 4 cores, 64 threads each)
[achieving 230k/160k r/w 4c64t/4c64t]

You really need to apologize to it; they're very sensitive, you know.
(it might run away from home ... or worse)
[/:-J]

If you want to provide an FIO command to test I'd be happy to try it.
Thanks, but I plan to unbox mine in a few days. I expect to get about
330k/200k [2c3t/1c3t]
We'll see ...
 
  • Haha
  • Like
Reactions: Samir and Patriot

applepi

Member
Jun 15, 2013
86
67
18
My, my ... [very :-J (tongue-in-cheek)]
First, you belittle your cute little box, saying it is weak ... (publicly, no less)

Then, you beat it, without mercy (all 4 cores, 64 threads each)
[achieving 230k/160k r/w 4c64t/4c64t]

You really need to apologize to it; they're very sensitive, you know.
(it might run away from home ... or worse)
[/:-J]


Thanks, but I plan to unbox mine in a few days. I expect to get about
330k/200k [2c3t/1c3t]
We'll see ...
From my recent testing, the best I've been able to get is still that 233k/159k read/write IOPS. I think the problem is that the CPU is bottlenecking the benchmark; you get better performance for the first few seconds while it's boosting. Performance tops out at 4 jobs and an iodepth of 3 or more; anything beyond that doesn't improve it.

I think the next step will be to figure out how to get the CPU's `intel_pstate` driver into active mode so I can set the `performance` governor and maybe eke out a bit more performance.
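Roughly what I mean (sketch; assumes the stock sysfs paths for intel_pstate and cpufreq on a recent kernel):

Bash:
# Check which mode intel_pstate is currently in (active / passive / off)
cat /sys/devices/system/cpu/intel_pstate/status

# Switch it to active at runtime (or boot with intel_pstate=active)
echo active | sudo tee /sys/devices/system/cpu/intel_pstate/status

# Then pin every core to the performance governor
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor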
 
  • Like
Reactions: Samir

labxplore

New Member
Sep 12, 2022
16
17
3
Wondering if anyone has tried using an H1110 (LSI SAS9211-4i in IT mode, LSI SAS2004 controller) on the 5070 (extended) to connect additional drives. The drives will be powered (and mounted) externally, but I'm trying to see what I may be missing from a compatibility perspective if I go that route... The objective is to set it up as a NAS under Proxmox...
 
  • Like
Reactions: Samir

UhClem

just another Bozo on the bus
Jun 26, 2012
439
253
63
NH, USA
using an H1110 (LSI SAS9211-4i in IT mode, LSI SAS2004 controller) on the 5070 (extended) to connect additional drives. The drives will be powered (and mounted) externally, but I'm trying to see what I may be missing from a compatibility perspective
From a digital/electronic perspective, it should be A-OK.
But, analog/mechanically, it depends ...
How will you get a SAS2 signal cable out of the box (whether miniSAS-to-miniSAS or miniSAS-to-4x breakout)?
Will you commit the Option-port (the punchout below the PCIe slot bracket opening)?
And, even if so, CAN you get there? (from the sff-8087 connector location on the card)
====
"Measure twice, cut/buy once."
 
  • Like
Reactions: Samir and labxplore

labxplore

New Member
Sep 12, 2022
16
17
3
How will you get a SAS2 signal cable out of the box (whether miniSAS-to-miniSAS or miniSAS-to-4x breakout)?
mini-SAS to 4x SATA.

Will you commit the Option-port (the punchout below the PCIe slot bracket opening)?
If I keep the Wyse & drives in separate cases, I was actually thinking of removing the Parallel port and using that opening/exit instead, so the cable doesn't need a "sharp" turn under the card to reach the option port, which may make the run slightly longer.

And, even if so, CAN you get there? (from the sff-8087 connector location on the card)
====
"Measure twice, cut/buy once."
Good reminder :) I saw that there are either 3.3ft/1m or 1.6ft/0.5m breakout cables so considering the card is 3.12in long, I may be able to use 5in internally and have around 34in left to enter the other case...
If I end up moving everything to a single case, then it should work as well, but I definitely need to "practice" with a dry run :)


I believe the 4e model would be more appropriate.
Do you say that mostly because of the cable routing or something else? Do you know of a specific 4-external card that is PCIe x4?

Thanks!
 
  • Like
Reactions: Samir

heromode

Active Member
May 25, 2020
380
203
43
mini-SAS to 4x SATA.


If I keep the Wyse & drives in separate cases, I was actually thinking of removing the Parallel port and using that opening/exit instead, so the cable doesn't need a "sharp" turn under the card to reach the option port, which may make the run slightly longer.
Another option is you can remove the Dell logo, which will leave a round hole in the front.

Do you say that mostly because of the cable routing or something else? Do you know of a specific 4-external card that is PCIe x4?

Thanks!
4i = internal ports, so any model ending in 4e or 8e has external ports
 
  • Like
Reactions: Samir and labxplore

UhClem

just another Bozo on the bus
Jun 26, 2012
439
253
63
NH, USA
If I keep the Wyse & drives in separate cases, ...
Hold that thought for a minute.
I was actually thinking of removing the Parallel port and using that
Yes, much better (but moot).
If I end up moving everything to a single case, ...
Gack! That dichotomy of aesthetic is painful.
Separate is the way to go, and not just for form, but also for function.

[my prior post was actually a "set-up" for this ...]
@Fritz was on the right track, but ...

The clear winner is the Dell H200e, an LSI 9200-8e equivalent[**]; it uses the SAS2008. E.g., [Link].
It uses just 1-2 W more than the -4i, but offers easy growth to 8 drives. Now, here's why external is a win: instead of some "ghetto claptrap duopoly", you have a nice tidy little cottage with an RV parked nearby. And you can just detach the RV, easily--maybe to mow the lawn--or park it next to a different host. Just a "safe remove"; life in the cottage doesn't even notice. And re-attach is even easier.
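For the "safe remove" itself, this is about all the host side needs (a sketch; /mnt/rv, sdX, and hostN are placeholders for however the enclosure's drives and the HBA enumerate):

Code:
# Detach (as root): quiesce the filesystem/pool, then drop the external disks from the SCSI layer
umount /mnt/rv                           # or: zpool export <pool>
echo 1 > /sys/block/sdX/device/delete    # repeat for each external drive

# Re-attach: rescan the HBA's SCSI host and the drives come right back
echo "- - -" > /sys/class/scsi_host/hostN/scan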

[**] Fret not the x8 card; the WyseExt slot is open-ended
 
  • Like
Reactions: Samir and labxplore

labxplore

New Member
Sep 12, 2022
16
17
3
[**] Fret not the x8 card; the WyseExt slot is open-ended
WOW! oh my... I've been looking at the board and checking routing options and... completely missed that :rolleyes:
thanks a lot! this opens a lot of possibilities now :)

The clear winner is the Dell H200e, an LSI 9200-8e equivalent[**]; it uses the SAS2008. E.g., [Link].
It uses just 1-2 W more than the -4i, but offers easy growth to 8 drives. Now, here's why external is a win: instead of some "ghetto claptrap duopoly"
That makes a lot of sense - thank you for the insight!
I'm glad I've posted here before buying all pieces - back to the drawing board!
 
  • Like
Reactions: Samir

UhClem

just another Bozo on the bus
Jun 26, 2012
439
253
63
NH, USA
From my recent testing, the best I've been able to get is still that 233k/159k read/write IOPS, ...
To which, I had replied:
I plan to unbox mine in a few days. I expect to get about 330k/200k [2c3t/1c3t]
We'll see ...
I made this estimate based on tests in my workstation (HP Z4G4 w/W-2145 CPU**) with the SSD's PCIe slot (bios-)limited to Gen2; and factoring those results relative to the Passmark (single-thread) #s (W2145:J5005) of 2609:1205. [Yes, Passmark isn't gospel; but "close enuf fer guvmnt work"?]

Well, my real #s (on the 5070ext) are 278k/121k :( (numjobs=4, iodepth=3 / numjobs=3, iodepth=3)

Read script, onetst.sh:
Code:
#!/bin/bash
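# Time-based (10 s) random-read run against a 20 GiB file named Test, with O_DIRECT;
# numjobs, iodepth, and blocksize come from $1, $2, $3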

[ $# -ne 3 ] && echo Usage $0 numjobs iodepth BLOCKSIZE && exit 1

fio --name=onetst \
    --filename=Test \
    --filesize=20g --rw=randread --bs=$3 --direct=1 --overwrite=0 \
    --numjobs=$1 --iodepth=$2 --time_based=1 --runtime=10 \
    --ioengine=libaio --gtod_reduce=1 --group_reporting
Write script, onetstw.sh:
Code:
#!/bin/bash
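# Same knobs as onetst.sh ($1=numjobs, $2=iodepth, $3=blocksize), but random writes to the same Test file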

[ $# -ne 3 ] && echo Usage $0 numjobs iodepth BLOCKSIZE && exit 1

fio --name=onetstw \
    --filename=Test \
    --filesize=20g --rw=randwrite --bs=$3 --direct=1 --overwrite=0 \
    --numjobs=$1 --iodepth=$2 --time_based=1 --runtime=10 \
    --ioengine=libaio --gtod_reduce=1 --group_reporting
Read results:
Code:
@tiny:/mnk [ 1284 ] # onetst.sh 4 3 4k
onetst: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=3
...
fio-3.30
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=1088MiB/s][r=279k IOPS][eta 00m:00s]
onetst: (groupid=0, jobs=4): err= 0: pid=9918: Mon Jul 17 13:59:06 2023
  read: IOPS=278k, BW=1087MiB/s (1140MB/s)(10.6GiB/10001msec)
   bw (  MiB/s): min= 1086, max= 1090, per=100.00%, avg=1088.22, stdev= 0.45, samples=76
   iops        : min=278072, max=279150, avg=278584.74, stdev=115.00, samples=76
  cpu          : usr=31.44%, sys=68.18%, ctx=18199, majf=0, minf=50
  IO depths    : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2782328,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=3

Run status group 0 (all jobs):
   READ: bw=1087MiB/s (1140MB/s), 1087MiB/s-1087MiB/s (1140MB/s-1140MB/s), io=10.6GiB (11.4GB), run=10001-10001msec

Disk stats (read/write):
  nvme0n1: ios=2754733/0, merge=0/0, ticks=41651/0, in_queue=41651, util=99.01%
Write results:
Code:
@tiny:/mnk [ 1326 ] # onetstw.sh 3 3 4k
onetstw: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=3
...
fio-3.30
Starting 3 processes
Jobs: 3 (f=3): [w(3)][100.0%][w=475MiB/s][w=122k IOPS][eta 00m:00s]
onetstw: (groupid=0, jobs=3): err= 0: pid=10311: Mon Jul 17 15:06:53 2023
  write: IOPS=121k, BW=474MiB/s (498MB/s)(4746MiB/10002msec); 0 zone resets
   bw (  KiB/s): min=481344, max=489144, per=100.00%, avg=486326.32, stdev=717.49, samples=57
   iops        : min=120336, max=122286, avg=121581.58, stdev=179.37, samples=57
  cpu          : usr=13.81%, sys=61.09%, ctx=1286106, majf=0, minf=26
  IO depths    : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1214906,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=3

Run status group 0 (all jobs):
  WRITE: bw=474MiB/s (498MB/s), 474MiB/s-474MiB/s (498MB/s-498MB/s), io=4746MiB (4976MB), run=10002-10002msec

Disk stats (read/write):
  nvme0n1: ios=0/1202028, merge=0/0, ticks=0/19113, in_queue=19113, util=99.01%
While my read result is somewhat disappointing, my write result is especially so. Note that there is a fair amount of "cpu" unused, but I know the SSD itself has more IOPS in the tank. I have some ideas ...

** very similar to YOUR Dell_5820 w/W-2140b. You could test the 900P there. bios-limiting the slot to Gen2... but, why? :)
 
  • Like
Reactions: Samir

applepi

Member
Jun 15, 2013
86
67
18
** very similar to YOUR Dell_5820 w/W-2140b. You could test the 900P there. bios-limiting the slot to Gen2... but, why? :)
I have a dual E5-2670 v4 blade with a U.2 900P drive in it, and I'm able to push 596k/454k with 4 jobs, iodepth 3, and 4k blocks. So we are nowhere near the drive limit lol.

I think PCIe 2.0 is a J5005 SoC limitation.
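Easy enough to confirm from the OS (sketch; 01:00.0 is a placeholder for wherever the 900P lands in lspci):

Bash:
# LnkCap = what the card supports, LnkSta = what the slot actually negotiated
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'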

I think this box is just going to get a quad-port gigabit NIC and call it a day. As for my pile of Intel 900P AICs that I bought because I thought they were U.2... maybe there is some cursed way to use USB 3.2 Gen 2 NVMe adapters with an M.2-to-PCIe x4 riser to get the 900P working as an external drive.
 
  • Like
Reactions: Samir