Minisforum MS-01


Zerauskire

New Member
Apr 12, 2024
Can you link to the YouTube guide you followed?

Do you have video output?

And do you have nested virtualization? I.e. can you use WSL2 from within your Win11 VM?
Here’s the link:

I don’t really follow anyone or anything on YouTube, but I have found his videos very informative and detailed, so I actually followed along with this one.

Yes. I have video output.

As for nested virtualization, sorry, no. I don’t use it currently. It could work, but I honestly don’t know.
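For what it’s worth, nested virtualization on Proxmox mainly needs two things: the kvm_intel "nested" parameter enabled on the host, and the VM’s CPU type set to "host" so VT-x is exposed to the guest. A minimal sketch, with a placeholder VMID:

Bash:
# On the Proxmox host: check that nesting is enabled for KVM on Intel (should print Y)
cat /sys/module/kvm_intel/parameters/nested
# Expose the host CPU (including VT-x) to the Win11 guest so WSL2 can use it;
# 101 is a hypothetical VMID
qm set 101 --cpu host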
 

Henry2

New Member
May 21, 2024
I'm hoping someone can help me. I just bought this card: Amazon.com
and put it in my MS-01. It shows up and appears to be working from the command line (Proxmox), but as soon as it's in, I lose all network connectivity. I tried a different card (9207-8e) and exactly the same thing happened: no card and the network is fine; card in the PCIe slot, no networking. Any help at all would be greatly appreciated. I'm pretty new to this and I'm not sure where to start.
How has this card been working for you? I also just bought it, and I'm having trouble flashing it to IT mode using my MS-01: it won't boot from a FreeDOS USB stick.
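In case it helps: a common workaround when FreeDOS won't boot is to flash from a UEFI shell instead. A rough sketch, assuming sas2flash.efi and the IT firmware image from Broadcom's 9207-8e package sit on a FAT32 USB stick (the firmware file name is a placeholder, and you should note the card's SAS address before erasing):

Bash:
sas2flash.efi -listall                      # confirm the controller is visible; note its SAS address
sas2flash.efi -o -e 6                       # erase the current firmware
sas2flash.efi -o -f 9207-8e-IT.bin          # flash the IT-mode firmware image (placeholder file name)
sas2flash.efi -o -sasadd 500605bxxxxxxxxx   # restore the SAS address noted above (placeholder value)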
 

JamfSlayer

New Member
May 10, 2024
There's a new BIOS update (1.26) for the MS-01. Waiting to see if it fixes the random reboots in Windows Server 2025, where the machine restarts and lands on the BIOS screen for no reason. Hope this fixes that.
 

kevin771

New Member
Oct 28, 2024
Has anyone plugged their Minisforum MS-01 into their TV via HDMI? I did, and I get video, but no audio. However, audio does work if I plug some headphones into the 3.5mm jack.

But why doesn't audio work through the TV and HDMI? Other laptops I have, it works just fine.

Note, I am running Windows 10 and I have installed all of the latest drivers from the Minisforum site:

Attached here is a screenshot of my Minisforum MS-01 when plugged into my TV via HDMI: just a generic display device, and the Realtek audio reporting that nothing is plugged into the 3.5mm jack (which was true when I took the screenshot, so that part is correct). Like I said, I can plug in my headphones and get audio.
ms-01audio.png

Compare that to when I plug in my laptop: the TV gets added as both a monitor and an audio device.
audio.png
 

kevin771

New Member
Oct 28, 2024
Not sure where I saw or read it, but I decided to try Windows 11, and sure enough, all hardware is properly identified in Windows Device Manager and I have sound through my TV when plugged in via HDMI.
 

kevin771

New Member
Oct 28, 2024
Does anyone have an MS-01 that doesn't repeatedly crash?

I have read several forum posts like the ones here:

I have updated my BIOS to the latest 1.26 version. And I am running Windows 11, not Proxmox. So I was hoping the compatibility and stability wouldn't be an issue. Windows 11 is able to identify all the hardware and has drivers for them, but the machine still randomly crashes.

I also thought it might be related to just the Intel Core i9-13900H processor, but I have seen other people report crashes with the Core i5 as well.

After less than a month, I am returning it.

The features and price are great, but not worth it if it's going to crash.
 

anewsome

Active Member
Mar 15, 2024
I too was having a hell of a time keeping the MS-01 up and running. After way too much tinkering and firmware upgrades, they finally feel stable. I have 3 months of uptime on most of the nodes in the 5-node cluster. I think the tweak that made the most difference was setting the memory speed to 4400 MT/s. Since that change, I haven't seen a crash. I'd call 3 months of uptime fairly reliable, and I expect it to be stable from here on out.
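(If anyone wants to verify the setting actually took hold after changing it in the BIOS, dmidecode reports both the rated and the currently configured memory speed; run it as root.)

Bash:
# Shows "Speed" (rated) and "Configured Memory Speed" for each DIMM
dmidecode -t memory | grep -i speed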
 

kevin771

New Member
Oct 28, 2024
Good to hear. Thanks for the reply. Wish I could have tried that sooner, but I only have a few days left before I'm unable to return it.
 

berkyl

New Member
Jan 13, 2025
I have four MS-01s, each with:
- 96GB Crucial 5200 MT/s RAM
- 25GbE Mellanox NIC
- 2x Kingston KC3000 2TB for storage
- 1x Transcend TS128GMTE110S 128GB for Proxmox

That being said, I regret having bought these units, as I am having a really hard time getting acceptable speeds out of the NVMe drives.
I get around 15-45 MB/s write on sockets 1 and 2, even though they are running at PCIe 4.0 x4 and PCIe 3.0 x4 ...

I tried other consumer NVMe disks (Seagate FireCuda, Samsung 970 Pro), both with similar results.
After upgrading the BIOS I got 1.1 to 1.5 GB/s, but after a reboot it went back to the old values.

My setup was meant as a more elaborate homelab to learn stuff, but currently I feel like I wasted a lot of time and money on these crappy MS-01s.

Does anyone have similar results on NVMe writes?
 

meyergru

New Member
Jul 12, 2020
You must be doing something wrong. The speed does not depend on the BIOS version, because the Linux drivers do not go through the BIOS.

You did not give any specifics, but there are many variables that come into play in complex setups (a few commands for checking these follow below). Think of:

1. Filesystem choice under Proxmox: LVM vs. ZFS.
2. If ZFS: compression on or off?
3. Since you have 4 hosts: did you set it up as a cluster with replication? In that case, writes will go over the network, too.
4. Disk emulation in the VM guest: IDE, SATA, SCSI or VirtIO?

It is very unlikely that the hardware is the culprit here.
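Each point can be checked quickly from the Proxmox shell; a sketch, with placeholder pool and VM names:

Bash:
pvesm status                                    # 1. which storage types are configured (LVM, ZFS, ...)
zfs get compression rpool                       # 2. compression setting ("rpool" is a placeholder)
pvesr list                                      # 3. replication jobs that would send writes over the network
qm config 100 | grep -E 'ide|sata|scsi|virtio'  # 4. disk emulation of a guest ("100" is a placeholder VMID)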
 

berkyl

New Member
Jan 13, 2025
I actually hope that I am doing something wrong, as that would mean the problem is solvable.

1. Proxmox runs on a separate drive on LVM
2. I used fio for testing, both with and without a partition (ext4)
3. I set everything up for Ceph, but as the performance was lacking, I invested a lot of time in debugging.
4. I checked the performance on the host itself: no VMs in use, default Proxmox setup without additional packages except intel-microcode and fio

For further explanation and traceability, here are the debugging steps I have taken so far:

Installed NVMe devices:
Bash:
root@node4:~# nvme list -v
Subsystem        Subsystem-NQN                                                                                    Controllers
---------------- ------------------------------------------------------------------------------------------------ ----------------
nvme-subsys2     nqn.2018-05.com.example:nvme:nvm-subsystem-OI923340854                                           nvme2
nvme-subsys1     nqn.2020-04.com.kingston:nvme:nvm-subsystem-sn-50026B7686B00F32                                  nvme1
nvme-subsys0     nqn.2020-04.com.kingston:nvme:nvm-subsystem-sn-50026B7686ED4E14                                  nvme0

Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces   
-------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
nvme2    I923340854           TS128GMTE110S                            VC3S5007 pcie   0000:5a:00.0   nvme-subsys2 nvme2n1
nvme1    50026B7686B00F32     KINGSTON SKC3000D2048G                   EIFK31.6 pcie   0000:59:00.0   nvme-subsys1 nvme1n1
nvme0    50026B7686ED4E14     KINGSTON SKC3000D2048G                   EIFK31.7 pcie   0000:02:00.0   nvme-subsys0 nvme0n1

Device       Generic      NSID     Usage                      Format           Controllers   
------------ ------------ -------- -------------------------- ---------------- ----------------
/dev/nvme2n1 /dev/ng2n1   1        128.04  GB / 128.04  GB    512   B +  0 B   nvme2
/dev/nvme1n1 /dev/ng1n1   1          2.05  TB /   2.05  TB      4 KiB +  0 B   nvme1
/dev/nvme0n1 /dev/ng0n1   1          2.05  TB /   2.05  TB      4 KiB +  0 B   nvme0
Link speed of slot 1 (16 GT/s, x4, meaning PCIe 4.0 x4):
Bash:
root@node4:~# lspci -vv -nn -s 0000:02:00.0|grep Lnk
        LnkCap:    Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkCtl:    ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
        LnkSta:    Speed 16GT/s, Width x4
        LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
        LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
        LnkCtl3: LnkEquIntrruptEn- PerformEqu-
Link speed of slot 2 (8 GT/s, x4, meaning PCIe 3.0 x4):
Bash:
root@node4:~# lspci -vv -nn -s 0000:59:00.0|grep Lnk
        LnkCap:    Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
        LnkCtl:    ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
        LnkSta:    Speed 8GT/s (downgraded), Width x4
        LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
        LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
        LnkCtl3: LnkEquIntrruptEn- PerformEqu-
Speed test with 4 KB block size, QD1, sync writes (--sync=1):
Bash:
root@node4:~# fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=test
test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=1716KiB/s][w=429 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=360223: Mon Jan 13 13:27:24 2025
  write: IOPS=471, BW=1888KiB/s (1933kB/s)(111MiB/60001msec); 0 zone resets
    clat (usec): min=556, max=10903, avg=2118.78, stdev=795.20
     lat (usec): min=557, max=10904, avg=2118.83, stdev=795.21
    clat percentiles (usec):
     |  1.00th=[  619],  5.00th=[ 1860], 10.00th=[ 1893], 20.00th=[ 1942],
     | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2024], 60.00th=[ 2089],
     | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2278], 95.00th=[ 2376],
     | 99.00th=[ 5997], 99.50th=[ 9634], 99.90th=[10028], 99.95th=[10159],
     | 99.99th=[10683]
   bw (  KiB/s): min= 1568, max= 2544, per=100.00%, avg=1890.62, stdev=145.03, samples=119
   iops        : min=  392, max=  636, avg=472.66, stdev=36.26, samples=119
  lat (usec)   : 750=1.73%, 1000=0.01%
  lat (msec)   : 2=39.31%, 4=57.95%, 10=0.91%, 20=0.09%
  cpu          : usr=0.04%, sys=0.94%, ctx=56640, majf=0, minf=13
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,28314,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=1888KiB/s (1933kB/s), 1888KiB/s-1888KiB/s (1933kB/s-1933kB/s), io=111MiB (116MB), run=60001-60001msec

Disk stats (read/write):
  nvme0n1: ios=80/56536, merge=0/0, ticks=40/59401, in_queue=62522, util=98.59%
The same test with QD32:
Bash:
root@node4:~# fio --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=32 --runtime=60 --time_based --group_reporting --name=test
test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=1964KiB/s][w=491 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=360890: Mon Jan 13 13:30:03 2025
  write: IOPS=511, BW=2047KiB/s (2096kB/s)(120MiB/60060msec); 0 zone resets
    slat (nsec): min=925, max=61410, avg=10563.54, stdev=5259.40
    clat (msec): min=9, max=130, avg=62.52, stdev=10.43
     lat (msec): min=9, max=130, avg=62.53, stdev=10.43
    clat percentiles (msec):
     |  1.00th=[   17],  5.00th=[   56], 10.00th=[   57], 20.00th=[   58],
     | 30.00th=[   59], 40.00th=[   60], 50.00th=[   61], 60.00th=[   63],
     | 70.00th=[   65], 80.00th=[   70], 90.00th=[   75], 95.00th=[   80],
     | 99.00th=[   90], 99.50th=[   96], 99.90th=[  111], 99.95th=[  116],
     | 99.99th=[  125]
   bw (  KiB/s): min= 1728, max= 3296, per=99.96%, avg=2046.73, stdev=182.88, samples=120
   iops        : min=  432, max=  824, avg=511.68, stdev=45.72, samples=120
  lat (msec)   : 10=0.01%, 20=1.24%, 50=3.15%, 100=95.26%, 250=0.35%
  cpu          : usr=0.35%, sys=1.11%, ctx=30705, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.9%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,30732,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=2047KiB/s (2096kB/s), 2047KiB/s-2047KiB/s (2096kB/s-2096kB/s), io=120MiB (126MB), run=60060-60060msec

Disk stats (read/write):
  nvme0n1: ios=76/30648, merge=0/0, ticks=566/1914941, in_queue=1915506, util=99.94%
Bash:
root@node4:~# smartctl -a /dev/nvme0n1
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.8.12-5-pve] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number: KINGSTON SKC3000D2048G
Serial Number: 50026B7686ED4E14
Firmware Version: EIFK31.7
PCI Vendor/Subsystem ID: 0x2646
IEEE OUI Identifier: 0x0026b7
Total NVM Capacity: 2,048,408,248,320 [2.04 TB]
Unallocated NVM Capacity: 0
Controller ID: 1
NVMe Version: 1.4
Number of Namespaces: 1
Namespace 1 Size/Capacity: 2,048,408,248,320 [2.04 TB]
Namespace 1 Formatted LBA Size: 4096
Namespace 1 IEEE EUI-64: 0026b7 686ed4e145
Local Time is: Mon Jan 13 13:31:17 2025 CET
Firmware Updates (0x12): 1 Slot, no Reset required
Optional Admin Commands (0x0017): Security Format Frmw_DL Self_Test
Optional NVM Commands (0x005d): Comp DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
Log Page Attributes (0x08): Telmtry_Lg
Maximum Data Transfer Size: 512 Pages
Warning Comp. Temp. Threshold: 84 Celsius
Critical Comp. Temp. Threshold: 89 Celsius

Supported Power States
St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat
0 + 8.80W - - 0 0 0 0 0 0
1 + 7.10W - - 1 1 1 1 0 0
2 + 5.20W - - 2 2 2 2 0 0
3 - 0.0620W - - 3 3 3 3 2500 7500
4 - 0.0620W - - 4 4 4 4 2500 7500

Supported LBA Sizes (NSID 0x1)
Id Fmt Data Metadt Rel_Perf
0 - 512 0 2
1 + 4096 0 1

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning: 0x00
Temperature: 33 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 0%
Data Units Read: 78,584 [40.2 GB]
Data Units Written: 5,615,167 [2.87 TB]
Host Read Commands: 364,331
Host Write Commands: 22,847,793
Controller Busy Time: 35
Power Cycles: 4
Power On Hours: 258
Unsafe Shutdowns: 1
Media and Data Integrity Errors: 0
Error Information Log Entries: 10
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Temperature Sensor 2: 71 Celsius

Error Information (NVMe Log 0x01, 16 of 63 entries)
Num ErrCount SQId CmdId Status PELoc LBA NSID VS
0 10 0 0x0010 0x4004 0x028 0 0 -
1 9 0 0x000f 0x4004 - 0 0 -

To get a handle on the problem, I also verified that the namespace is formatted with a 4 KB LBA size:
Bash:
root@node4:~# nvme id-ns -H /dev/nvme0n1|grep "Data Size"
LBA Format  0 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good
LBA Format  1 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better (in use)
Do you need any further reports?
 

meyergru

New Member
Jul 12, 2020
The fio tests seem to indicate low-level speeds of 2 GByte/s, so what is the problem?

I have no experience with Ceph, but considering its complexity, I would argue that if the slow throughput only shows up when you measure through Ceph, you will have to fiddle with its knobs.
 

meyergru

New Member
Jul 12, 2020
My mistake on the suffix, but you are right: you are only getting 2 MByte/s. That is because you specified --bs=4k, which is way below the actual internal block size of your SSD. You will see a huge increase if you leave out --sync=1 or specify --bs=128k. Again, nothing wrong with your hardware. BTW: parallelizing lots of small write requests that induce read-modify-write cycles will not make them any faster...

And you can use --filename=<file> if you want to test through the filesystem instead of the raw device. With that, you can measure the overhead it adds; I bet Ceph eats up a lot.
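For example, something like this, writing a regular file on a mounted filesystem instead of hitting the raw device (the mount point and size are placeholders):

Bash:
# Comparing this result against the raw-device run isolates the filesystem overhead
fio --filename=/mnt/test/fio.tmp --size=4g --ioengine=libaio --direct=1 \
    --rw=write --bs=128k --numjobs=1 --iodepth=32 --group_reporting --name=fs-test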
 

berkyl

New Member
Jan 13, 2025
Are you from or affiliated with Minisforum?

Did you see the output of `nvme list -v` or `nvme id-ns -H /dev/nvme0n1|grep "Data Size"`? The SSD has a block size of 4K.

The sync is there because I want to test the actual drive speed, but I checked it with a BS of 128K and without sync for you:
Bash:
root@node4:~# fio --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write --bs=128k --numjobs=1 --iodepth=32 --runtime=60 --time_based --group_reporting --name=test
test: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=3205MiB/s][w=25.6k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=378464: Mon Jan 13 14:44:57 2025
  write: IOPS=24.3k, BW=3036MiB/s (3183MB/s)(178GiB/60002msec); 0 zone resets
    slat (usec): min=5, max=125, avg= 9.29, stdev= 1.68
    clat (usec): min=378, max=9319, avg=1308.09, stdev=218.80
     lat (usec): min=391, max=9328, avg=1317.38, stdev=218.81
    clat percentiles (usec):
     |  1.00th=[ 1057],  5.00th=[ 1221], 10.00th=[ 1221], 20.00th=[ 1221],
     | 30.00th=[ 1237], 40.00th=[ 1319], 50.00th=[ 1319], 60.00th=[ 1319],
     | 70.00th=[ 1319], 80.00th=[ 1336], 90.00th=[ 1336], 95.00th=[ 1352],
     | 99.00th=[ 1434], 99.50th=[ 1516], 99.90th=[ 6259], 99.95th=[ 6390],
     | 99.99th=[ 7177]
   bw (  MiB/s): min= 2757, max= 3354, per=100.00%, avg=3037.33, stdev=119.29, samples=119
   iops        : min=22056, max=26832, avg=24298.61, stdev=954.31, samples=119
  lat (usec)   : 500=0.01%, 750=0.09%, 1000=0.66%
  lat (msec)   : 2=98.81%, 4=0.33%, 10=0.11%
  cpu          : usr=9.07%, sys=17.27%, ctx=1455033, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1457285,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=3036MiB/s (3183MB/s), 3036MiB/s-3036MiB/s (3183MB/s-3183MB/s), io=178GiB (191GB), run=60002-60002msec

Disk stats (read/write):
  nvme0n1: ios=76/1454566, merge=0/0, ticks=35/1907779, in_queue=1907813, util=99.92%
As you can see, the speed is still half of what it should be; the datasheet is here: https://www.kingston.com/datasheets/KC3000_en.pdf
 

berkyl

New Member
Jan 13, 2025
And for reference, the much lower-spec'd Transcend in slot 3 with PCIe 2.0, tested with a BS of 4K and sync:

Bash:
root@node4:~# fio --filename=/dev/nvme2n1 --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=32 --runtime=60 --time_based --group_reporting --name=test
test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=13.7MiB/s][w=3516 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=379628: Mon Jan 13 14:49:41 2025
  write: IOPS=16.2k, BW=63.4MiB/s (66.5MB/s)(3805MiB/60019msec); 0 zone resets
    slat (nsec): min=779, max=101325, avg=1286.23, stdev=1645.93
    clat (usec): min=151, max=32460, avg=1970.36, stdev=2048.98
     lat (usec): min=174, max=32462, avg=1971.65, stdev=2049.10
    clat percentiles (usec):
     |  1.00th=[  186],  5.00th=[  192], 10.00th=[  200], 20.00th=[  217],
     | 30.00th=[  289], 40.00th=[  947], 50.00th=[ 2024], 60.00th=[ 2573],
     | 70.00th=[ 2671], 80.00th=[ 2835], 90.00th=[ 3064], 95.00th=[ 6587],
     | 99.00th=[10028], 99.50th=[12387], 99.90th=[15795], 99.95th=[16909],
     | 99.99th=[22152]
   bw (  KiB/s): min=13312, max=84904, per=100.00%, avg=65372.84, stdev=25724.67, samples=119
   iops        : min= 3328, max=21226, avg=16343.23, stdev=6431.16, samples=119
  lat (usec)   : 250=23.90%, 500=10.85%, 750=2.90%, 1000=3.00%
  lat (msec)   : 2=9.17%, 4=43.66%, 10=5.52%, 20=0.99%, 50=0.02%
  cpu          : usr=0.96%, sys=3.08%, ctx=535076, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,973994,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=63.4MiB/s (66.5MB/s), 63.4MiB/s-63.4MiB/s (66.5MB/s-66.5MB/s), io=3805MiB (3989MB), run=60019-60019msec

Disk stats (read/write):
  nvme2n1: ios=84/974348, merge=0/55, ticks=425/1913711, in_queue=1914163, util=100.00%
On the same host, of course.
 

meyergru

New Member
Jul 12, 2020
No, I am not from Minisforum.

The real physical block size of SSDs is far larger than the reported 4k; that is common knowledge. You can see that the reported speeds increased by a factor of more than 1000 just from the modifications I proposed.

Then there is the problem that your test writes about 60 s * 3 GByte/s = 180 GByte. Especially if you run such tests back-to-back, that will most likely be more than the available cache in your SSD, so the device has to actually commit the data to flash. Therefore you will not get the advertised speeds, which is also common knowledge: advertised write speeds are not the same as sustained write speeds. Read speeds are another story.

The KC3000 apparently does not even have a RAM cache (at least Kingston does not specify one), but instead has to resort to the "pseudo" SLC portion of the flash. Usually there is a speed drop after a certain amount of write activity, namely once that cache has been exhausted.

Just look at the second AIDA64 graph on this page to see that the speed drops after a certain amount of data has been pushed to the drive:

This is much better with "pro" drives, which have better/faster flash memory, more channels in the controller, and RAM cache on top. You can see a comparison of some drives here:


I use my KC3000 purely as a graveyard for my games, i.e. "write once, read mostly". Even for that, the KC3000 is really bad. My first specimen lost its speed after a few months because of decay. It appears that Kingston forgot to implement a background refresh routine, so the data decayed and could only be read with heavy error correction. That takes time, and thus speeds were abysmal until I used a tool to refresh the data. Kingston replaced my drive under warranty. See also: https://www.reddit.com/r/pcmasterrace/comments/1f1piwf
I would suggest that you replace those drives with "pro" types for your kind of application. Also, when you change them, do yourself a favor and use ZFS.
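If you do go the ZFS route, a minimal mirror across the two fast M.2 slots would look something like this (pool and device names are placeholders, and this wipes both drives):

Bash:
# ashift=12 matches the 4K LBA format; lz4 compression is cheap and usually a net win
zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1
zfs set compression=lz4 tank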
 

berkyl

New Member
Jan 13, 2025
Are you trying to help, or just trying to defend the MS-01 for whatever reason? And why do you keep "it's common knowledge"-ing me? It's not about speed drops on SSDs; it's about the fact that I get poor write speeds on two PCIe ports, for whatever reason.

To show you that it's not just that one NVMe drive, here are the same tests on a Seagate FireCuda 530R in slot 1:

Bash:
root@node1:~# fio --filename=/dev/nvme1n1 --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=32 --runtime=60 --time_based --group_reporting --name=test
test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=1932KiB/s][w=483 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=257436: Mon Jan 13 15:30:59 2025
  write: IOPS=498, BW=1994KiB/s (2042kB/s)(117MiB/60059msec); 0 zone resets
    slat (nsec): min=997, max=20189, avg=1824.09, stdev=637.67
    clat (msec): min=5, max=107, avg=64.19, stdev=15.16
     lat (msec): min=5, max=107, avg=64.19, stdev=15.16
    clat percentiles (msec):
     |  1.00th=[   31],  5.00th=[   56], 10.00th=[   56], 20.00th=[   56],
     | 30.00th=[   57], 40.00th=[   57], 50.00th=[   59], 60.00th=[   59],
     | 70.00th=[   63], 80.00th=[   64], 90.00th=[   94], 95.00th=[   97],
     | 99.00th=[  102], 99.50th=[  102], 99.90th=[  106], 99.95th=[  107],
     | 99.99th=[  108]
   bw (  KiB/s): min= 1712, max= 2288, per=99.95%, avg=1993.87, stdev=119.61, samples=120
   iops        : min=  428, max=  572, avg=498.47, stdev=29.90, samples=120
  lat (msec)   : 10=0.01%, 20=0.01%, 50=2.05%, 100=95.69%, 250=2.24%
  cpu          : usr=0.08%, sys=0.12%, ctx=26084, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.9%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,29939,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=117MiB (123MB), run=60059-60059msec

Disk stats (read/write):
  nvme1n1: ios=78/29831, merge=0/0, ticks=832/1915645, in_queue=1916477, util=99.90%
As you can see, it's the same poor result as with the KC3000.

Meanwhile, I get 171K IOPS running the same test on the Transcend, which is capped by PCIe 2.0, and you are telling me those results are normal? I would like to refrain from putting the same KC3000/FireCuda 530R into another machine just to prove a point, as all four MS-01s show the same behaviour.
 

meyergru

New Member
Jul 12, 2020
No need to get angry. You can believe me or not. All I am saying is that there are two problems with your methodology:

1. You write 4k blocks and sync after each and every one of them. We (read: you) have verified my claim that dropping that alone results in a speed increase of more than 1000x, which now gives around 3 GByte/s, about half of what Kingston claims as its maximum. I have explained why that is, and why --bs=4k with --sync=1 is useless for testing sustained bulk write speeds.

2. SSDs of different types behave differently with respect to how they write data. "Pro" types can sustain higher speeds and often have an additional RAM cache, whereas "consumer" types do not. The KC3000 is a consumer SSD whose write speed drops to ~1 GByte/s after a short while. That is why your tests show a "mixed" result of 3 GByte/s (max: 5 GByte/s, min: 1 GByte/s). You can verify this claim as well: stop testing for a few minutes, then add a "--size=1g" parameter or use a shorter test time, and your speed will likely increase towards the specified maximum write speed (which seems to be more like 5 GByte/s than the 7 GByte/s read speed); see the sketch below. You have also proven that the, in your view "slower", Transcend is actually faster at sustained writes.

Both of these points show that you seem not to understand how SSDs work, and that makes you look for problems in places where probably none exist (i.e. the MS-01). Again: (advertised) SSD read speeds are not the same as write speeds, much less sustained write speeds.
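A minimal version of that bounded test, assuming the drive has idled long enough for its pSLC cache to drain:

Bash:
# 1 GiB fits inside the pSLC cache, so this measures burst rather than sustained speed
fio --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write \
    --bs=128k --numjobs=1 --iodepth=32 --size=1g --name=burst-test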

P.S.: You should test the FireCuda with --sync=0 and --bs=128k. It seems a lot faster than the KC3000 in sustained writes:


 

berkyl

New Member
Jan 13, 2025
I'm not angry, just somewhat frustrated, because I'm trying my best to show you that something can't be right.

See the results for the FireCuda 530R with 128K and no sync:
Bash:
root@node1:~# fio --filename=/dev/nvme1n1 --ioengine=libaio --direct=1 --rw=write --bs=128k --numjobs=1 --iodepth=32 --runtime=60 --time_based --group_reporting --name=test
test: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=1069MiB/s][w=8553 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=263592: Mon Jan 13 16:07:54 2025
  write: IOPS=8844, BW=1106MiB/s (1159MB/s)(64.8GiB/60004msec); 0 zone resets
    slat (nsec): min=5836, max=30746, avg=8708.57, stdev=1588.26
    clat (usec): min=384, max=15737, avg=3609.10, stdev=795.91
     lat (usec): min=403, max=15744, avg=3617.81, stdev=795.79
    clat percentiles (usec):
     |  1.00th=[ 1090],  5.00th=[ 3163], 10.00th=[ 3228], 20.00th=[ 3294],
     | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3556], 60.00th=[ 3589],
     | 70.00th=[ 3720], 80.00th=[ 3851], 90.00th=[ 3982], 95.00th=[ 4080],
     | 99.00th=[ 6980], 99.50th=[ 9896], 99.90th=[12125], 99.95th=[12125],
     | 99.99th=[12125]
   bw (  MiB/s): min=  328, max= 2491, per=100.00%, avg=1106.30, stdev=193.89, samples=119
   iops        : min= 2628, max=19930, avg=8850.37, stdev=1551.14, samples=119
  lat (usec)   : 500=0.01%, 750=0.29%, 1000=0.45%
  lat (msec)   : 2=0.69%, 4=89.85%, 10=8.46%, 20=0.25%
  cpu          : usr=4.02%, sys=5.54%, ctx=530351, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,530713,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1106MiB/s (1159MB/s), 1106MiB/s-1106MiB/s (1159MB/s-1159MB/s), io=64.8GiB (69.6GB), run=60004-60004msec

Disk stats (read/write):
  nvme1n1: ios=76/529810, merge=0/0, ticks=53/1913122, in_queue=1913175, util=99.90%