Minisforum MS-01


anewsome

Active Member
Mar 15, 2024
130
131
43
I have 4 MS-01s, each with:
- 96GB Crucial 5200MHz RAM
- 25Gbe Mellanox
- 2x Kingston KC3000 2TB for storage
- 1x Transcend TS128GMTE110S 128GB for Proxmox

Does someone have similar results on NVMe writes?
I have a similar setup, without the 25Gbe adapter. I have

  • 5x MS01
  • Same memory
  • 15x Crucial T500 2TB (3 per node)
  • Also running Proxmox
  • Also running Ceph (using 10 OSDs, 2 NVME drives per node)
  • The slowest NVME slot is used for booting each node
Ceph is configured with dedicated 10gb nic for frontend and the other 10gb nic is for Ceph backend. All in all I think it's a very similar setup to what you have. The performance I get is about right on the mark with what I'd expect from consumer drives. Note that consumer drives perform nicely until the cache is filled, and then performance falls dramatically. To avoid the falloff, you'll need enterprise NVME drives. You can see this in my rados-bench chart:

[Attachment: crucial-2024-11-21_09.03-avg_bw.jpg (rados bench average bandwidth chart)]
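For anyone replicating the frontend/backend split described above: in practice it boils down to two subnets in ceph.conf. A minimal sketch, with placeholder subnets standing in for the two 10Gb networks:

Bash:
# /etc/pve/ceph.conf (excerpt) -- subnets are placeholders, not the actual values used above
[global]
    public_network  = 10.10.10.0/24   # frontend: client and monitor traffic on one 10Gb NIC
    cluster_network = 10.10.20.0/24   # backend: OSD replication/heartbeat traffic on the other 10Gb NIC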
 

berkyl

New Member
Jan 13, 2025
25
3
3
I have a similar setup, without the 25Gbe adapter. I have

  • 5x MS01
  • Same memory
  • 15x Crucial T500 2TB (3 per node)
  • Also running Proxmox
  • Also running Ceph (using 10 OSDs, 2 NVME drives per node)
  • The slowest NVME slot is used for booting each node
Ceph is configured with dedicated 10gb nic for frontend and the other 10gb nic is for Ceph backend. All in all I think it's a very similar setup to what you have. The performance I get is about right on the mark with what I'd expect from consumer drives. Note that consumer drives perform nicely until the cache is filled, and then performance falls dramatically. To avoid the falloff, you'll need enterprise NVME drives. You can see this in my rados-bench chart:

View attachment 41306
I'm not sure your benchmark is actually on the mark. It's about what I get from 3 Intel NUCs, single NVMe with 2.5Gbe networking for frontend/proxmox and dedicated 18Gbe networking on the thunderbolt in a mesh network.
Do you use jumbo frames and a dedicated switch?

Would you mind fio'ing your nvme on the host on slot 1 or 2 with
`fio --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write --bs=4k --numjobs=1 --iodepth=32 --runtime=60 --time_based --group_reporting --name=test`?
My rados bench was at around 1431MB/s but with very low IOPS, that's why I started debugging.
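For reference on the jumbo-frame question above: on Proxmox this is just an mtu line per Ceph-facing interface in /etc/network/interfaces, with the same MTU set on the switch ports. A minimal sketch with a placeholder interface name and address:

Bash:
# /etc/network/interfaces (excerpt) -- interface name and address are placeholders
auto enp2s0f0
iface enp2s0f0 inet static
    address 10.10.20.11/24
    mtu 9000
# The switch ports (and every other host on that network) need MTU 9000 as well.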
 

meyergru

New Member
Jul 12, 2020
26
1
3
That drive seems really slow. Even the sustained write speed of that type should be 3-4 GByte/s. Note that this is totally different from the other results, which are perfectly explainable.

I can think of several possible causes:

1. M.2 slot with a slower PCIe generation or fewer lanes?
2. Counterfeit product? Those drives are often faked: https://www.reddit.com/r/Seagate/comments/ivwql7
3. Obviously, the same decay problems as with the KC3000 plague the 530R, as it has the same Phison E18 controller:



P.S.: Do not run fio against a raw disk device if it is in use for a filesystem! And as I said, tests with bs=4k and sync=1 will give results way below expectations.
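As a hedged illustration of that advice: point fio at a scratch file on a mounted filesystem instead of the raw device, and use a larger block size without sync=1 (path and size below are placeholders):

Bash:
# Sketch only -- target a file, not /dev/nvme0n1, when the disk carries a filesystem
fio --filename=/mnt/scratch/fio-test.bin --size=10G \
    --ioengine=libaio --direct=1 --rw=write --bs=128k \
    --numjobs=1 --iodepth=32 --runtime=60 --time_based \
    --group_reporting --name=seqwrite-test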
 

berkyl

New Member
Jan 13, 2025
25
3
3
I put the KC3000 in another host, booted an Ubuntu live session and executed the same tests again:

Bash:
ubuntu@ubuntu:~$ sudo fio --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=32 --runtime=60 --time_based --group_reporting --name=test~
test~: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=13.0MiB/s][w=3319 IOPS][eta 00m:00s]
test~: (groupid=0, jobs=1): err= 0: pid=8404: Mon Jan 13 15:57:16 2025
  write: IOPS=3279, BW=12.8MiB/s (13.4MB/s)(769MiB/60009msec); 0 zone resets
    slat (nsec): min=1057, max=241739, avg=6972.64, stdev=3547.29
    clat (usec): min=6970, max=30484, avg=9748.84, stdev=820.66
     lat (usec): min=7023, max=30489, avg=9755.82, stdev=820.83
    clat percentiles (usec):
     |  1.00th=[ 9241],  5.00th=[ 9241], 10.00th=[ 9241], 20.00th=[ 9241],
     | 30.00th=[ 9241], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765],
     | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10683],
     | 99.00th=[11469], 99.50th=[12518], 99.90th=[21103], 99.95th=[23462],
     | 99.99th=[25297]
   bw (  KiB/s): min=12024, max=13352, per=100.00%, avg=13126.37, stdev=327.03, samples=119
   iops        : min= 3006, max= 3338, avg=3281.59, stdev=81.76, samples=119
  lat (msec)   : 10=68.71%, 20=31.17%, 50=0.12%
  cpu          : usr=1.39%, sys=4.10%, ctx=196949, majf=0, minf=10
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,196808,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=12.8MiB/s (13.4MB/s), 12.8MiB/s-12.8MiB/s (13.4MB/s-13.4MB/s), io=769MiB (806MB), run=60009-60009msec

Disk stats (read/write):
  nvme0n1: ios=107/196410, sectors=6000/1571280, merge=0/0, ticks=24/1913873, in_queue=1913897, util=99.93%
Bash:
ubuntu@ubuntu:~$ sudo fio --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write --bs=128k --numjobs=1 --iodepth=32 --runtime=60 --time_based --group_reporting --name=test~
test~: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=6452MiB/s][w=51.6k IOPS][eta 00m:00s]
test~: (groupid=0, jobs=1): err= 0: pid=8731: Mon Jan 13 16:01:05 2025
  write: IOPS=51.3k, BW=6409MiB/s (6720MB/s)(376GiB/60001msec); 0 zone resets
    slat (usec): min=8, max=110, avg=10.50, stdev= 2.00
    clat (usec): min=360, max=8690, avg=613.49, stdev=278.31
     lat (usec): min=373, max=8699, avg=623.98, stdev=278.33
    clat percentiles (usec):
     |  1.00th=[  578],  5.00th=[  578], 10.00th=[  578], 20.00th=[  578],
     | 30.00th=[  586], 40.00th=[  586], 50.00th=[  603], 60.00th=[  603],
     | 70.00th=[  603], 80.00th=[  603], 90.00th=[  611], 95.00th=[  619],
     | 99.00th=[  668], 99.50th=[ 1467], 99.90th=[ 5735], 99.95th=[ 5735],
     | 99.99th=[ 5800]
   bw (  MiB/s): min= 6283, max= 6482, per=100.00%, avg=6412.30, stdev=50.55, samples=119
   iops        : min=50264, max=51862, avg=51298.42, stdev=404.42, samples=119
  lat (usec)   : 500=0.07%, 750=99.14%, 1000=0.06%
  lat (msec)   : 2=0.39%, 4=0.07%, 10=0.27%
  cpu          : usr=20.63%, sys=40.69%, ctx=1562149, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,3076262,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=6409MiB/s (6720MB/s), 6409MiB/s-6409MiB/s (6720MB/s-6720MB/s), io=376GiB (403GB), run=60001-60001msec

Disk stats (read/write):
  nvme0n1: ios=104/3071062, sectors=5232/786191872, merge=0/0, ticks=19/1874271, in_queue=1874290, util=99.89%
@meyergru please stop trying to make up excuses for the poor performance of the MS-01 in its current state, and if you have nothing to add other than "counterfeit products", please stop wasting my time. I'm really trying to solve this issue without having to send back my 4 units.
 

anewsome

Active Member
Mar 15, 2024
130
131
43
I'm not sure your benchmark is actually on the mark. It's about what I get from 3 Intel NUCs, single NVMe with 2.5Gbe networking for frontend/proxmox and dedicated 18Gbe networking on the thunderbolt in a mesh network.
Do you use jumbo frames and a dedicated switch?

Would you mind fio'ing your nvme on the host on slot 1 or 2 with
`fio --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --rw=write --bs=4k --numjobs=1 --iodepth=32 --runtime=60 --time_based --group_reporting --name=test`?
My rados bench was at around 1431MB/s but with very low IOPS, that's why I started debugging.
I've run the above rados-bench test maybe 4 or 5 times. That run was just an example. Some are better than others.

I don't have a dedicated 10gb switch, my NAS and other things are on the switch as well. I'm not using jumbo frames either. I won't run the fio test to the raw device, as these are all running workloads.

I also tried a mesh network with the thunderbolt ports and using frr for the routing. I found it plenty fast but super flaky and unreliable, powering off one node would bring down the whole network (unless the TB cable was unplugged).

I'm 100% OK with the throughput and latency of the Ceph cluster. My VMs are all super responsive, backups get done in a reasonable amount of time and things mostly just work. I'm not expecting anything more out of a 10 OSD ceph pool using consumer grade NVME ssd. I think if I want more performance, I'll need enterprise nvme ssd and more OSDs overall. Of course I won't be doing any of that since memory on these is already extremely constrained at 96GB each.
 

anewsome

Active Member
Mar 15, 2024
130
131
43
And do you have nested virtualization? I.e. can you use WSL2 from within your Win11 VM?
I found nested virtualization with Hyper-V to be really difficult to get working on the MS-01. I was able to find the right processor args to get it working. I now have VMs running AzureStack HCI and Hyper-V Server. I don't use any PCIe passthrough to them though. Did you ever get your nested setup with passthrough working?
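The exact processor args aren't spelled out above; for anyone searching, a commonly used recipe for nested virtualization on an Intel Proxmox host looks roughly like the following (a sketch of the general approach, not necessarily the settings used in this post):

Bash:
# Sketch only -- generic nested-virtualization recipe, not the exact args used above.
# 1) Enable nesting in the Intel KVM module, then reboot (or reload kvm_intel):
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf

# 2) Expose the host CPU (including VMX) to the Windows guest; VM ID 100 is an example:
qm set 100 --cpu host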
 

meyergru

New Member
Jul 12, 2020
26
1
3
AFAICT, the speed has already picked up by a factor of 1000 just by using the correct parameters and still you present results based on bs=4k and sync=1. Of course these are way slower than anything you would expect. IMHO, these measurements are useless.

There is only one strange result with your Firecuda.

The difference between your Ubuntu machine and your MS-01 Proxmox host is only a factor of 2, and could be because the switchover took a few minutes, during which the cache could have flushed to the slower flash, restoring cache speeds again.

Also, there can be background noise on your proxmox host that invalidates all measurements taken there. At least there is nothing left of your initial claim:

That being said I regret having bought those units as I have a really hard time getting an acceptable speed on the NVMe.
I get around 15-45MB/s write on socket 1 and 2 even though they are running at PCIe 4.0 x4 and PCIe 3.0 x4 ...
I will not discuss this any further. Good luck.
 

berkyl

New Member
Jan 13, 2025
25
3
3
I've run the above rados-bench test maybe 4 or 5 times. That run was just an example. Some are better than others.

I don't have a dedicated 10gb switch, my NAS and other things are on the switch as well. I'm not using jumbo frames either. I won't run the fio test to the raw device, as these are all running workloads.

I also tried a mesh network with the thunderbolt ports and using frr for the routing. I found it plenty fast but super flaky and unreliable, powering off one node would bring down the whole network (unless the TB cable was unplugged).

I'm 100% OK with the throughput and latency of the Ceph cluster. My VMs are all super responsive, backups get done in a reasonable amount of time and things mostly just work. I'm not expecting anything more out of a 10 OSD ceph pool using consumer grade NVME ssd. I think if I want more performance, I'll need enterprise nvme ssd and more OSDs overall. Of course I won't be doing any of that since memory on these is already extremely constrained at 96GB each.
No worries, it would just have been to look for similarities, if there are any, to get some hints for further troubleshooting.

Regarding the mesh network with TB4: you have to add an ifup at the end of the interface config to get the interface up again after a reboot (a rough sketch of the idea is below). If you are interested, I can dig out my documentation and the config from my NUCs.
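A rough sketch of the idea (interface names and placement are assumptions based on common Proxmox TB4 mesh write-ups, not necessarily the exact config in question):

Bash:
# /etc/network/interfaces -- appended after the last auto'd stanza, so the
# Thunderbolt links get re-upped once networking comes up at boot.
# en05/en06 are placeholder names for the Thunderbolt interfaces.
post-up /usr/sbin/ifup en05
post-up /usr/sbin/ifup en06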
 

berkyl

New Member
Jan 13, 2025
25
3
3
AFAICT, the speed has already picked up by a factor of 1000 just by using the correct parameters and still you present results based on bs=4k and sync=1. Of course these are way slower than anything you would expect. IMHO, these measurements are useless.

There is only one strange result with your Firecuda.

The difference between your Ubuntu machine and your MS-01 Proxmox host is only a factor of 2, and could be because the switchover took a few minutes, during which the cache could have flushed to the slower flash, restoring cache speeds again.

Also, there can be background noise on your proxmox host that invalidates all measurements taken there. At least there is nothing left of your initial claim:



I will not discuss this any further. Good luck.
There are a lot of strange results... but think what you want and stay in your bubble.
"is only a factor of 2"... and you're acting as if everything is alright... sure
 

pimposh

hardware pimp
Nov 19, 2022
371
216
43
The KC3000 apparently does not even have a RAM cache
I might be wrong on that, but I think the 1/2TB models got 1GB and the 4TB got 2GB of DRAM. Once the DRAM buffer is depleted, KC3000 drives are known to have writes capped at around 3GB/s (of course not for all kinds of writes).
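One way to actually see where that buffer runs out is to write far more data than any cache could hold and log bandwidth over time; a sketch (file path and size are placeholders):

Bash:
# Sketch only -- sustained sequential write with a per-second bandwidth log
fio --filename=/mnt/scratch/sustained.bin --size=500G \
    --ioengine=libaio --direct=1 --rw=write --bs=1M --iodepth=32 \
    --numjobs=1 --name=sustained \
    --write_bw_log=sustained --log_avg_msec=1000
# Inspect sustained_bw.1.log afterwards to see when the write speed falls off.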
 

berkyl

New Member
Jan 13, 2025
25
3
3
I might be wrong on that, but I think the 1/2TB models got 1GB and the 4TB got 2GB of DRAM. Once the DRAM buffer is depleted, KC3000 drives are known to have writes capped at around 3GB/s (of course not for all kinds of writes).
It does: according to Kingston, the DRAM is a Kingston P04919900E (DDR4), with 1 GB of DRAM per TB of capacity.
 
  • Like
Reactions: pimposh

kevin771

New Member
Oct 28, 2024
5
1
1
@berkyl, from your post yesterday, I ran the following commands on my MS-A1. I had the MS-01, but returned it because it seemed unstable. My MS-A1 has not crashed on me since I got it, 42 days ago.

I have the Kingston NVMe drive that came with the system, but also added a Western Digital Blue. That is the drive where my Ubuntu VM runs, and where I performed the test.

Bash:
sudo fio --filename=/home/kevin/fio.bin --ioengine=libaio --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=32 --runtime=20 --size=10000000000 --time_based --group_reporting --name=test

test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=72.3MiB/s][w=18.5k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=6953: Wed Jan 15 12:35:10 2025
  write: IOPS=20.7k, BW=80.7MiB/s (84.7MB/s)(1615MiB/20002msec); 0 zone resets
    slat (usec): min=2, max=2898, avg= 5.10, stdev=13.21
    clat (usec): min=402, max=31287, avg=1542.57, stdev=539.29
     lat (usec): min=405, max=31368, avg=1547.67, stdev=540.42
    clat percentiles (usec):
     |  1.00th=[  824],  5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[ 1221],
     | 30.00th=[ 1287], 40.00th=[ 1401], 50.00th=[ 1565], 60.00th=[ 1663],
     | 70.00th=[ 1729], 80.00th=[ 1795], 90.00th=[ 1909], 95.00th=[ 2024],
     | 99.00th=[ 2409], 99.50th=[ 3458], 99.90th=[ 5669], 99.95th=[ 6194],
     | 99.99th=[31065]
   bw (  KiB/s): min=71520, max=95800, per=100.00%, avg=82977.03, stdev=5247.37, samples=39
   iops        : min=17880, max=23950, avg=20744.26, stdev=1311.84, samples=39
  lat (usec)   : 500=0.03%, 750=0.68%, 1000=2.34%
  lat (msec)   : 2=91.03%, 4=5.57%, 10=0.33%, 50=0.02%
  cpu          : usr=2.00%, sys=9.79%, ctx=48165, majf=0, minf=13
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,413472,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=80.7MiB/s (84.7MB/s), 80.7MiB/s-80.7MiB/s (84.7MB/s-84.7MB/s), io=1615MiB (1694MB), run=20002-20002msec

Disk stats (read/write):
  sda: ios=0/571664, sectors=0/3780192, merge=0/22123, ticks=0/143258, in_queue=163892, util=79.96%
And your second command, without sync:
Bash:
sudo fio --filename=/home/kevin/fio.bin --ioengine=libaio --direct=1 --rw=write --bs=128k --numjobs=1 --iodepth=32 --runtime=20 --size=10000000000 --time_based --group_reporting --name=test

test: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32
fio-3.36
Starting 1 process
test: Laying out IO file (1 file / 9536MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=4550MiB/s][w=36.4k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=7061: Wed Jan 15 12:42:33 2025
  write: IOPS=35.9k, BW=4487MiB/s (4705MB/s)(87.7GiB/20002msec); 0 zone resets
    slat (usec): min=3, max=1261, avg= 8.97, stdev= 6.71
    clat (usec): min=149, max=30080, avg=881.60, stdev=300.98
     lat (usec): min=154, max=30086, avg=890.56, stdev=300.77
    clat percentiles (usec):
     |  1.00th=[  709],  5.00th=[  758], 10.00th=[  775], 20.00th=[  791],
     | 30.00th=[  807], 40.00th=[  816], 50.00th=[  824], 60.00th=[  832],
     | 70.00th=[  857], 80.00th=[  889], 90.00th=[ 1029], 95.00th=[ 1188],
     | 99.00th=[ 1827], 99.50th=[ 2278], 99.90th=[ 3359], 99.95th=[ 4146],
     | 99.99th=[ 5800]
   bw (  MiB/s): min= 4063, max= 4595, per=100.00%, avg=4488.92, stdev=129.75, samples=39
   iops        : min=32506, max=36764, avg=35911.36, stdev=1037.98, samples=39
  lat (usec)   : 250=0.03%, 500=0.16%, 750=3.55%, 1000=84.47%
  lat (msec)   : 2=11.03%, 4=0.70%, 10=0.05%, 50=0.01%
  cpu          : usr=12.52%, sys=36.05%, ctx=270324, majf=0, minf=13
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,718041,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=4487MiB/s (4705MB/s), 4487MiB/s-4487MiB/s (4705MB/s-4705MB/s), io=87.7GiB (94.1GB), run=20002-20002msec

Disk stats (read/write):
  sda: ios=0/713944, sectors=0/182746880, merge=0/246, ticks=0/600982, in_queue=600992, util=76.21%
Note, I'm running this on the same drive that the VM operating system is installed on. It is not a raw, unformatted drive or anything. Hence, I am specifying an actual file name, and I have to specify a size.

For what it is worth, when I just copy a large file using the host operating system, I get over 2GB/s:
[Screenshot: 1736970854763.png (large-file copy on the host at over 2GB/s)]

If you'd like me to try and run any other commands or comparisons, let me know.
 
  • Like
Reactions: berkyl

berkyl

New Member
Jan 13, 2025
25
3
3
@kevin771 Thank you for providing benchmarks, you gave me hope in finding a solution for my units.

Today I found out that installing Windows on an updated BIOS 1.26 is the key. I don't know what happens during the setup, but it looks like some flags get set, resulting in synced writes increasing 5x and unsynced writes doubling.

Using Proxmox with kernel 6.11, the values are now almost on par with the results CrystalDiskMark shows on Windows, and I really hope they persist.
 

berkyl

New Member
Jan 13, 2025
25
3
3
@anewsome
Regarding rados bench, I now get the following results without any tuning, using a 1M block size (`rados bench -p cpool1 60 write -b 1M`):
Bash:
Total time run:         60.0119
Total writes made:      92144
Write size:             1048576
Object size:            1048576
Bandwidth (MB/sec):     1535.43
Stddev Bandwidth:       24.8844
Max bandwidth (MB/sec): 1606
Min bandwidth (MB/sec): 1488
Average IOPS:           1535
Stddev IOPS:            24.8844
Max IOPS:               1606
Min IOPS:               1488
Average Latency(s):     0.0104181
Stddev Latency(s):      0.00644595
Max latency(s):         0.0428741
Min latency(s):         0.00217387
Cleaning up (deleting benchmark objects)
Removed 92144 objects
Clean up completed and total clean up time :8.60546
As you can see, I now get a steady ~1.5GB/s and roughly three times the IOPS, without much variation.

For a 4K block size, the IOPS are now FIFTEEN times higher (`rados bench -p cpool1 60 write -b 4K`):

Bash:
Total time run:         60.0021
Total writes made:      451516
Write size:             4096
Object size:            4096
Bandwidth (MB/sec):     29.3946
Stddev Bandwidth:       0.399633
Max bandwidth (MB/sec): 30.0195
Min bandwidth (MB/sec): 28.2539
Average IOPS:           7525
Stddev IOPS:            102.306
Max IOPS:               7685
Min IOPS:               7233
Average Latency(s):     0.00212551
Stddev Latency(s):      0.000454698
Max latency(s):         0.0162819
Min latency(s):         0.00116762
Cleaning up (deleting benchmark objects)
Removed 451516 objects
Clean up completed and total clean up time :41.5871
 
  • Like
Reactions: anewsome

renewgeorgia

New Member
Oct 19, 2024
4
0
1
I am considering getting this and using it as my main PC but also as a router... I'm concerned about whether it can handle double duty. Any security issues with doing both?

If I have it as a router and a main work PC, any suggestions on how to set it up?
 

dioda

New Member
Jan 19, 2022
4
3
3
I have a similar setup, without the 25Gbe adapter. I have

  • 5x MS01
  • Same memory
  • 15x Crucial T500 2TB (3 per node)
  • Also running Proxmox
  • Also running Ceph (using 10 OSDs, 2 NVME drives per node)
  • The slowest NVME slot is used for booting each node
Ceph is configured with dedicated 10gb nic for frontend and the other 10gb nic is for Ceph backend. All in all I think it's a very similar setup to what you have. The performance I get is about right on the mark with what I'd expect from consumer drives. Note that consumer drives perform nicely until the cache is filled, and then performance falls dramatically. To avoid the falloff, you'll need enterprise NVME drives. You can see this in my rados-bench chart:

View attachment 41306
I don't recommend Ceph on consumer SSDs; it will quickly write them to death. A 0.3 DWPD write load is already a lot for those drives. Be careful with it :)
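If you do run Ceph on consumer drives anyway, it is worth at least keeping an eye on the wear counters; a minimal sketch (device name is an example):

Bash:
# NVMe SMART log -- "percentage_used" and "data_units_written" track wear
nvme smart-log /dev/nvme0n1 | grep -Ei 'percentage_used|data_units_written'
# or with smartmontools
smartctl -a /dev/nvme0n1 | grep -Ei 'percentage used|data units written'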
 

berkyl

New Member
Jan 13, 2025
25
3
3
I am considering getting this and using it as my main PC but also as a router... I'm concerned about whether it can handle double duty. Any security issues with doing both?

If I have it as a router and a main work PC, any suggestions on how to set it up?
It can handle this for sure, but why install both on the same system? The argument against it isn't system performance; there's a high chance you'll create security holes en masse.

Why not use a cheap N100 system from AliExpress (check out the reviews from Patrick) as a router/firewall with pfSense/OPNsense, and use the MS-01 as your work PC?
 

aliasxneo

New Member
Aug 4, 2018
9
1
3
Hi all,

Does anyone have experience using the i5 variant? My biggest concern is thermals/noise. I would like to put three into a Proxmox cluster and primarily run k8s on top of that. I don't plan to run heavy loads; maybe a small game server is the heaviest.

Everything I'm reading about thermals/noise tends to be about the i9. I'm wondering if the i5 variant fares any better in this regard?