Minisforum MS-01 ProxmoxVE Clusters


NerdAshes

Active Member
Jan 6, 2024
Eastside of Westside Washington
Starting this thread to talk about the popular MS-01 as a Proxmox Virtual Environment node, in a clustered environment.

Some questions I have:
  1. What are the things to look out for?
  2. Should we install other software along with Proxmox on the nodes?
  3. Is your solution successful?
  4. What unique options are available when using the MS-01 as a node?
  5. Is there a minimal or universal "best practice" approach for these nodes?
  6. What are the limitations of the different CPU options?
  7. Are you hosting a unique service?
  8. What are the useful commands to run on the nodes?
  9. Is Ceph the best option for the MS-01 cluster?
  10. What is the town you met your significant other in? What is the first concert you went to? Who was your childhood best friend? What was the name of your first school? What was the name of your first pet? What is your favorite color? Can you mitigate social engineering security risks?
 

NerdAshes

Active Member
Jan 6, 2024
Eastside of Westside Washington
Do you all create LACP bonds (bond0 on the 2.5G ports, bond1 on the 10G ports) and then use VLAN tagging for the different networks?
I added a dual-port NIC and separated most of the traffic onto its own switches.
  • vPro 2.5G port for MGMT traffic, to a management-only switch, up-linked to the router.
  • Other 2.5G port for Corosync traffic, to its own stand-alone switch.
  • First 10G for Public VM traffic, direct to the main LAN switch up-linked to the router.
  • Second 10G for Private VM traffic, to the main LAN switch with isolated ports.*
  • First 100G for Public Ceph traffic, to its own stand-alone switch.
  • Second 100G for Private Ceph traffic, to its own stand-alone switch.
HA transfers take 4-5 seconds - I think that is 99% PVE "thinking about it" time.

I don't think what I did is necessary for such a small cluster - but it is Proxmox VE's "best practices" and it was fun!

*[I may change this to its own switch - just for more ports on the main switch]
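For anyone wanting to replicate that layout on the PVE side, here is a minimal /etc/network/interfaces sketch of roughly what it looks like - the interface names, bridges, and subnets below are placeholders/assumptions, not the exact values from my nodes:

```
# Sketch only - substitute your own NIC names (check `ip link`) and subnets
auto vmbr0                      # MGMT on the vPro 2.5G port
iface vmbr0 inet static
    address 192.168.10.11/24
    gateway 192.168.10.1
    bridge-ports enp87s0
    bridge-stp off
    bridge-fd 0

auto enp90s0                    # Corosync on the other 2.5G port (no bridge needed)
iface enp90s0 inet static
    address 192.168.20.11/24

auto vmbr1                      # Public VM traffic on the first 10G SFP+
iface vmbr1 inet manual
    bridge-ports enp2s0f0np0
    bridge-stp off
    bridge-fd 0

auto vmbr2                      # Private VM traffic on the second 10G SFP+
iface vmbr2 inet manual
    bridge-ports enp2s0f1np1
    bridge-stp off
    bridge-fd 0

auto enp1s0f0np0                # Ceph public on the first ConnectX-5 port
iface enp1s0f0np0 inet static
    address 10.10.50.11/24

auto enp1s0f1np1                # Ceph private (cluster) on the second ConnectX-5 port
iface enp1s0f1np1 inet static
    address 10.10.60.11/24
```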
 

jdpdata

Member
Jan 31, 2024
I added a dual-port NIC and separated most of the traffic onto its own switches.
  • vPro 2.5G port for MGMT traffic, to a management-only switch, up-linked to the router.
  • Other 2.5G port for Corosync traffic, to its own stand-alone switch.
  • First 10G for Public VM traffic, direct to the main LAN switch up-linked to the router.
  • Second 10G for Private VM traffic, to the main LAN switch with isolated ports.*
  • First 100G for Public Ceph traffic, to its own stand-alone switch.
  • Second 100G for Private Ceph traffic, to its own stand-alone switch.
HA transfers take 4-5 seconds - I think that is 99% PVE "thinking about it" time.

I don't think what I did is necessary for such a small cluster - but it is Proxmox VE's "best practices" and it was fun!

*[I may change this to its own switch - just for more ports on the main switch]
I don't have 100G NICs or multiple separate standalone switches to do this. I do have a UDM-SE, a USW-Pro-MAX-24-POE, and a USW-Pro-Aggregation; both switches are L3. Not running anything crazy, just Plex and a few LXCs/VMs. Do you think I can combine Public VM traffic and Public Ceph traffic on the same interface? Same for Private VM traffic and Private Ceph traffic on the 2nd 10G SFP+? I've read Ceph should be on its own interface, so maybe Public/Private Ceph on the 1st 10G SFP+, then VM Public/Private on the 2nd 10G SFP+? Interested in hearing your thoughts.
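Either split works mechanically - it mostly comes down to which subnets you hand Ceph in /etc/pve/ceph.conf. A hedged sketch of the layout described above (both Ceph networks riding the 1st 10G SFP+; the subnets are examples):

```
[global]
    # both Ceph networks on the same subnet / 1st 10G SFP+ in this example;
    # point cluster_network at its own subnet later if you split them
    public_network  = 10.10.50.0/24
    cluster_network = 10.10.50.0/24
```

The same two values can also be set up front at cluster creation time with something like `pveceph init --network <subnet> --cluster-network <subnet>`.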
 

jdpdata

Member
Jan 31, 2024
Also, why wouldn't bonding the 2x 2.5G and 2x SFP+ 10G interfaces to create bond0 and bond1 be preferable? Wouldn't that create a more redundant network with higher bandwidth? I could then use VLAN tagging to create separate networks.
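For reference, a rough sketch of what that bond + VLAN-aware bridge setup might look like in /etc/network/interfaces on the 10G side - LACP also has to be configured on the switch, and the port names, VLAN IDs, and addresses below are placeholders:

```
auto bond1
iface bond1 inet manual
    bond-slaves enp2s0f0np0 enp2s0f1np1   # the two 10G SFP+ ports (example names)
    bond-mode 802.3ad                     # LACP - needs a matching LAG on the switch
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# host address on a tagged VLAN (e.g. a Ceph VLAN); guests just get a VLAN tag on vmbr1
auto vmbr1.50
iface vmbr1.50 inet static
    address 10.10.50.11/24
```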
 

NerdAshes

Active Member
Jan 6, 2024
Eastside of Westside Washington
I plan to buy 3 MS-01s to do a Proxmox cluster w/ Ceph on the 10G SFP+.
Are they still bugged, or shipped with a lot of issues?
I've not noticed any bugs or had any issues with the MS-01 as a PVE node.

So would you guys recommend me to buy 3 of them?
Sure? If they fit where you want them to fit and they have the specifications that meet your needs. Only you know what your needs are, so unless you tell us more, that's all I can say!

Isn't there any heat issue? Fans to add?
Not really any heat issues - unless you make some. If you populate the slots they provide with cool-running components, it's a good form factor and adequately cooled.
If you start populating the MS-01 with hot-running components and fill every open slot - it's going to run hot.
I added a dual-port ConnectX-5 that needed a case fan, which Minisforum does not install in the MS-01. For that I ended up adding a fan and cutting a big hole in the case. I could have just added a USB fan to the top of the case, blowing into the little holes that are already there. It just depends on what you're going to end up doing with your add-ons.

Are the bugs with Proxmox code actually fixed?
I'm not familiar with any bugs - what bugs are you talking about?
 

spleenftw

New Member
Apr 23, 2024
I've not noticed any bugs or had any issues with the MS-01 as a PVE node.
Okay, that's great - the first thing is that there are no reboots or errors because of the CPU.

Sure? If they fit where you want them to fit and they have the specifications that meet your needs. Only you know what your needs are, so unless you tell us more, that's all I can say!
I was going to rack them, so yeah, I can fit them wherever I want to.

Not really any heat issues - unless you make some. If you populate the slots they provide with cool-running components, it's a good form factor and adequately cooled.
If you start populating the MS-01 with hot-running components and fill every open slot - it's going to run hot.
I added a dual-port ConnectX-5 that needed a case fan, which Minisforum does not install in the MS-01. For that I ended up adding a fan and cutting a big hole in the case. I could have just added a USB fan to the top of the case, blowing into the little holes that are already there. It just depends on what you're going to end up doing with your add-ons.
So I should still consider adding a USB fan on top of it, okay.

I'm not familiar with any bugs - what bugs are you talking about?
There are some issues with the microcode and defective CPUs, and some homelabbers ended up returning them.
 

jdpdata

Member
Jan 31, 2024
All three of my MS-01s have been flawless. Not a single reboot or any heat issues. I'm populating each of my 3 nodes with 3x NVMe. Slot 1 is the Kingston 1TB that comes with the 12900H kit, and slots 2 and 3 have Samsung PM983a 22110 enterprise NVMe. They do run a little warm, but the on-board fan has so far kept them at around ~52°C. All 3 of mine are mounted inside my 19" rack without any overheating so far. That may change once I set up the Ceph cluster... fingers and toes crossed.

Just to add: once I received each unit I ran MemTest for 24 hrs. All 3 units passed without any issues. Then and only then did I install Proxmox and start configuring. Early units may have had bad thermal paste; mine are so far under control thermal-wise, so I didn't feel the need to re-paste the CPU.
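For keeping an eye on those NVMe and CPU temps from the shell, something like this works on a stock PVE install - the packages are the standard Debian ones and the device paths are examples:

```
apt install nvme-cli smartmontools lm-sensors
nvme smart-log /dev/nvme0n1 | grep -i temperature    # per-drive NVMe temperature
smartctl -a /dev/nvme1n1 | grep -i temperature       # SMART view of the same data
sensors                                              # CPU/package temps (run sensors-detect once first)
```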

[Attached image: PXL_20240422_183511838.jpg]
 

NerdAshes

Active Member
Jan 6, 2024
Eastside of Westside Washington
Also, why wouldn't bonding the 2x 2.5G and 2x SFP+ 10G interfaces to create bond0 and bond1 be preferable? Wouldn't that create a more redundant network with higher bandwidth? I could then use VLAN tagging to create separate networks.
I based mine off of what PVE suggested for each network.
  • They expressed that latency is a bigger issue than speed for Corosync (it uses very little bandwidth, but it is very latency sensitive).
  • They mentioned Ceph private and public traffic should be on their own separate networks.
    • Private Ceph should be very fast (25 Gbps+ for SSD-based Ceph) with very low latency.
    • Public Ceph should be fast too (10 Gbps+) and also very low latency.
  • The Public VM network is just how the rest of your LAN will communicate with the hosted services, and it only needs to be as fast as the devices accessing those services can connect. However, it should be able to handle all of the devices connecting at once, in my opinion. Latency is possibly not as big of a concern here, depending on what you're hosting.
  • The Private VM network is just for VM-to-VM communication, and it is suggested to be on its own - to avoid bogging down the other networks.
If your switches are L3, with hardware offload and very fast, and your NICs, CPU, etc. are not bottlenecking the traffic, and the latency stays low... then sure, aggregate everything - heck, slap down MLAG and get even more redundancy!
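One more note on the Corosync side: if you do give it a dedicated network, PVE also lets you add a second Corosync link as a fallback when you build the cluster. A hedged sketch - the cluster name and addresses are examples, with link0 on the stand-alone Corosync switch and link1 on the MGMT network:

```
# on the first node
pvecm create ms01-cluster --link0 192.168.20.11 --link1 192.168.10.11

# on each additional node (the link addresses are the joining node's own)
pvecm add 192.168.20.11 --link0 192.168.20.12 --link1 192.168.10.12
```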
 

jdpdata

Member
Jan 31, 2024
That's the reason I got both L3 switches with hardware inter-VLAN routing, in planning for the Ceph deployment. I think for my use case, bonding and aggregating the interfaces and then using VLAN tagging will work just fine. It's just a homelab, not an enterprise environment.
 

NerdAshes

Active Member
Jan 6, 2024
Eastside of Westside Washington
Okay, that's great - the first thing is that there are no reboots or errors because of the CPU.
Nope, no issues with mine. I did boot each MS-01 into Windows and update everything before I reformatted and installed Proxmox, the U.2 drives, NIC, and Coral TPU.

I was going to rack them, so yeah, I can fit them wherever I want to.
Just make sure you're okay with the form-factor and basic limitations of it before you buy three of them. I mentioned in my review that the form-factor is something I wish was different. There are limitations of the MS-01. That said, it's pretty awesome for what it is.

So I should still consider adding a USB fan on top of it, okay.
Here is what I did for a 140mm fan mounted on top. I also got about the same cooling just slapping a couple of loose 80mm USB fans blowing into the little holes on the top. Others have mounted fans under it for their SSDs - I haven't felt the need to (I have an M.2 NVMe and a U.2, and temps seem fine).

There are some issues with the microcode and defective CPUs, and some homelabbers ended up returning them.
I installed the intel-microcode package and have no issues to report!
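For anyone who hasn't done it yet, the install is just the Debian package - a sketch for PVE 8 (Debian 12 base); the repo line is an example, adjust to whatever mirror you use:

```
# make sure the Debian repo line includes non-free-firmware, e.g. in /etc/apt/sources.list:
#   deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
apt update && apt install intel-microcode
reboot
journalctl -k | grep -i microcode    # confirm the updated microcode revision loaded
```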
I do think it's smart to get your MS-01 from Amazon. If there is a random issue with one that you get, it's far simpler to return it to them rather than to Minisforum. I also use a credit card that guarantees my electronics for 3 years as a cardholder benefit. I know Amazon sells Allstate insurance coverage too...
Remember - the unhappy people talk louder than the happy people. There are thousands of happy MS-01 owners that said nothing, because they have no issues to be unhappy about!
 

NerdAshes

Active Member
Jan 6, 2024
Eastside of Westside Washington
That's the reason I got both L3 switches with hardware inter-VLAN routing, in planning for the Ceph deployment. I think for my use case, bonding and aggregating the interfaces and then using VLAN tagging will work just fine. It's just a homelab, not an enterprise environment.
You can also use the USB4 ports with Thunderbolt-Net for the (meshed) Ceph network. That gets you up near 12 Gbps. There are some good write-ups on GitHub and the Proxmox forums for it. I did it on my last mini-PC cluster and it was okay... the latency was not where I wanted it... so here I am with MS-01s and ConnectX-5s (WAY overkill).
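The gist of the Thunderbolt-Net setup, if anyone wants to try it, is just loading the kernel modules and putting addresses on the resulting interfaces - a very rough sketch only; the interface name varies by setup (en05/en06 in some write-ups, thunderbolt0 by default), so treat the names and addresses below as assumptions and see the GitHub/forum write-ups for the full routed-mesh version:

```
# load the modules now and on every boot
echo thunderbolt >> /etc/modules
echo thunderbolt-net >> /etc/modules
modprobe thunderbolt
modprobe thunderbolt-net

# give each Thunderbolt link a point-to-point address (names/addresses are examples)
ip link set thunderbolt0 up
ip addr add 10.0.0.1/30 dev thunderbolt0
```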
 

jdpdata

Member
Jan 31, 2024
Yup, already researched the TB ring network. It seems to work OK for people who have done it... but it's still a bit of an experiment at this point. I think I'll just keep it simple with bonded interfaces and VLAN tagging.
 

NerdAshes

Active Member
Jan 6, 2024
Eastside of Westside Washington
All three of my MS-01s have been flawless. Not a single reboot or any heat issues. I'm populating each of my 3 nodes with 3x NVMe. Slot 1 is the Kingston 1TB that comes with the 12900H kit, and slots 2 and 3 have Samsung PM983a 22110 enterprise NVMe.

View attachment 36271
If you're using the Kingston as the Proxmox VE OS drive and the Samsung PM983As as Ceph VM drives - I think you should swap them (moving the Kingston to the PM983A's slot next to the Wi-Fi card). Unless you RAID-installed the OS on the PM983As? If so, I'm not sure I'd trust that Kingston to be a solid Ceph drive.
 

jdpdata

Member
Jan 31, 2024
Slot 1 doesn't fit a 22110 NVMe drive, isn't that correct? I think slot 1 is only for 2280 or U.2 drives. I thought about swapping them, but unfortunately my longer 22110 enterprise drives can only fit slots 2 & 3. The Kingston will be the PVE OS drive, and the two PM983a will be Ceph OSDs. Not sure if I'll create two storage pools (fast and slow), since slot 3 is only Gen3 x2. But I think the speed of Gen3 x2 is plenty fast enough for 10GbE anyway. We shall see once I get everything set up and do some performance testing.
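When you get to that point, turning the two PM983a's into OSDs is a one-liner per drive - a sketch with example device paths (confirm yours with lsblk first):

```
pveceph osd create /dev/nvme1n1
pveceph osd create /dev/nvme2n1
# if you do end up wanting separate fast/slow pools, CRUSH device classes are the
# usual way to split them - but with two identical NVMe per node, one pool is simpler
```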
 

NerdAshes

Active Member
Jan 6, 2024
Eastside of Westside Washington
Slot 1 doesn't fit a 22110 NVMe drive, isn't that correct? I think slot 1 is only for 2280 or U.2 drives. I thought about swapping them, but unfortunately my longer 22110 enterprise drives can only fit slots 2 & 3. The Kingston will be the PVE OS drive, and the two PM983a will be Ceph OSDs. Not sure if I'll create two storage pools (fast and slow), since slot 3 is only Gen3 x2. But I think the speed of Gen3 x2 is plenty fast enough for 10GbE anyway. We shall see once I get everything set up and do some performance testing.
Looks like there is a 110 mount hole right beyond the 80 one...?

I guess someone said that the 80mm post is not removable in that slot - that seems odd to me. Why have the 110 post just after it, then? I wonder if they just didn't grab it with some strong pliers to remove the 80 mount that is in the way?
 

jdpdata

Member
Jan 31, 2024
Yes, the 80mm post is non-removable on slot 1. I tried my damnedest with pliers but couldn't remove it. Without it removed, the 22110 NVMe sits a couple mm too high, hitting the fan shroud and making a vibrating noise. I didn't want to force it and risk breaking something. I'm fine with Gen3 x2. That's still rated at 2GB/s - plenty fast for 10GbE.
 

NerdAshes

Active Member
Jan 6, 2024
Eastside of Westside Washington
Yes, the 80mm post is non-removable on slot 1. I tried my damnedest with pliers but couldn't remove it. Without it removed, the 22110 NVMe sits a couple mm too high, hitting the fan shroud and making a vibrating noise. I didn't want to force it and risk breaking something. I'm fine with Gen3 x2. That's still rated at 2GB/s - plenty fast for 10GbE.
I'd be up in that with a grinder lol ... but you're right, Ceph will probably saturate your network without adding more speed to the drives.
 

dialbat

New Member
Feb 23, 2024
All three of my MS-01s have been flawless. Not a single reboot or any heat issues. I'm populating each of my 3 nodes with 3x NVMe. Slot 1 is the Kingston 1TB that comes with the 12900H kit, and slots 2 and 3 have Samsung PM983a 22110 enterprise NVMe. They do run a little warm, but the on-board fan has so far kept them at around ~52°C. All 3 of mine are mounted inside my 19" rack without any overheating so far. That may change once I set up the Ceph cluster... fingers and toes crossed.

Just to add: once I received each unit I ran MemTest for 24 hrs. All 3 units passed without any issues. Then and only then did I install Proxmox and start configuring. Early units may have had bad thermal paste; mine are so far under control thermal-wise, so I didn't feel the need to re-paste the CPU.
Can you please share how you configured the 2 Samsungs in Proxmox? ZFS pool, RAID?
 

Techrantula

New Member
Apr 24, 2024
My people!

I recognize @jdpdata from r/homelab so I think I’m in the right spot!

I ordered 3x i9-12900H with the 32GB RAM + 1TB SSD add-on last week. Already got the DHL shipping notification last night. Will be interesting to see if it's all 3.

I wish I knew about the Amazon situation last week, though. Would make it a lot easier! I’d prob eat the extra cost for the 13900 if it meant getting Amazon return policy and quicker shipping.

I'm a recently-former VMware employee. Figured it was time to diversify a bit after 15 years in that ecosystem as a customer and then an SE. Got a new gig as an SE somewhere else and am getting hands-on with their tech. Just needed some compute in my lab to run virtual workloads and firewalls, run the different appliance versions my customers are on, etc.

I need to go ahead and order additional SSDs now.
 