I have 5x MS01 in a Proxmox cluster, running about 40 "permanent" always-on VMs and 10 always-on LXC containers. Storage is 15x 2TB Crucial T500 (3 per MS01). Ceph runs both its public and backend networking on bond1, which is 2x 10Gb on each node; no transceivers, just direct cables from each MS01 to the 10Gb switch. Bond0 is 2x 2.5Gb on each MS01. Both bond0 and bond1 are LACP.
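For anyone curious how the networking side hangs together, here's a minimal sketch of what the bond1 / Ceph config looks like on one node. The interface names (enp2s0f0/enp2s0f1 for the two SFP+ ports) and the 10.10.10.0/24 subnet are placeholders, not my actual addressing, so adjust to your own NIC names and subnets:

# /etc/network/interfaces (excerpt): LACP bond over the two 10Gb ports
auto bond1
iface bond1 inet static
        address 10.10.10.11/24
        bond-slaves enp2s0f0 enp2s0f1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        bond-miimon 100

# /etc/pve/ceph.conf (excerpt): public and backend traffic both ride bond1
[global]
        public_network = 10.10.10.0/24
        cluster_network = 10.10.10.0/24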
The 40 VMs are really just a mix of everything I need: DHCP, DNS, Active Directory, a Kubernetes worker on each node, Oracle databases, nested ESXi and vCenter, Proxmox Backup Server, pfSense, Plex, iVentoy (the PXE-boot version of Ventoy), Azure DevOps (git repository), MeshCommander, MinIO, Kasm, a lot of stuff. Kubernetes hosts a bunch of databases, a media manager, Guacamole, FreshRSS and probably 50 other deployments in total.
I have them mounted vertically in 6U of a super shallow rack, only 12 inches deep. The front of the rack has 6U of fans pushing air directly into the front of the MS01s. To get the most air through them, I designed and 3D printed a custom carrier for the MS01. The MS01s slide into place and lock with a latch at the back of the rack.
The cluster also has 5x i7 Intel NUCs, not because I designed it that way, but because I happen to have them already. Currently the GPUs are Thunderbolt-attached to the i7 NUCs and passed through with PCI passthrough to the VMs running Ollama, Open WebUI, Stable Diffusion, ComfyUI, Blender rendering and PiperTTS voice training, mostly.
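If anyone wants to replicate the Thunderbolt eGPU passthrough, this is roughly the shape of it on the Proxmox side. The PCI address (0000:2b:00) and VM ID (201) below are placeholders; check your own lspci output and IOMMU groups before copying anything:

# one-time on the NUC: enable IOMMU in the kernel cmdline, then update-grub and reboot
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# confirm the eGPU shows up over Thunderbolt and note its PCI address
lspci -nn | grep -iE 'vga|3d|nvidia'

# sanity-check that the GPU sits in its own IOMMU group
find /sys/kernel/iommu_groups/ -type l

# hand the whole device (GPU plus its audio function) to the Ollama/ComfyUI VM
# (the VM needs the q35 machine type for pcie=1)
qm set 201 -hostpci0 0000:2b:00,pcie=1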
The rack will get a lot cleaner once I design mounts for the NUCs, or just get rid of them and finish the rack with more MS01s. The single 1Gb NIC on the NUCs really limits their usefulness. I also need to design/make mounts for that growing collection of power bricks under the switch. They aren't getting any cooling currently, but the MS01s and NUCs are icy cold in normal operation.
(Photos of the rack attached.)