Okay. The sled is for U.2 drives (drat). Guess I'm buying some NVMe drives instead and using my SSDs in a NAS box.
> After messing with a handful of dual M.2 PCIe cards, I finally got one that fits the MS-01 with bifurcation running on the card itself.
> TXB122 PCIe 3.1 x8 ASM2812 from AliExpress
> Now have four 4TB storage drives and a 1TB as the boot drive.
> View attachment 37167
> Most cards with bifurcation are too long, but this one had a perforation on the PCB to snap it off if you don't need to mount 22110 NVMe's. Also did the mod of covering the PCIe pins with Kapton tape, so all of my 64GB of RAM shows up just fine.
> Ran some benchmarks: the two drives on the PCIe card deliver twice the read speed of the two drives in the native M.2 slots (writes are the same).
> Now off to test redundant software RAIDs.
> View attachment 37168
> View attachment 37169

I have the same card, but the chipset on it is reaching almost 100 °C. At one point the one NVMe and the one M.2 USB3 card I have on it disappeared from Proxmox, and only a restart solved it.
> Has anyone tried a quad NVMe PCIe adapter that has onboard bifurcation, like this https://www.amazon.com/gp/aw/d/B0CCNL7YD8/ref=dp_ob_neva_mobile
> I wonder, since two SSDs go on the back of the card, if they'd be super hot or even fit at all?

Read my comment above: the heat must be enormous, not only for the underside NVMe's but also from the chipset of the card.
> After messing with a handful of dual M.2 PCIe cards, I finally got one that fits the MS-01 with bifurcation running on the card itself.
> TXB122 PCIe 3.1 x8 ASM2812 from AliExpress
> Now have four 4TB storage drives and a 1TB as the boot drive.

I have this same card too, although I got mine off Amazon. The system I originally bought it for DOES support bifurcation, but I wanted a card that would work in just about any system. In my testing it worked well, was plenty fast, and I was pleased with it. I also had to snap mine to fit into a rack server that was tight on space, so it'll only be running 2280 drives from here on out.
> Has anyone tried a quad NVMe PCIe adapter that has onboard bifurcation, like this https://www.amazon.com/gp/aw/d/B0CCNL7YD8/ref=dp_ob_neva_mobile
> I wonder, since two SSDs go on the back of the card, if they'd be super hot or even fit at all?

The card is too long, and there's barely any room under the PCIe card for anything. When I was testing with a Gigabyte CMT4032 (no rear NVMe's), even the tiny chips on the back of the PCIe card were interfering with components on the MS-01 motherboard.
> So I did it also on my MS-01 (only have MX-4).
> Idle temp is about 4-5 °C less than before:
> View attachment 35113
> But under load there is a huge difference
> it doesn't go over 79 °C (before, it could hit 90 °C):
> View attachment 35114

What app gives you this CPU temperature readout?
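For anyone who would rather script this than eyeball btop: per-core readouts like the ones in those screenshots come from the kernel's hwmon sensors under sysfs. A minimal reader, assuming the standard Linux hwmon layout (the base path is a parameter so it can be pointed at a test tree):

```python
import pathlib

def read_hwmon_temps(base="/sys/class/hwmon"):
    """Collect {chip/label: degrees_C} from an hwmon-style sysfs tree.

    Each hwmonN directory holds tempN_input files (millidegrees C)
    and optional tempN_label files naming the sensor.
    """
    temps = {}
    for dev in pathlib.Path(base).glob("hwmon*"):
        name = dev / "name"
        chip = name.read_text().strip() if name.exists() else dev.name
        for inp in dev.glob("temp*_input"):
            label_file = dev / inp.name.replace("_input", "_label")
            label = label_file.read_text().strip() if label_file.exists() else inp.name
            temps[f"{chip}/{label}"] = int(inp.read_text()) / 1000.0
    return temps
```

btop, lm-sensors, and friends read the same files, so this should agree with what the screenshots show.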
After messing with a handful of dual M.2 PCIe cards, I finally got one that fits the MS-01 with bifurcation running on the card itself.
TXB122 PCIe 3.1 x8 ASM2812 from AliExpress
Now have four 4TB storage drives and a 1TB as the boot drive.
View attachment 37167
Most cards with bifurcation are too long, but this one had a perforation on the PCB to snap it off if you don't need to mount 22110 NVMe's. Also did the mod of covering the PCIe pins with Kapton tape, so all of my 64GB of RAM shows up just fine.
Ran some benchmarks: the two drives on the PCIe card deliver twice the read speed of the two drives in the native M.2 slots (writes are the same).
Now off to test redundant software RAIDs.
View attachment 37168
View attachment 37169
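For a rough cross-check of read numbers like these without a dedicated benchmark tool, a minimal sequential-read timer works. Caveat: without O_DIRECT this goes through the page cache, so only a first pass over a cold file approximates raw drive speed; fio is the proper tool for real comparisons.

```python
import time

def sequential_read_mb_s(path, block_size=1 << 20):
    """Read a file front to back in 1 MiB chunks; return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:      # unbuffered file object
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1 << 20) / elapsed
```

Run it against the same large file on a drive in the card slot and one in a native M.2 slot to reproduce the comparison above.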
Looks like btop.What app gives you this CPU temperature read out?
I think you are right. It's funny because I have used btop before and I didn't remember having per core temps (I am guessing I usually run it in VMs and they didn't have access to the sensors). Anyway, I installed btop on proxmox and it sure does look just like it. Thanks!Looks like btop.
> Anyone having issues putting a 25G NIC in? Trying to use an Intel 810 I have sitting around, which is 4.0 x8 but doesn't seem to be registering for full PCIe bandwidth:
> root@pve1:~# dmesg -T | grep "ice 0000"
> [Tue Jun 4 20:59:07 2024] ice 0000:01:00.0: The DDP package was successfully loaded: ICE OS Default Package version 1.3.36.0
> [Tue Jun 4 20:59:07 2024] ice 0000:01:00.0: 15.753 Gb/s available PCIe bandwidth, limited by 16.0 GT/s PCIe x1 link at 0000:00:01.0 (capable of 126.024 Gb/s with 16.0 GT/s PCIe x8 link)
> I'm also running 96G Crucial RAM, 2 Samsung 22110s in RAID1, and 1 Samsung U.2 3.84TB, all of which has been stable for days. Just added the NIC today, as I'm preparing to build out the rest of my cluster and run Ceph across the U.2 via the 25G NIC. Wondering if anyone has suggestions, or if I need to revert to some ConnectX-4 Lx's.

What card exactly is it? Is it HPE-branded or anything? Maybe you need to enter the card BIOS, if it's shown at boot, to alter some settings.
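The dmesg numbers are consistent with the link training at x1 instead of x8: at 16 GT/s with 128b/130b encoding, one lane carries just under 16 Gb/s of usable bandwidth. A quick sanity check (the kernel's figures of 15.753 and 126.024 Gb/s match within truncation/rounding):

```python
def pcie_bandwidth_gbps(gt_per_s, lanes):
    """Usable bandwidth for a PCIe Gen 3/4/5 link (128b/130b encoding)."""
    return gt_per_s * 128 / 130 * lanes

x1 = pcie_bandwidth_gbps(16.0, 1)   # ~15.75 Gb/s, the "available" figure
x8 = pcie_bandwidth_gbps(16.0, 8)   # ~126.0 Gb/s, the "capable of" figure
```

So the question is why the slot negotiated one lane: reseating, bifurcation settings, or the upstream port at 0000:00:01.0 are the usual suspects.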
> What card exactly is it? Is it HPE-branded or anything? Maybe you need to enter the card BIOS, if it's shown at boot, to alter some settings.
> Also, maybe the card shows something in the BIOS.. have you checked everywhere?

E810-XXVDA2. I also updated to the latest Intel 810 firmware, which did not fix anything either. Confirmed all BIOS settings are in order, with ASPM disabled; toggling all the standard passthrough stuff on/off does nothing for it. Nothing odd or fancy set in the card's BIOS/config settings either (when it shows up... more below).
> I have the same card but the chipset is reaching almost 100 °C on this card.

Good to know. I've run stress-test benchmarks on the NVMe's on the card for a long time, but haven't seen any issues. Maybe the fact that these 990 Pros are single-sided makes a difference? The NVMe's themselves never got over 45 °C. How are you seeing the temp on the card chipset?
> I have a LSI 9206-16E PCI-E 3.0 x8 HBA IT MODE w/ SAS2308 and it doesn't recognize the hard drives in Unraid.
> I can see
> IOMMU group 20: [1000:0087] 03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
> IOMMU group 21: [1000:0087] 05:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
> in Unraid, but it doesn't see the drives at all.
> I think there is another person in this thread who also mentioned his LSI 9206 not working, but his might be dead.
> Does anyone know anything I can try? Would taping the pins help?

Just wanted to note that the LSI 9206-16e actually DOES work.
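Side note on reading that output: both entries carry PCI ID [1000:0087], so the SAS2308 controllers enumerate fine in their own IOMMU groups, which suggests the PCIe side is healthy. If you want to pull those IDs out of pasted listings programmatically, a small (hypothetical) parser:

```python
import re

# group number, [vendor:device] ID, and bus:device.function address
IOMMU_LINE = re.compile(
    r"IOMMU group (\d+):\s+\[([0-9a-f]{4}):([0-9a-f]{4})\]\s+(\S+)"
)

def parse_iommu(text):
    """Return (group, vendor_id, device_id, bdf) tuples from pasted output."""
    return [m.groups() for m in IOMMU_LINE.finditer(text)]

sample = """IOMMU group 20: [1000:0087] 03:00.0 Serial Attached SCSI controller
IOMMU group 21: [1000:0087] 05:00.0 Serial Attached SCSI controller"""
```

Feeding `sample` through `parse_iommu` yields both controllers with vendor 1000 (Broadcom/LSI) and device 0087.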
> Noob here with the MS-01 but I have been following the thread. Currently have the 12600H variant with 64GB of RAM and a 2TB U.2 Gen 4x4.
> I am looking for some advice on which HBA card to get.
> I am looking to have 6-8 SATA or SAS SSDs in an external drive cage.
> Does anyone have a suggestion for which generation of LSI HBA (or other) I may need to run these drives with TRIM support?
> I have experience with spinning rust using older LSI cards like the 9211, but no experience with SSD support. It will be going in the x16 slot on my MS-01. I can 3D print and/or create a fan shroud if the card runs too hot.

following
> I have the same card but the chipset is reaching almost 100 °C on this card. At one point the one NVMe and the one M.2 USB3 card I have on it disappeared from Proxmox, and only a restart solved it.
> This is one of the reasons I took the MS-01 out of its case and installed it in an HTPC case; now 2x 120mm Noctuas blow air on it and it works fine.
> Try to push the disks and see what you get...
> Read my comment above: the heat must be enormous, not only for the underside NVMe's but also from the chipset of the card.

Does the motherboard use standard ITX hole spacing?