Hi folks,
I built a TrueNAS machine around a BKHD N510X motherboard a couple of months ago. It performs well and has a good feature list (I went with the base-model Celeron N5100 option). I spec'd it with 16GB of DDR4-2666, an Intel 600p 256GB NVMe boot SSD, and 6x WD Enterprise 6TB SATA drives in RAID-Z1 (intentional, as the system is backed up). I've also fitted an X520 10Gb card and the system can actually make use of it - I get a steady 3.5Gbps during a ZFS send, occasionally touching 5Gbps.
The main reason I built it was to serve an iSCSI LUN (a zvol) shared between my 4 Proxmox hypervisors (via the 10Gb card into a dumb switch, then 2.5Gb NICs on each hypervisor). Whilst it works okay in this role, the spinning disks are quite a limiting factor - they're laggy and backups are very slow (3 hours per node). I have 4 spare 512GB SATA SSDs, which got me thinking: what if I created a new zpool out of them and used that for Proxmox? The trouble is that the motherboard only has 6 SATA ports and I'm using all of them. The board does have an m.2 NVMe slot, though, and I have no need for NVMe speeds on the boot volume.
I bought a JMicron JMB585 m.2-to-5x-SATA adapter (before learning they have a poor reputation), but once fitted it isn't detected at all - Linux doesn't show it under lspci. I then bought an ASMedia ASM1166 m.2-to-6x-SATA adapter instead, since the board's onboard SATA controller is already an ASM1166 and that one is stable. Still no joy, although this one does at least light up the port LEDs on power-up.
Since the 600p is NVMe and is detected and bootable, I'm not sure why neither SATA card is being detected at all. Not bootable I'd understand, but they're just not appearing, full stop, even with a SATA SSD plugged into the card.
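In case it helps anyone reproduce what I'm seeing, here's a rough Python sketch (my own diagnostic hack, not any vendor tooling) that walks /sys/bus/pci/devices and flags anything with the SATA/AHCI class code - essentially what lspci -nn reports. If a controller doesn't show up here, it never enumerated on the bus at all:

```python
#!/usr/bin/env python3
# Minimal sketch: list PCI devices via sysfs and flag SATA (AHCI) controllers,
# to confirm whether an add-on m.2 SATA controller enumerates at all.
# Assumes a Linux host with sysfs mounted at /sys.
from pathlib import Path

# Vendor IDs I'd expect to see here (JMicron, ASMedia, Intel).
VENDORS = {0x197B: "JMicron", 0x1B21: "ASMedia", 0x8086: "Intel"}

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = int((dev / "vendor").read_text(), 16)
    device = int((dev / "device").read_text(), 16)
    class_code = int((dev / "class").read_text(), 16)
    # Base class 0x01 (mass storage), subclass 0x06 (SATA/AHCI).
    is_sata = (class_code >> 8) == 0x0106
    name = VENDORS.get(vendor, f"{vendor:04x}")
    tag = "  <-- SATA controller" if is_sata else ""
    print(f"{dev.name}  {name} {device:04x}  class={class_code:06x}{tag}")
```

On my box this only ever shows the onboard ASM1166, never the m.2 card, whichever adapter is fitted.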
The motherboard's BIOS is up to date. There's a section under Chipset -> PCIe that toggles the various PCIe devices, and it lists "PCIe x1 to SSD", but the only options are 'Enabled' and 'Disabled'. I've tried disabling one of the 2.5Gb NICs, thinking it might free up a PCIe lane, but no joy.
I'd rather not lose the 10Gb card as it's useful, and its slot is only PCIe 2.0 x2 anyway, while the NVMe slot is PCIe 3.0 x1, so there's not a whole lot of bandwidth difference between them. Both SATA cards claim to be PCIe 3.0 x2; since PCIe devices are supposed to negotiate down to whatever link width is available, in theory they should both train at x1 and work fine, right?
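For what it's worth, here's another rough sketch (same caveat - just my own hack) that prints the negotiated vs. maximum link width and speed for each PCI device from sysfs; with the 600p fitted it at least confirms what the m.2 slot actually trains at:

```python
#!/usr/bin/env python3
# Minimal sketch: report negotiated vs. maximum PCIe link speed/width for
# every PCI device, to check what the m.2 slot trains at with a known-good
# NVMe drive (e.g. the 600p) installed. Assumes Linux with sysfs at /sys;
# the link attributes are missing or unreadable on some devices, hence the
# fallback to "n/a".
from pathlib import Path

def read(attr: Path) -> str:
    try:
        return attr.read_text().strip()
    except OSError:
        return "n/a"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cur_speed = read(dev / "current_link_speed")
    cur_width = read(dev / "current_link_width")
    max_speed = read(dev / "max_link_speed")
    max_width = read(dev / "max_link_width")
    print(f"{dev.name}: width x{cur_width} (max x{max_width}), "
          f"speed {cur_speed} (max {max_speed})")
```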
Any idea what determines whether or not an m.2 slot is usable for other PCIe devices?