Hi,
I am currently running Proxmox on the following hardware:
- Motherboard: ASRock Rack EPC612D4U
- CPU: Intel Xeon E5-2640 v3 (20M cache, 2.60 GHz)
- PCIe add-on card for NVMe drives: Supermicro AOC-SLG3-2M2 (holds up to two NVMe SSDs). So far I use this card with one NVMe drive for VMs and containers.
- Boot Drive: 250GB SSD (I forgot the brand) connected via SATA. I was not able to boot from the NVMe for some reason.
- Software: On top of Proxmox I am running TrueNAS (passing through the onboard HBA incl. HDDs) as well as multiple VMs and containers.
The CPU provides 40 PCIe lanes, but the motherboard has some limitations when combining things:
- SLOT7: PCIe3.0 x16, auto switch to x8 when SLOT6 is occupied
- SLOT5: PCIe3.0 x16, auto switch to x8 when LSI3008 is populated
- SLOT6: PCIe3.0 x8
- The motherboard also supports bifurcation on all slots, down to x4/x4 or x4/x4/x4/x4
- One slot is filled with the mentioned NVMe add-on card, and I am planning to add a dual 10GbE network card as well.
As my need for space is not huge, I would like to move towards NVMe drives going forward. I do not have enough drives to saturate 10GbE bandwidth with a big enough RAID, so moving to NVMe drives for speed/throughput (apart from size and energy efficiency) is the main driver.
The initial idea is to fill up the remaining PCIe lanes with NVMe cards and drives. If I add the 10GbE card and keep the HBA running (for now), I should have four x4 links left (either all in SLOT7 if I leave SLOT6 empty, or two and two if I use dual NVMe cards instead of one quad NVMe card) - so four NVMe drives. If I get to a point where I no longer need the HBA, this would open up lanes for two more NVMes.
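To sanity-check the lane math, here is a rough back-of-the-envelope sketch. The x8 figures for the HBA, the NIC and the existing add-on card are my own assumptions, not vendor-confirmed numbers:

```python
# Rough PCIe lane budget for the plan above (my assumptions: the onboard
# LSI3008 HBA, the planned dual 10GbE NIC and the existing AOC-SLG3-2M2
# are each counted as x8 devices).

CPU_LANES = 40  # Xeon E5-2640 v3

consumers = {
    "onboard LSI3008 HBA (kept for now)": 8,
    "dual 10GbE NIC (planned)": 8,
    "AOC-SLG3-2M2 dual M.2 card (existing)": 8,
}

remaining = CPU_LANES - sum(consumers.values())
print(f"lanes left for new NVMe: {remaining} -> {remaining // 4} x4 drives")

# If the HBA is retired later, its eight lanes come back:
print(f"without the HBA: {remaining + 8} -> {(remaining + 8) // 4} x4 drives")
```

This only counts raw CPU lanes, of course; whether they are all reachable depends on which slot each card ends up in, given the SLOT6/SLOT7 switching listed above.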
Does this idea make sense? Is there a way to utilize the HBA SAS controller with NVMe drives? Will I run into problems with passing through the NVMes to TrueNAS?
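For the passthrough question, this is a small sketch I would run on the Proxmox host first (it assumes the usual Linux sysfs layout and that VT-d/IOMMU is enabled); as far as I understand, each NVMe controller should sit in its own IOMMU group to be passed through cleanly to the TrueNAS VM:

```python
#!/usr/bin/env python3
"""List NVMe controllers and their IOMMU groups on the Proxmox host.

Sketch only: assumes the standard Linux sysfs layout and an enabled IOMMU.
An NVMe that shares its IOMMU group with other devices may not be usable
for individual PCI passthrough to the TrueNAS VM.
"""
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

for dev in sorted(PCI_DEVICES.iterdir()):
    # PCI class 0x010802 = mass storage controller, NVM Express
    if (dev / "class").read_text().strip() != "0x010802":
        continue
    group_link = dev / "iommu_group"
    if not group_link.exists():
        print(f"{dev.name}: no IOMMU group (is the IOMMU enabled?)")
        continue
    group_dir = group_link.resolve()
    peers = [d.name for d in (group_dir / "devices").iterdir() if d.name != dev.name]
    print(f"{dev.name}: IOMMU group {group_dir.name}, "
          f"shares group with: {', '.join(peers) if peers else 'nothing'}")
```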
I would appreciate some thoughts on the best setup to utilize the existing hardware. I would like to avoid moving to a new motherboard or CPU if possible.
Thanks in advance for any better ideas.