I'm totally new to looking at enterprise 22110-length drives. My Gigabyte X570S Aero G has four M.2 slots that can accept that length. I'm running zVault on it (the community fork/continuation of TrueNAS Core). I plan to populate all four: two smaller drives (probably 480GB) as a mirrored boot pool for the OS, and two bigger ones (1.92TB) as a second mirror for things like bhyve VMs and database storage. Populating them all means I lose SATA4/SATA5 and the bottom PCIe slot, which is fine.
Do these drives need a thermal pad / heatsink? And I assume I should check whether each drive is double-sided and, if so, get a heatsink that covers both sides? This is a desktop ATX case with three front intake fans and one rear exhaust, so there's some airflow, but nowhere near what a proper server has.
And lastly, I'm not sure of the ideal placement for the drives. M2A_CPU has a direct link to the CPU, and the other three all go through the X570 chipset. The boot mirror will see the lowest activity and I don't care about bottlenecks there, but I want the two larger drives to have priority access, so I'd like to limit any bottleneck on them. I'm thinking of keeping the two boot drives on the chipset, then splitting the larger data drives: one on the CPU slot and the other on the chipset. Page 5 of the manual shows the block diagram. Or maybe it makes sense to run both larger drives off the chipset for more "equal" sharing? But then I suspect it could be slower overall.
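For reference, once the drives are in, this is roughly how I'd planned to sanity-check what each slot actually negotiated on a FreeBSD-based system like zVault (device names like `nvme0` are just placeholders for whatever enumerates on the board):

```shell
# List NVMe controllers and the drives attached to them
nvmecontrol devlist

# Show PCI capabilities for a controller; the "link" line reports
# negotiated lane width/speed, e.g. "link x4(x4) speed 8.0(8.0)"
pciconf -lc nvme0

# SMART/health log page 2 includes the composite temperature,
# handy for judging whether the heatsinks are doing enough
nvmecontrol logpage -p 2 nvme0 | grep -i temp
```

Comparing the `pciconf` output for the CPU-attached slot versus the chipset slots would at least confirm whether any slot silently dropped to fewer lanes.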