I have the wonderful problem of some A2SDi-4C-HLN4F boards without 10Gbit networking in SC505-203B chassis, spare Intel X520-DA2 cards, and spare 2TB SSDs.
In my mind there's potential for a nice, low power Proxmox host or general purpose low power workhorse for point in time projects.
The SC505-203B can fit a HHHL PCIe card with both Supermicro 2x 2.5" drive cage spots in use, but it would be fun to cram 6x SSDs in for 10TB of raidz1.
Silverstone makes a drive cage for a 3.5" slot that can house 3x 2.5" SSDs (the SilverStone SDP08-E), but it has flanges that make it take up the full dimensions of a 3.5" drive, which rules out using a HHHL PCIe card alongside it.
Apart from buying a 3D printer and designing and printing a solution, or selling all the hardware to buy something with 10GbE onboard, has anyone got ideas on how this could all fit together in a non-rubber-band-and-superglue form?