Hi guys,
Before I start getting completely off-track, I wanted to ask you for advice regarding the placement of SFF disks in my Supermicro 6028U-TR4+ server, which is based on the CSE829U chassis.
My current plan is to have the 12 LFF front bays connected to an HBA, which is then passed through to a storage VM. The plan is for the backup VMs and some VDI stuff to reside on this host. The VDI workload is far from complex, potentially just MS Office for a maximum of 10 users; given the chassis, I considered installing a 2-slot GPU for those tasks. Further, a small video streaming VM (Plex or an alternative) will reside on it as well, either sharing the one large GPU or getting a discrete second one. The data to be streamed resides on a vSAN with a 40GbE connection.
Why am I telling you all of this?
I'm currently facing more or less two issues:
1) The storage VM is based on ZFS, so read and write caches are nice to have. Given the incremental nature of most VM backups, a 2 TB write cache is likely to significantly speed up the backup process (I'm using Veeam, btw). While I could start sacrificing PCIe slots for M.2 NVMe drives or use them as physical attachment points for internal SFF mounts, this is not my preferred solution given the number of peripheral devices already required (2x SAS controllers, FC, 40GbE). Further, I'm not a big fan of M.2 sticks, as the PLP-capable ones are ridiculously expensive compared to U.2 or SAS3.
2) The ESXi datastore: While I could theoretically use the vSAN as the ESXi datastore, that comes with the risk of not being able to boot the backup server if the vSAN fails, so I don't think it's a good idea. Further, I plan on using RAID1 or RAID10 for the ESXi datastore, and the M.2 cards with RAID functionality are simply beyond my willingness to pay.
TL;DR: I think I need at least 3 SFF slots in the machine (1x ZFS, 2x ESXi). Following the ideas Supermicro and HPE have shown recently, namely the internal bays in the middle of the chassis, I considered 3D-printing some kind of drive mounts to install more or less above the DIMM banks. Has anyone here got experience with something like this?
The other alternative would be to add a 1U 8x SFF chassis (e.g. an SC113M) on top of the server and use it as a JBOD enclosure (I have the required Supermicro PCB to make the PSUs work). That way I could pass 4 of the bays through to the storage VM and use the other 4 for the ESXi datastore.
A completely different option would be to sell the 6028U, get an ATX motherboard and put everything in a large 836 chassis, where I could mount the SSDs pretty much wherever I want. This, however, is my least desired option.
Thanks for making it through this mammoth post; I'm more than interested to hear your opinions.
[I'm aware of the 2x SFF rear kit by Supermicro, but this would kill the 2nd GPU slot]