Creative 2.5" solution for Supermicro CSE829U


lte
Member · Apr 13, 2020
Hi guys,
before I start getting completely off-track, I wanted to ask for your advice regarding the placement of SFF disks in my Supermicro 6028U-TR4+ server, which is based on the CSE829U chassis.
My current plan is to have the 12 LFF front bays connected to an HBA, which is then passed through to a VM for storage. The plan is that the backup VMs and some VDI machines will reside on this host. The VDI workload is far from complex, potentially just MS Office for at most 10 users; given the chassis, I'm considering installing a 2-slot GPU for those tasks. Furthermore, a small video-streaming VM (Plex or an alternative) will reside on it as well, either tapping into one large GPU or getting a discrete second one. The data to be streamed resides on a vSAN with a 40GbE connection.

Why am I telling you all of this?
I'm currently facing more or less two issues:
1) The storage VM is based on ZFS, so read and write caches are nice to have. Given the incremental nature of most VM backups, a 2 TB write cache is likely to significantly speed up the backup process (I'm using Veeam, btw). While I could start sacrificing PCIe slots for M.2 NVMe drives or as physical attachment points for internal SFF mounts, this is not my preferred solution given the number of peripheral devices required (2x SAS controller, FC, 40GbE). Furthermore, I'm not a big fan of M.2 sticks, as the PLP-capable ones are ridiculously expensive compared to U.2 or SAS3.

2) The ESXi datastore: While I could theoretically use the vSAN as the ESXi datastore, this comes with the risk of not being able to boot the backup server in case the vSAN fails - not a good idea, I think. Furthermore, I plan on using RAID1 or RAID10 for the ESXi datastore, and the M.2 cards with RAID functionality are just out of my WTP.


TL;DR: I think I need at least 3 SFF slots (1x ZFS, 2x ESXi) in the machine. Following the ideas Supermicro and HPE have shown recently, namely the internal bays in the middle of the chassis, I've considered 3D-printing some kind of drive mounts to install more or less above the DIMM banks. Does anyone of you have experience with something like this?

The other alternative would be to add a 1U 8x SFF chassis (e.g. SC113M) on top of the server and use it as a JBOD enclosure (I have the required Supermicro PCB for the PSUs to work). That way I could tunnel 4 ports to the storage VM and use the other 4 ports as the ESXi datastore.

A completely different option would be to sell the 6028U, get an ATX motherboard and put it all in a large 836 chassis, where I could mount the SSDs pretty much wherever I want. This, however, is my least desired option.


Thanks for making it through this mammoth post, I'm more than interested to hear your opinions.


[I'm aware of the 2x SFF rear kit by Supermicro, but this would kill the 2nd GPU slot]
 

mbosma
Member · Dec 4, 2018
U.2 to PCIe adapters exist, just in case you didn't know about these:
https://www.amazon.com/StarTech-com-U-2-PCIe-Adapter-PEX4SFF8639/dp/B072JK2XLC
I know that won't solve your problem of losing PCIe slots, but at least you won't have to use M.2 SSDs.
In Europe I can get M.2 Samsung PM963 & PM983 960 GB drives for around €90-100, which isn't that expensive IMO.

The JBOD enclosure would require two extra add-on cards if you want to use PCIe passthrough.

Mounting the SSDs internally will probably work with some 3D printing.
Have you checked thingiverse.com?
I do have some experience with 3D modelling, but I don't have a 3D printer or a CSE829U to model it on.
 

Rand__
Well-Known Member · Mar 6, 2014
I haven't quite understood the whole goal (especially not sure re: 2U GPUs being usable for acceleration and Plex - there are very few, e.g. P600/620, let alone 2U high and 2 slots wide), but there are also M.2-to-U.2 adapters (13 bucks from Intel, for example, or included with some of the Optane U.2 drives).

Also not sure re: a 2 TB write cache for VMs - what are you envisioning there? An NVMe drive?
The only write cache that FreeNAS has is memory; the SLOG is not a write cache.
L2ARC might help of course, but that's read-only.
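Just to illustrate the distinction, here's a minimal sketch of how a log (SLOG) and a cache (L2ARC) vdev get attached to a pool - pool and device names are placeholders:

Code:
# SLOG: only ever holds in-flight sync writes; mirror it if you care about them
zpool add tank log mirror /dev/da12 /dev/da13
# L2ARC: read cache only, no redundancy needed
zpool add tank cache /dev/da14

Neither of these gives you a 2 TB write cache - async writes are aggregated in RAM and flushed straight to the data vdevs.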

What backplane does your box have?
 

lte
Member · Apr 13, 2020
Hi guys,
Thanks for making it through my text.

U.2 to PCIe adapters exist, just in case you didn't know about these:
https://www.amazon.com/StarTech-com-U-2-PCIe-Adapter-PEX4SFF8639/dp/B072JK2XLC
I know that won't solve your problem of losing PCIe slots, but at least you won't have to use M.2 SSDs.
In Europe I can get M.2 Samsung PM963 & PM983 960 GB drives for around €90-100, which isn't that expensive IMO.
Yeah, I'm aware of these and got a couple of them working in an Intel server with P3700s - works nicely as long as your motherboard supports bifurcation.
Thing is, the PM models don't have that great an endurance, and most importantly, while an NVMe U.2 drive would work for ZFS, I'd still require some kind of mirroring for the ESXi storage. Lastly, my namesake issue remains, namely where to place these SFF disks in the chassis.

Have you checked thingiverse.com?
Will have a look, thanks!



I haven't quite understood the whole goal (especially not sure re: 2U GPUs being usable for acceleration and Plex - there are very few, e.g. P600/620, let alone 2U high and 2 slots wide), but there are also M.2-to-U.2 adapters (13 bucks from Intel, for example, or included with some of the Optane U.2 drives).
I was thinking about a GRID card, due to its "divisibility" across multiple VMs / VDI machines.

Also not sure re: a 2 TB write cache for VMs - what are you envisioning there? An NVMe drive?
The only write cache that FreeNAS has is memory; the SLOG is not a write cache.
L2ARC might help of course, but that's read-only.
I was thinking of a ZIL here.
Based on past I/O traffic using a classic SAN, 1 GB/s is pretty much the maximum concurrent write load the backup machine will pull, so a 12G SAS device (roughly 1.2 GB/s usable per lane) should be sufficient for the purpose, I think.

I have the 12G SAS backplane without an expander, and connected the three SFF-8643 ports to an Adaptec 71605.
 

Rand__
Well-Known Member · Mar 6, 2014
Not sure there is a 2U GRID card? At least the older ones (K1/K2) were 3U. Maybe newer ones are smaller, but they incur license fees...
For non-graphics work, a fairly fast CPU (2.4 GHz+) should suffice for VDI (in my experience).

How do you envision sharing the VM pool? NFS or iSCSI? Synced or async?
If you come from this (https://www.servethehome.com/exploring-best-zfs-zil-slog-ssd-intel-optane-nand/), which is slightly inaccurate in itself, we are talking about the same thing - what you call ZIL is what I call SLOG, which is not a write cache.

Data is cached in memory; a (fairly small) copy of it can be written concurrently to the SLOG. You won't need more than a 100 GB SLOG, since it only ever holds a few seconds' worth of in-flight sync writes.

1 GB/s might work with enough parallelism from many users, but it will be closer to half that with fewer users (sync writes).
A good SAS3 SLOG might be the HGST 530 or 300 SSD series (https://napp-it.org/doc/downloads/napp-it_build_examples.pdf)

If you're not running an expander, and are not planning on NVMe, why not move 4 of the 3.5" drives to the other HBA/RAID controller (using 3.5"-to-2.5" adapter drive sleds)? Or are you in need of all 12 drive bays?
 

lte
Member · Apr 13, 2020
Not sure there is a 2U GRID card? At least the older ones (K1/K2) were 3U.
Pretty sure they are all 2 slots high.
[attached image: NVIDIA GRID card form-factor overview]


How do you envision sharing the VM pool? NFS or iSCSI? Synced or async?
Async, directly to Veeam as an NFS share.
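For reference, the knob that decides whether the SLOG ever gets touched is the sync property of the dataset backing the share - a rough sketch, dataset name is a placeholder:

Code:
zfs get sync tank/veeam          # check current behaviour
zfs set sync=standard tank/veeam # honour client sync requests (default); SLOG used for sync writes
zfs set sync=disabled tank/veeam # treat everything as async - fast, but no SLOG benefit and less safe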

If you come from this (https://www.servethehome.com/exploring-best-zfs-zil-slog-ssd-intel-optane-nand/), which is slightly inaccurate in itself, we are talking about the same thing - what you call ZIL is what I call SLOG, which is not a write cache.
Sorry, my bad - the ZIL is obviously stored on the SLOG.

You're correct on the write cache. I'm currently using a RAIDZ3 - should I alternatively move to an mdadm RAID6 with a hot spare and use the intended SLOG/ZIL device as a write cache there? From my experience, this is possible with mdadm.
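What I have in mind there is mdadm's write journal - roughly the following sketch, with placeholder device names (and assuming a reasonably recent kernel):

Code:
# RAID6 over 12 disks with one hot spare and an SSD write journal
mdadm --create /dev/md0 --level=6 --raid-devices=12 \
      --spare-devices=1 --write-journal=/dev/nvme0n1 \
      /dev/sd[b-n]
# iirc the journal can also be switched to write-back caching mode:
# echo write-back > /sys/block/md0/md/journal_mode

By default the journal mainly closes the RAID write hole; the write-back mode is what would act as an actual write cache.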


A good SAS3 SLOG might be the HGST 530 or 300 SSD series (https://napp-it.org/doc/downloads/napp-it_build_examples.pdf)
Got a couple of Ultrastar 1600s, I think they should be up for the job. (While the backups are write-heavy, they only occur once a day; the total backup size of the vSAN is approx. 40 TB.)

If you're not running an expander, and are not planning on NVMe, why not move 4 of the 3.5" drives to the other HBA/RAID controller (using 3.5"-to-2.5" adapter drive sleds)? Or are you in need of all 12 drive bays?
These are planned to be committed to the storage target for Veeam as a 12-disk RAIDZ3, which makes the zpool extremely unlikely to fail. The backup machine itself writes the backups to tape following a D2D2T strategy - hence my asking for a "creative" solution.
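Just to make the intended layout concrete - the Veeam target pool would be created roughly like this (pool and disk names are placeholders):

Code:
# 12-wide RAIDZ3: the pool survives any three simultaneous disk failures
zpool create backup raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11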
 

Rand__
Well-Known Member · Mar 6, 2014
2U high and 2 slots wide are different things ;)

But I see you have a UIO box with a riser card, so you can use 3U cards - and need to worry more about the number of slots they take. My bad, I hadn't checked your chassis in detail.

OK, how about something like this?
Intel has air ducts that have space for SSDs on top - you'd just need to use 1U heatsinks and maybe trim it to size.
There should be variations of that one; it was just the quickest to find.
 

lte
Member · Apr 13, 2020
2U high and 2 slots wide are different things ;)

I thought that with PCIe cards there is only full height and half height (the bracket), then the number of slots the card occupies, and the depth of the card - half length (~90% of cards) and full length (mostly GPUs and really weird stuff like AJA capture cards).

Guess I'm learning new stuff every day :D

Yeah, I was also thinking of something like this, 3D printed - thus my question:
Is it easier to get this stuff 3D printed out of some kind of heat-resistant material (not a thermoplastic, like most 3D printers use), or should I just go the lazy way and use an external chassis?
 

BlueFox
Legendary Member · Oct 26, 2015
[I'm aware of the 2x SFF rear kit by Supermicro, but this would kill the 2nd GPU slot]
You can fit 4 dual-slot GPUs in the chassis. If you only need two, you won't be losing out on anything with the rear-drive kit.
 

lte
Member · Apr 13, 2020
You can fit 4 dual-slot GPUs in the chassis. If you only need two, you won't be losing out on anything with the rear-drive kit.
Yeah, I know, but that would leave no remaining PCIe slots.
[attached image: PCIe slot layout of the chassis] (The top center x8 slot is in my case an x16, as I have a different riser installed.)
 

BlueFox
Legendary Member · Oct 26, 2015
Which riser are you using and what other expansion cards do you need aside from the GPUs?
 

Rand__
Well-Known Member · Mar 6, 2014
I thought that with PCIe cards there is only full height and half height (the bracket), then the number of slots the card occupies, and the depth of the card - half length (~90% of cards) and full length (mostly GPUs and really weird stuff like AJA capture cards).
No, that's correct.
2U [rack units] refers to the bracket 'size', which depends on the total height of the server in use (1U: only horizontal cards with a riser; 2U: horizontal or vertical depending on the risers; 3U+: usually only vertical cards). The slot count is the width in a regular PC (or with vertical mounting), and the depth stays the same regardless :)

I guess it was me confusing things due to the horizontal/vertical mixup.



...and what other expansion cards do you need aside from the GPUs?
this is not my preferred solution given the number of peripheral devices required (2x SAS controller, FC, 40GbE).

I think the first thing I'd do is evaluate whether you really need 2 GPUs, and which ones.
1. Plex will be happy with anything that can transcode; maybe something like a P620 will suffice, or a P2000 - both are 1-slot-wide GPUs, potentially leaving you some space under them.
2. For light office work you don't absolutely need a GPU - depending on the CPU this might be perfectly fine. As I mentioned, I found 2.4 GHz to be fine for day-to-day usage (VMware Horizon based: Office, light video, web browsing). Also, the older GRID cards (K1/K2), which are free wrt licensing, are no longer supported on ESXi 6.7 and newer. If you do have licenses, then of course this is no issue.

Based on that you can significantly change your build - or not ;)
 

BlueFox
Legendary Member · Oct 26, 2015
1x 40GbE, 1x SAS HBA Storage VM, 1x IO Controller ESXi Datastore (either 12G SAS or NVMe)
You could easily make that work with the right riser. 2.5" bays on the right side. GPU in the middle double-height slot. 40GbE in the middle LP slot. Second GPU goes on the left side and that leaves the two free slots for the HBAs.