Hi folks!
At work we have an older Nexenta ZFS rig, serving mostly iSCSI to ESXi blades, that my boss is getting worried about: we have no "support", the drives are getting old, and the people who built it made a poor zpool ashift choice (the default ashift=9 rather than forcing 12), so we are now effectively locked into 512-byte-sector drives.
I'm seeing hints that the boss might be okay with a new build, so I'm trying to spec something up while stepping away from Nexenta to FreeNAS (and, hopefully, safely migrating the iSCSI zvols to a brand-new zpool, since exporting and importing the pool directly unfortunately retains the ashift setting...).
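Before committing to anything I'd want to double-check the ashift story on both ends. A minimal sketch of the checks (commands are only printed here, not executed; "tank" and "newpool" are placeholder pool names, not the real ones):

```shell
# Hedged sketch of the ashift sanity checks; pool names are placeholders.
# On the old Nexenta box, confirm the pool really is stuck at ashift=9:
CHECK_OLD='zdb -C tank | grep ashift'        # expect "ashift: 9"
# On FreeBSD-based FreeNAS, force a 4K minimum before creating the new pool:
FORCE_4K='sysctl vfs.zfs.min_auto_ashift=12'
# Then verify the freshly created pool came out right:
CHECK_NEW='zdb -C newpool | grep ashift'     # expect "ashift: 12"
printf '%s\n%s\n%s\n' "$CHECK_OLD" "$FORCE_4K" "$CHECK_NEW"
```

The point being that ashift is fixed per top-level vdev at creation time, so the new pool has to be forced to 12 up front or we're right back where we started.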
Does the following sound vaguely credible, both in a parts sense and in a "can I badger a Supermicro reseller to customize this far" sense?
The build basis is a Supermicro 6048R-E1CR60L, a top-loading 60-disk chassis normally sold as a complete system only.
Supermicro | Products | SuperStorage Servers | 4U | 6048R-E1CR60L
2x Xeon E5-2637 v4 3.5GHz 4 core
2x 64GB LRDIMM (1 per CPU) (allowing a later LRDIMM buildout, rather than locking in with RDIMM)
Add the optional NVMe U.2 drive cage (which connects to CPU1's OCuLink ports, but doesn't that mean only 4x U.2 drives?)
Add the SIOM network card (thinking the quad-port 10G SFP+ Intel X710-based one)
Here's where things get weird. Note the system name ends with E1; Supermicro's naming scheme implies this uses a single-path backplane. Their separate backplane literature for the 30-disk backplanes, however, lists BPN-SAS3-946SEL1/EL2, which implies there is a proper EL2 dual-path version of the backplane (the illustrations seem to confirm this).
https://www.supermicro.com.tw/manuals/other/BPN-SAS3-946SEL.pdf
So one potential configuration is the typical cascading backplane setup. The illustrations imply dual path from a single HBA can be done with just the 2 connectors on the prebuilt system's mezzanine SAS HBA. But the system only has a single mezzanine slot for an HBA, and the HBA itself doesn't seem stackable, so I don't understand how they do dual path on the prebuilt system. I guess they use 2 HBAs on other motherboards or SBB modules?
So the bright idea I had is to use the normal PCIe slots and fit two Supermicro AOC-S3216L-L16iT SAS3 HBAs, which each have 4 miniSAS connectors, to dual-path to both backplanes. Cables galore, though, and I suppose just the mezzanine card plus a lower-spec 2-connector HBA in one PCIe slot would cover the cascading setup.
Super Micro Computer, Inc. - Products | Accessories | Add-on Cards | AOC-S3216L-L16iT
This all assumes I could badger the Supermicro reseller into fitting 2 BPN-SAS3-946SEL2 backplanes into the system; otherwise there are no dual-path options.
The other weird part of the setup is using the NVMe U.2 drive cage to host some Optane drives for the ZIL (SLOG), and wrapping that up with an Amfeltec Squid PCIe carrier board hosting up to 4 M.2 drives for L2ARC in the last remaining conventional PCIe slot.
PCI Express Gen 3 Carrier Board for 4 M.2 SSD modules - Amfeltec
HDD selection will probably be 4TB SAS, to increase drive count and cut down on cost. The chassis also has an additional 2x 2.5" SATA drive holder, which I would like to use for the OS ZFS syspool.
Crazy, or just entirely dependent on my reseller relationship?
PS -- Also, any tips for forklifting the Nexenta zvols backing iSCSI to a new zpool on FreeNAS and configuring the iSCSI targets? I figure the easiest thing is to pull all the zpool HDDs from the old server, temporarily put them in the new server, import the pool minus the old ZIL, copy the zvols over more or less manually, then string up the new iSCSI configuration. But theory is not reality, so...
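The "copy the zvols manually" step I'm picturing is per-zvol zfs send/recv. A hedged dry-run sketch that only prints the commands it would run (pool names "oldpool"/"newpool" and the zvol list are placeholders; on the real box the list would come from `zfs list -H -o name -t volume -r oldpool`):

```shell
# Dry-run generator: print the per-zvol migration commands, don't run them.
# "oldpool"/"newpool" and ZVOLS are placeholders for illustration only.
OLD=oldpool
NEW=newpool
ZVOLS="oldpool/iscsi/lun0 oldpool/iscsi/lun1"
CMDS=""
for z in $ZVOLS; do
  tgt="$NEW/${z#$OLD/}"      # keep the same dataset layout on the new pool
  CMDS="$CMDS
zfs snapshot ${z}@migrate
zfs send -p ${z}@migrate | zfs recv -u $tgt"
done
printf '%s\n' "$CMDS"
```

The -p on send carries the zvol properties (volblocksize etc.) across, and since the receive lands on the new pool the data gets rewritten at the new ashift, which is the whole point. The iSCSI target/extent config would still have to be redone by hand in the FreeNAS GUI against the new zvol paths.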