EMC KTN-STL3 15-bay chassis


EugenPrusi

New Member
Nov 6, 2023
If you want to know more about ZFS and how it works, meaning the nuts and bolts, Ars Technica wrote an article about three years ago that was quite good. I will link it here: Understanding ZFS storage. That is a Big Gulp of information, but to keep it simple for starters, you can just use TrueNAS Scale if you also want to run containers and virtual machines as well. Basically appliance software on your NAS. As far as getting started, you can purchase something from iXsystems or build something of your own. One of the simplest ways, to me, is just purchasing an older Dell PowerEdge, something like the R730 or R730XD with V4 processors like the 2650 V4, which are 90-watt 10-cores, and an HBA330 controller inside of it. You can keep it simple and just use an old machine of yours and put some drives in it. You can use something like LabGopher to find older PowerEdge deals; it gathers all the eBay data and presents it in a way you can filter out what you want. Just some ideas. I personally just use a PowerEdge; keeps it simple.
Hi, you are right. Thanks for the recommendation! I will definitely read the article about ZFS on Ars Technica. I'll also look into using TrueNAS Scale for the setup. Thanks for the tips!
 

AlistairM

New Member
Nov 2, 2023
SATA vs SAS - Other people on here have mentioned that SATA drives can be used, provided they are plugged into the 'bottom controller'. What is the 'bottom controller' please? In my head, I'm imagining that this means that you must use the bottom two of the four ports when connecting the JBOD.
jbod.png
 

bonox

Member
Feb 23, 2021
Background: There's an A controller and a B controller. I think they're labelled as such by the arrows in the middle of the box, on the left side of the picture you posted.

Each controller is connected to one of the two signal paths to a SAS HDD. SATA has only one path, so a SATA drive will be connected to only one controller, or will have an interposer capable of making two paths appear as one to the disk. There are separate part numbers for SAS and SATA interposers, so there may be a point of difference there specifically for SATA disks. Why are there two SAS ports per controller card? One's an uplink to the HBA, the other a downlink to another downstream 'daisy-chained' shelf.

TL;DR: if the drive doesn't show up, connect it to the other controller card instead.
 

BrassFox

New Member
Apr 23, 2023
If you want to know more about ZFS and how it works, meaning the nuts and bolts, Ars Technica wrote an article about three years ago that was quite good. I will link it here: Understanding ZFS storage. That is a Big Gulp of information, but to keep it simple for starters, you can just use TrueNAS Scale if you also want to run containers and virtual machines as well. Basically appliance software on your NAS. As far as getting started, you can purchase something from iXsystems or build something of your own. One of the simplest ways, to me, is just purchasing an older Dell PowerEdge, something like the R730 or R730XD with V4 processors like the 2650 V4, which are 90-watt 10-cores, and an HBA330 controller inside of it. You can keep it simple and just use an old machine of yours and put some drives in it. You can use something like LabGopher to find older PowerEdge deals; it gathers all the eBay data and presents it in a way you can filter out what you want. Just some ideas. I personally just use a PowerEdge; keeps it simple.
This is very helpful. Thank you.
 

Towerjockey

New Member
Feb 5, 2024
New to this forum; I have read these 10 pages 4 to 5 times now. I'm looking for definitive advice for deploying 3 of the Dell EMC KTN-STL3 (EM30) disk shelves. I have a Dell PowerEdge R730XD server running TrueNAS Scale with an LSI 9201-16e, which has 4 external x4 ports on a full-height card flashed to IT mode. I am populating my arrays with 6TB SAS 7.2K HDDs. They came in caddies, 5 drives to a box, from Dell EMC, with part number 303-115-003D on the interposer board. The HDDs themselves are Seagate Enterprise Capacity 3.5" HDD v4 6000GB, model ST6000NM0014. Everything except the Dell server is brand spanking new: new arrays, new rack rails, new bezels, new drives. I'm looking for the best way to cable these to my server. My use case is primarily a Plex server.

My server is a Dell PowerEdge R730xd with 12 x 3.5 in. front bays loaded with 12TB SATA 7.2K drives; 2 x 2.5 in. rear flex bays containing 2 x 1TB SSDs in RAID 0 for the OS (TrueNAS Scale); 2 x Intel Xeon 2695 v4 CPUs; 384 GB RAM; an Nvidia Quadro P2000 5 GB GPU; an LSI 9201-16e 6Gbps 16-lane external HBA with P20 IT-mode firmware; a Supermicro AOC-SLG3-2M2 PCIe add-on card with two 1TB NVMe SSDs (in a mirror, for apps); the Dell H730P Mini RAID controller; a daughter card with 2 x 1GbE RJ45 and 2 x 10GbE ports; and a QLogic 2-channel fiber controller.

Thank you in advance for any and all constructive advice. I am very grateful to those of you who are willing to spend some of your valuable time sharing your knowledge. It is very much appreciated, and I will pay it forward by sharing with someone who asks.
 

BrassFox

New Member
Apr 23, 2023
New to this forum; I have read these 10 pages 4 to 5 times now. I'm looking for definitive advice for deploying 3 of the Dell EMC KTN-STL3 (EM30) disk shelves. I have a Dell PowerEdge R730XD server running TrueNAS Scale with an LSI 9201-16e, which has 4 external x4 ports on a full-height card flashed to IT mode. I am populating my arrays with 6TB SAS 7.2K HDDs. Everything except the Dell server is brand spanking new: new arrays, new rack rails, new bezels, new drives. I'm looking for the best way to cable these to my server. My use case is primarily a Plex server.

My server is a Dell PowerEdge R730xd with 12 x 3.5 in. front bays loaded with 12TB SATA 7.2K drives; 2 x 2.5 in. rear flex bays containing 2 x 1TB SSDs in RAID 0 for the OS (TrueNAS Scale); 2 x Intel Xeon 2695 v4 CPUs; 384 GB RAM; an Nvidia Quadro P2000 5 GB GPU; an LSI 9201-16e 6Gbps 16-lane external HBA with P20 IT-mode firmware; a Supermicro AOC-SLG3-2M2 PCIe add-on card with two 1TB NVMe SSDs (in a mirror, for apps); the Dell H730P Mini RAID controller; a daughter card with 2 x 1GbE RJ45 and 2 x 10GbE ports; and a QLogic 2-channel fiber controller.

Thank you in advance for any and all constructive advice. I am very grateful to those of you who are willing to spend some of your valuable time sharing your knowledge. It is very much appreciated, and I will pay it forward by sharing with someone who asks.
Other than being a painfully long list of all your accumulated hardware for some reason, I’m not sure what you are asking. Is it what cables to use?

8088 ends to the shelves, other end to whatever it is that your HBA wants. Probably 8644 ends.

You have SAS drives, so you probably want two cables per shelf to fully utilize those, meaning that a quad-port HBA card can talk to only two shelves directly using twin links. Three shelves present some choices (rough bandwidth numbers in the sketch after this list). To run three you'll need one of the following:

1) Daisy chain shelf to shelf (8088 to 8088) to get the third shelf, but you will lose some bandwidth on the chained shelves. 4 x 8088-8644 plus 2 x 8088-8088 daisy-chain cables. This is probably the best way, but it compromises performance a bit, and only if/when you are accessing all 30 drives on the daisy-chained shelves at once (which will probably never happen).

2) Try to live with one cable per shelf on two shelves, so the SAS drives on the single-cable shelves become single link (like SATA): 4 x 8088-8644, one cable per shelf for two shelves, twin links on one. But this may throw error events that don't really mean much.

3) Live with single links to all three: 3 x 8088-8644. Might still throw meaningless hardware errors, but also all three drive shelves are in harmony; all have the same connections.

4) Direct link one and daisy chain two: 2 x 8644-8088, 4 x 8088-8088. No errors, but more reduced bandwidth. This may actually prove to be the best way, though I would need to test them to know; all have twin links and all are on the daisy chain too. Consistency. Use every other port on the HBA, to split the load across the SAS chips.

5) Get a second HBA for more ports (at least two more ports) if you have another slot available to do it. With this you can have twin links to all three shelves, and access all three of them at max bandwidth. Consistency plus max bandwidth to all three.

There is nothing wrong with daisy chain though, and I doubt you’ll notice the speed hit in real life use (option 4).

6) Buy bigger drives, live with two shelves.
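
To put rough numbers on these trade-offs, here's a back-of-the-envelope sketch in Python. All the figures are assumptions, not measurements: 6 Gb/s SAS lanes at roughly 600 MB/s usable after 8b/10b encoding, x4 wide ports per cable, and about 200 MB/s sequential per 7.2K drive.

```python
# Back-of-the-envelope numbers for the options above.
# Assumptions, not measurements: 6 Gb/s SAS lanes (~600 MB/s usable
# after 8b/10b encoding), x4 wide ports per cable, ~200 MB/s
# sequential per 7.2K nearline drive, 15 bays per KTN-STL3 shelf.

LANE_MBPS = 600
LANES_PER_PORT = 4
DRIVE_MBPS = 200
DRIVES_PER_SHELF = 15

port_mbps = LANE_MBPS * LANES_PER_PORT          # 2400 MB/s per cable
shelf_demand = DRIVE_MBPS * DRIVES_PER_SHELF    # 3000 MB/s worst case

for name, cables in [("single link", 1), ("dual link", 2)]:
    supply = port_mbps * cables
    verdict = "covers" if supply >= shelf_demand else "falls short of"
    print(f"{name}: {supply} MB/s per shelf, {verdict} "
          f"{shelf_demand} MB/s worst-case demand")

# A daisy-chained shelf shares its upstream cables with the shelf
# ahead of it, so two chained shelves streaming at once split that
# supply between them; hence the "all 30 drives" caveat in option 1.
```

On those rough numbers a dual-linked shelf has headroom even with all 15 drives streaming flat out, while option 1's daisy-chain penalty only bites when both chained shelves are saturated at once.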
 

Fiberton

New Member
Jun 19, 2022
I concur with everything you said. Only one change to what he wants to do: I would not use TrueNAS with it unless it is version 22.12.1. In 22.12.2, iXsystems added enclosure software for their own enclosures, which makes the EMC arrays randomly hit 100% fan speed. I asked about getting it figured out by them and was basically told to buzz off.

The other option is using Proxmox and installing TrueNAS in a VM, then connecting the drives to TrueNAS through Proxmox. What that does is remove TrueNAS's ability to talk to the enclosure itself through the HBA, although it may be harder for someone who has never used such things, and there is a limitation on how many drives you can do this with. What you do is add the onboard HBA on the R730 via hardware passthrough to the TrueNAS VM; that will pull all the drives inside the R730 into the VM. For the flex bay with the two SSDs in the rear, you just buy an HBA to plug into the rear flex bay and use those as the Proxmox drives, which would be separate and outside of the onboard HBA controller. This might be far too much stuff for a newer type of person to do. Going with TrueNAS 22.12.1 would be a better route, or just use Ubuntu. Unless you are OK with the enclosures hitting 100% fan speed; if you do not care about that, then you are golden.
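
If you go the Proxmox route, step one of the HBA passthrough described above is finding the card's PCI address. Here is a minimal sketch, assuming a Linux/Proxmox host with lspci available; the keyword list is an assumption, so adjust it to whatever your card reports:

```python
# Hypothetical helper for the Proxmox route: list candidate SAS HBAs
# for PCI passthrough by parsing `lspci -D` output (the -D flag adds
# the PCI domain, e.g. "0000:03:00.0"). The keyword list is an
# assumption; adjust it to match your actual card.
import subprocess

def find_sas_hbas(keywords=("LSI", "SAS", "Broadcom")):
    out = subprocess.run(["lspci", "-D"], capture_output=True,
                         text=True, check=True).stdout
    return [(line.split()[0], line)
            for line in out.splitlines()
            if any(k in line for k in keywords)]

for addr, desc in find_sas_hbas():
    print(addr, "->", desc)
# The printed address is what goes into the VM's hostpci entry,
# e.g. `qm set <vmid> -hostpci0 0000:03:00.0`.
```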
 

BrassFox

New Member
Apr 23, 2023
If you want to know more about ZFS and how it works, meaning the nuts and bolts, Ars Technica wrote an article about three years ago that was quite good. I will link it here: Understanding ZFS storage. That is a Big Gulp of information, but to keep it simple for starters, you can just use TrueNAS Scale if you also want to run containers and virtual machines as well. Basically appliance software on your NAS. As far as getting started, you can purchase something from iXsystems or build something of your own. One of the simplest ways, to me, is just purchasing an older Dell PowerEdge, something like the R730 or R730XD with V4 processors like the 2650 V4, which are 90-watt 10-cores, and an HBA330 controller inside of it. You can keep it simple and just use an old machine of yours and put some drives in it. You can use something like LabGopher to find older PowerEdge deals; it gathers all the eBay data and presents it in a way you can filter out what you want. Just some ideas. I personally just use a PowerEdge; keeps it simple.
I am slowly taking the dive down the ZFS rabbit hole. After looking carefully at the various options/leads that you gave here, I wound up deciding to build my own from scratch, a seemingly rare beast: a 10th-gen Xeon with QuickSync. I wanted ECC plus QuickSync for ZFS and Plex on the same system, and managed to find that without breaking the bank. I hunted down a nice Supermicro motherboard that can do both, and the corresponding 10-core W-1290P Xeon CPU too; I'm still waiting on the CPU and some other parts to arrive before starting the build. Which will be the easy part. Then comes my big (OS/software) rabbit hole: breaking my Windows/StableBit habit by attempting to set up a Linux system with ZFS.

Even if I give up on that scheme and run back to Windows in frustration from the unknown, that hardware should be a big upgrade, for Plex anyway. Otherwise it's actually a slightly weaker system vs. my current Ryzen + ECC + Nvidia GPU rig. But I am throwing Blue Iris into this mix too, so I may need to run two systems, depending upon how it all goes. Anyway, I will be looking hard at your ZFS advice again and giving that a go soon. Thanks again.
 

BrassFox

New Member
Apr 23, 2023
I concur with everything you said. Only one change to what he wants to do: I would not use TrueNAS with it unless it is version 22.12.1. In 22.12.2, iXsystems added enclosure software for their own enclosures, which makes the EMC arrays randomly hit 100% fan speed. I asked about getting it figured out by them and was basically told to buzz off. The other option is using Proxmox and installing TrueNAS in a VM, then connecting the drives to TrueNAS through Proxmox. What that does is remove TrueNAS's ability to talk to the enclosure itself through the HBA, although it may be harder for someone who has never used such things, and there is a limitation on how many drives you can do this with. What you do is add the onboard HBA on the R730 via hardware passthrough to the TrueNAS VM; that will pull all the drives inside the R730 into the VM. For the flex bay with the two SSDs in the rear, you just buy an HBA to plug into the rear flex bay and use those as the Proxmox drives, which would be separate and outside of the onboard HBA controller. This might be far too much stuff for a newer type of person to do. Going with TrueNAS 22.12.1 would be a better route, or just use Ubuntu.
This is yet more useful information from you, thank you.

Hey, to the original guy with the questions:
forget my smartass remarks about the long, proud hardware list. If you hadn't posted it, someone probably would've asked you for it anyway.
 

BrassFox

New Member
Apr 23, 2023
…or just use Ubuntu.
After digesting all of that, I think I will try this (Ubuntu) before (hopefully not) foregoing ZFS and running back to Windows. I don't need yet another endless tinker, and those Evil Machine Company fans, which could each power a vacuum cleaner, are already pissing me off.

That was my last and latest endless tinker (the fans), and thus far the damn fans are winning. I tried the latest PWM PSUs (slightly better, but still far too loud) and have also tried various hotwires: several buck converters, resistors, pots… plus I attempted to access the fan controller itself directly via serial bus comms, soldering my leads to the pins of that little service access port out the side of the PSUs… and none of this worked, to my great dismay. The serial port would probably work if I could get my hands on some EMC service tech apps, but alas (I tried, they laughed).
 

Towerjockey

New Member
Feb 5, 2024
No biggie, I was trying to be thorough and provide as much info as possible, so people could be informed and my question wouldn't be so vague without context as to what system things are being plugged into, etc. My HBA accepts SFF-8088 cables. Since I have no plan to use the QLogic 2-channel fiber controller, I can grab another LSI 9201-16e, as they aren't too expensive, and run port 1 on card 1 to array 1, port 3 on card 1 to array 2, and port 1 on card 2 to array 3.

now I’m a little confused as does TrueNas Scale take advantage of the dual link topology or should I stay with just the single Link to each array and do I actually complete a loop when I do either the single or dual link set up?
 

BrassFox

New Member
Apr 23, 2023
In 22.12.2, iXsystems added enclosure software for their own enclosures, which makes the EMC arrays randomly hit 100% fan speed.
This behavior likely contains a big fat clue as to how the Evil Machine Company controls those fans in their VNX controller systems: via the SAS cable link. I'm not attempting to chase down that clue myself, but maybe someone with the requisite coding skill will see this and give it a go. Or maybe we could start a pool to fund the hiring of a software nerd to solve this damn fan thing.
 

BrassFox

New Member
Apr 23, 2023
No biggie, I was trying to be thorough and provide as much info as possible, so people could be informed and my question wouldn't be so vague without context as to what system things are being plugged into, etc. My HBA accepts SFF-8088 cables. Since I have no plan to use the QLogic 2-channel fiber controller, I can grab another LSI 9201-16e, as they aren't too expensive, and run port 1 on card 1 to array 1, port 3 on card 1 to array 2, and port 1 on card 2 to array 3.

now I’m a little confused as does TrueNas Scale take advantage of the dual link topology or should I stay with just the single Link to each array and do I actually complete a loop when I do either the single or dual link set up?
If you want to take full advantage of owning SAS drives then you need both SAS channels on each shelf, so six 8088-8088 cables: ports 1+2 to shelf 1, ports 3+4 to shelf 2, and so on.

If you don’t care about the second SAS link then run all three shelves off your existing card, using three cables. Nothing wrong with this.

If you want the double link on each (I would), then you do need another card, and may want to make the jump to an LSI 93xx card. I am very familiar with my 9206-16e cards, and I have learned that more functions exist in LSI MegaRAID that the 92xx-generation cards cannot do. I recently bought a 9361-8 to fiddle with and find out what I've been missing, but have not done it yet, so I don't have a complete answer. It may be nothing of any big value, but it's hard to say. This is a weird off-label use case, so there is not much info available; the best of it that I've been able to find is all right here in this thread. But not much about MegaRAID; I don't think many people are messing with that.

Are you stuck with PCIe 2.0 on that Dell? The LSI 93xx card I bought was cheap (like 40 bucks) because it only has two cable channel sets, vs. the usual four. But in your case with the three shelves, it may be perfect if you have PCIe 3.0. And if it does bring some new added capability, then just replace your 92xx-16 later with a 93xx-16 and get the new goodness for all three shelves.
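
For what it's worth on the PCIe question, here is a quick sanity check of whether the slot or the HBA's SAS ports run out of bandwidth first. The per-lane figures are approximate, with encoding overhead folded in; a sketch, not benchmark data:

```python
# Rough check of which side bottlenecks first: the PCIe slot or the
# HBA's external SAS ports. Approximate per-lane figures with
# encoding overhead already folded in.
PCIE_MBPS_PER_LANE = {"2.0": 500, "3.0": 985}   # 8b/10b vs 128b/130b
SAS_PORT_MBPS = 4 * 600      # one x4 wide port of 6G lanes = 2400 MB/s
HBA_PORTS = 4                # e.g. an LSI 9201-16e

for gen, per_lane in PCIE_MBPS_PER_LANE.items():
    slot_mbps = per_lane * 8                    # x8 slot
    ports_covered = slot_mbps / SAS_PORT_MBPS
    print(f"PCIe {gen} x8: ~{slot_mbps} MB/s, feeds "
          f"~{ports_covered:.1f} of {HBA_PORTS} saturated SAS ports")
```

By that arithmetic, a quad-port 6G HBA in a PCIe 2.0 x8 slot is slot-limited at roughly two fully-busy ports, while PCIe 3.0 nearly covers all four.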
 

BrassFox

New Member
Apr 23, 2023
now I’m a little confused as does TrueNas Scale take advantage of the dual link topology or should I stay with just the single Link to each array and do I actually complete a loop when I do either the single or dual link set up?
I have no direct Linux experience to offer you, and most people are using SATA drives, so you may be venturing into new territory with those SAS drives. But my gathered wisdom from others is that dual link just works in Linux, and also works in Windows Server with the right driver, but it was purposely nerfed in Windows 10. It can be set up to work in W10, but you will lose that mod again at every boot, meaning that you would need to re-do the dual-link mod after every reboot, while ALSO religiously remembering to disconnect all the drives first, before rebooting, because each drive will initially present to Windows 10 as two drives. If you let it do that, then it'll likely **** up all your data in short order. For that reason I haven't tried it, and since I'm already invested in SATA drives, likely never will.

Read up on “multipath drivers” and you’ll find what I found. If strictly Linux then it should work okay, as I understood things. No promises though.
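
One way to see what multipath looks like before trusting it with data: a dual-linked SAS drive shows up as two /dev/sdX nodes carrying the same WWID, one per shelf controller. Here is a minimal sketch that groups SCSI disks by WWID via sysfs (assumes Linux; the sysfs layout is the common one, but verify on your own box):

```python
# Minimal sketch: spot dual-path SAS drives on Linux by grouping
# SCSI disks by WWID from sysfs. A drive cabled through both shelf
# controllers shows up as two /dev/sdX nodes sharing one WWID;
# those pairs are what dm-multipath would fold into a single device.
import glob
from collections import defaultdict

paths = defaultdict(list)
for wwid_file in glob.glob("/sys/block/sd*/device/wwid"):
    dev = wwid_file.split("/")[3]          # e.g. "sda"
    with open(wwid_file) as f:
        paths[f.read().strip()].append(dev)

for wwid, devs in sorted(paths.items()):
    label = "dual-path" if len(devs) > 1 else "single-path"
    print(f"{wwid}: {', '.join(devs)} ({label})")
```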

I have three EMC shelves too, by the way. I could run all three off my HBA card but my solution was to run the third shelf on a second machine. My third is a backup pool that mostly stays off. So that makes a lot of sense, for me. I don’t need to run all three at once and really, can run all my needed stuff off one shelf if I jam it full.
 

Fiberton

New Member
Jun 19, 2022
I am slowly taking the dive down the ZFS rabbit hole. After looking carefully at the various options/leads that you gave here, I wound up deciding to build my own from scratch, a seemingly rare beast: a 10th-gen Xeon with QuickSync. I wanted ECC plus QuickSync for ZFS and Plex on the same system, and managed to find that without breaking the bank. I hunted down a nice Supermicro motherboard that can do both, and the corresponding 10-core W-1290P Xeon CPU too; I'm still waiting on the CPU and some other parts to arrive before starting the build. Which will be the easy part. Then comes my big (OS/software) rabbit hole: breaking my Windows/StableBit habit by attempting to set up a Linux system with ZFS.

Even if I give up on that scheme and run back to Windows in frustration from the unknown, that hardware should be a big upgrade, for Plex anyway. Otherwise it's actually a slightly weaker system vs. my current Ryzen + ECC + Nvidia GPU rig. But I am throwing Blue Iris into this mix too, so I may need to run two systems, depending upon how it all goes. Anyway, I will be looking hard at your ZFS advice again and giving that a go soon. Thanks again.
For a Plex GPU, a Tesla P4 is a cheap card that does not have limited streams. You can add cooling very cheaply. About the advice, no worries.
 

Fiberton

New Member
Jun 19, 2022
After digesting all of that, I think I will try this (Ubuntu) before (hopefully not) foregoing ZFS and running back to Windows. I don't need yet another endless tinker, and those Evil Machine Company fans, which could each power a vacuum cleaner, are already pissing me off.

That was my last and latest endless tinker (the fans), and thus far the damn fans are winning. I tried the latest PWM PSUs (slightly better, but still far too loud) and have also tried various hotwires: several buck converters, resistors, pots… plus I attempted to access the fan controller itself directly via serial bus comms, soldering my leads to the pins of that little service access port out the side of the PSUs… and none of this worked, to my great dismay. The serial port would probably work if I could get my hands on some EMC service tech apps, but alas (I tried, they laughed).

I am about to try my latest bright idea: a "PWM fan spoofer" on the newest PSUs. It's a gizmo meant for crypto-miners that sends back a tach signal to match the fans' call signal. If that magically works, and also prevents the shelves from shutting themselves down an hour later (although at this point I anticipate it will not), then I would wire and run the EMC food-processor fans from the motherboard directly, controlling and hushing them from there.

If it doesn't work, and I fear that the EMC PSUs are monitoring each fan's current draw too, then I'm done with them. I will wheel the entire damn rig into the garage, where it can make all the noise that it wants while bothering nobody. I still wouldn't be happy about its needless extra power use, but at least it'll do double duty as a room-sized air filter, as it does now; I've rigged up HVAC filters as intake air filters, after making the rack airtight. Not that I really need a 200-watt air cleaner running in my garage, or anyplace else, but I can convince myself to pretend that I do.

Those damn fans really have been an endless tinker. If the PWM spoofer actually works, I'll let you know what I did here. It might help you solve your own recent issues.
Will be interesting to see what you come up with.
 

BrassFox

New Member
Apr 23, 2023
For a Plex GPU, a Tesla P4 is a cheap card that does not have limited streams. You can add cooling very cheaply. About the advice, no worries.
Yeah I know.
Right now my Plex rig runs an RTX 3060, which I chose because I had one handy, but also because it is well suited: that one GPU model oddly comes with a silly amount of VRAM. But mostly it is entirely under-utilized and just eats up power.

Nvidia has opened up max transcodes recently, and there is a hack for that too, but that is not my problem anyway. I don't have anyone outside the house using my Plex, and am not (nor do I have designs to become) one of these dudes selling Plex server access, trying to be a mini-Netflix. Not outside of my own house, anyway. So I don't need to transcode a dozen Plex streams.

Blue Iris is a different matter though. For Plex, I do want to ditch the GPU and have that efficient QuickSync goodness, man. But also, Blue Iris (cam software) reportedly very much needs QuickSync to run well. I have an old i5-7400 gathering dust that can transcode for Plex beautifully, but no ECC RAM is available for that rig due to Intel's market-segmenting villainy. Hence my current Ryzen ECC + GPU Plex rig. The only reason I am making changes to it is Blue Iris, and ZFS. Maybe.

So likely my plan will be to run ZFS and Plex off the 10th-gen Xeon with ECC and QS, and as I add Blue Iris and then camera load to that, I will see how it goes. I might end up offloading Plex or Blue Iris to the i5 if all of them seem like too much for the one Xeon to do, while keeping the storage mated to the ECC machine. That's my tentative plan. I can also run a Ryzen (I have a bunch, including a very capable Threadripper) for ECC storage only, and run the others on the Xeon. I have no feel for how much processing power ZFS really needs yet, but I'm thinking probably not Threadripper-grade power.

I may yet decide ZFS is too complicated for me to learn well and stick with Windows, an entirely possible outcome. Not to say that I think myself incapable of learning Linux/ZFS/whatever else, but this is a hobby and I still need to work… there are only so many hours in a day.

Also, since the Supermicro board I picked out has IPMI, I should be able to forgo installing a GPU in it entirely. I think. Looking to calm down my 24/7 electricity usage. Starting with those damn fans.
 

Fiberton

New Member
Jun 19, 2022
No biggie, I was trying to be thorough and provide as much info as possible, so people could be informed and my question wouldn't be so vague without context as to what system things are being plugged into, etc. My HBA accepts SFF-8088 cables. Since I have no plan to use the QLogic 2-channel fiber controller, I can grab another LSI 9201-16e, as they aren't too expensive, and run port 1 on card 1 to array 1, port 3 on card 1 to array 2, and port 1 on card 2 to array 3.

now I’m a little confused as does TrueNas Scale take advantage of the dual link topology or should I stay with just the single Link to each array and do I actually complete a loop when I do either the single or dual link set up?
TrueNAS Scale does take advantage of it. I would use two cables per enclosure. If you have 3 enclosures and want max bandwidth, buy another card. With a 16e card, 2 ports are on one SAS core and the other 2 ports are on another core. If you criss-cross the ports, I have found it to have lower latency: with card ports cport1 through cport4 and two enclosures, wire cport1 to Enc1 port 1, cport3 to Enc1 port 2, cport2 to Enc2 port 1, and cport4 to Enc2 port 2 (see the sketch below). For daisy chaining, remember the inner ports are input and the outer ports are output.

As far as SAS drives: I only use SAS drives, as they read and write way faster and last a lot longer. SATA drives are cheaper, and the guts of the drives tend to be cheaper. Used SAS drives make far more sense and are helium filled. The HGST HUH721010AL4200/42C0 are 10TB helium-filled HGST drives; I own 90 of these. You can also reflash them with the newest firmware from the HDD Guru website. I see a listing for $70.00 a drive on eBay with free shipping.
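
For clarity, here is that criss-cross layout written out as a small sketch. The port-to-core grouping is my reading of the description above, and the names are just labels, not anything the OS reports:

```python
# The criss-cross cabling described above, one entry per cable.
# "cport" = HBA card port on the 9201-16e bracket; assuming ports
# 1-2 sit on one SAS core and ports 3-4 on the other, each
# enclosure gets one link from each core.
CABLING = {
    "cport1": ("enclosure 1", "port 1"),   # core A -> shelf 1
    "cport3": ("enclosure 1", "port 2"),   # core B -> shelf 1
    "cport2": ("enclosure 2", "port 1"),   # core A -> shelf 2
    "cport4": ("enclosure 2", "port 2"),   # core B -> shelf 2
}
for hba_port, (shelf, shelf_port) in CABLING.items():
    print(f"{hba_port} -> {shelf} {shelf_port}")
```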