EMC KTN-STL3 15 bay chassis


Koop

Well-Known Member
Jan 24, 2024
I'm not quite understanding the need for daisy chaining. Couldn't I just use a dual port card, connecting each to the top controller of each shelf and keep them separate?
Listen bucko that's how they told us to deploy it 10+ years ago and dang nabbit you're gonna do the same.

It's supposed to daisy chain off head units. I believe these shelves may have been used for multiple EMC products but the one I know is the VNX.

Like this. The big beefy boys at the bottom are SPA and SPB for a VNX. At the very bottom are batteries to safely shut down the SAN in case of a power event. Don't ask about the time I worked for a company that had me "refurbishing" these batteries, because that definitely never happened and I of course would never ship such a ticking time bomb to a customer.

Everything with a number would be your disk shelves. If cabled properly you should see the shelves in sequential order from bottom to top. We'd boot up SPA and SPB then daisy chain up while all shelves were powered on, and they would grab the numbers sequentially. If I recall, you could actually cable them in any order if you were a madman or really bad at maintaining any order in the world. When not connecting to a head unit like a VNX though, you've got me there... I would assume they just work as a JBOD or something. Probably don't display numbers, or all 0s or something. You silly homelab users!

[attached image]

 

Koop

Well-Known Member
Jan 24, 2024
Did some digging in old files... Not sure how applicable it is to variations of the shelves over time.

[attached image]

The VNX storage capacity can be expanded with three DAE options. The first option is a 2U DAE which can contain up to 25 2.5-inch 6 Gb SAS drives. The second option is a 3U DAE which can contain up to 15 3.5-inch 6 Gb SAS drives. Both of these DAEs may be installed in the same array and both offer high-efficiency power supplies, drive spin down, adaptive cooling, and ambient temperature reporting. Each DAE can contain a mixture of all drive types (Flash, SAS, and NL SAS).
The Disk Array Enclosures (DAE) are highly available, high performance, storage system
components that communicate with the disk drives via a 6 Gb Serial Attached SCSI (SAS)
interconnect interface. A DAE connects to another DAE or an SPE and is managed by the
storage system software.

The third DAE option is a 4U DAE which can contain up to 60 2.5-inch or 3.5-inch 6 Gb SAS drives. This DAE, like the others, offers high-efficiency power supplies, drive spin down, adaptive cooling, and ambient temperature reporting. Each DAE can contain a mixture of all drive types (Flash, SAS, and NL SAS).
Note that both the 15- and 25-drive DPEs can be used with the 4U 60-drive DAE. However, 25-
drive DAEs are not supported.

[attached images]

The rear view of a 15-drive DAE contains two LCC cards and two redundant power supply/cooling modules. Both the LCCs and PSUs are specific to the 3U, 15-drive DAE. Power Supply/Cooling Module A and LCC A are located on the bottom and Power Supply/Cooling Module B and LCC B are located on the top.
The LCCs and Power Supplies are locked into place using captive screws to ensure proper connection to the midplane. All DAE FRUs are hot swappable, but precautions must be taken to ensure non-disruptive operation. If a Power Supply is removed from the enclosure, the enclosure will shut down after two minutes. Be sure to download and run the Procedure Generator before removing or installing any components.
Shown in this slide is the rear view of a VNX5100 15-drive DAE.

On each of the LCCs, an LCC Enclosure ID is provided. This is a seven-segment LED decimal
number display. The LCC Enclosure ID appears on both LCCs (A and B) within an enclosure and
should always display the same Enclosure ID. The Enclosure ID is set during system boot. The
LCCs also have Power and Fault LEDs.
Each LCC includes a Bus (Loop) ID as well. This indicator includes two seven-segment LED
decimal number displays. The SP initializes the Bus ID when the operating system is loaded. The
LCC in a DAE connects to the Storage Processor and the other DAEs with twin-axial copper
cables in a daisy-chain (loop) topology. The LCC cable from the previous DAE LCC A or Storage
Processor A back-end SAS port is plugged into the LCC A input which is marked with double
circles. The LCC cable going to the next DAE LCC A in the same loop is plugged into the LCC A
output marked with double diamonds. The same is true for the LCC B input and output. All
back-end bus cables are keyed to prevent incorrect cabling.
Note that the DAE SPS Monitor jacks are not used by the VNX platform.

[attached image]

Shown here is a graphical representation of the SAS cabling in an SPE based VNX storage
system. The Storage Processors connect to the DAEs with twin-axial copper cables. The cables
connect LCCs in a storage system together in a daisy-chain or loop topology. The first DAE
connected to the Storage Processor output SAS Port will be designated Enclosure 0. Each DAE
connected after the first DAE will increment the enclosure number by one. All enclosures
connected to SAS Port 0 will display a Loop ID of 0.
Each LCC independently monitors the environmental status of the entire enclosure, using a FRU
monitor program. The monitor communicates status to the Storage Processor, which polls disk
enclosure status. Internally, each DAE LCC connects to drives in its enclosure in a point-to-point
fashion through a switch. For traffic from the system’s Storage Processors, the LCC switch
passes the input signal from the input port to the drive being accessed; the switch then
forwards the drive’s output signal to the output port, where cables connect it to the next DAE
in the loop. If the target drive is not in the LCC’s enclosure, the switch passes the input signal
directly to the output port. At the unconnected output port of the last LCC, the output signal
from the storage processor is looped back to the storage processor. LCC firmware also controls
the LCC port-bypass circuits and the disk-module status LEDs. You can add or replace an LCC
while the disk enclosure is powered up. A 6 Gb SAS I/O Module replacement requires an SP
shutdown.

[attached image]

Cabling between the DPE and optional DAEs uses SAS cables. The connectors on SAS cables
have icons indicating each end.
The VNX 5500, 5300, and 5100 SAS ports on the DPE are labeled 0 and 1. Port 0 is connected
internally to the SAS expander that connects all the internal DPE disks. Since Port 0 is already
connected internally to the DPE disks, the first DAE is connected to Port 1 to balance the load
on the SAS ports. The second DAE is connected to Port 0; the third DAE is connected to Port 1, and so forth.

[attached image]

Cabling between the SPE and DAEs uses SAS cables. The connectors on SAS cables have icons indicating each end.
For the VNX 7500 and VNX 5700 models, the I/O module in slot 0 of each SP has SAS ports labeled 0 and 1.
• SP A slot 0 Port 0 is connected to the SAS expander on DAE 0 LCC A.
• SP A slot 0 Port 1 is connected to the SAS expander on DAE 1 LCC A.
• SP B slot 0 Port 0 is connected to the SAS expander on DAE 0 LCC B.
• SP B slot 0 Port 1 is connected to the SAS expander on DAE 1 LCC B.



Probably a whole lot of meaningless info not really needed but hey, why not. Apologies if this stuff has already been shared.

VNX2 series documentation:

For the official record, file systems on VNX can blow me. Ask me about Isilon.

[attached images]
 


BrassFox

Member
Apr 23, 2023
Probably a whole lot of meaningless info not really needed
Yeah, pretty much; that's about it. Although it may be an interesting history lesson for anyone who hasn't already seen it. I've been down that rabbit hole already, and even tried pretty hard to follow the instructions and set up the "VNX Emulator," but I had zero luck. Dead links, no help from Dell, and if EMC is even available to ask I doubt they would help either. I recommend forgetting about all this before you waste a hundred hours like I have, and give up on trying to use these as they were originally intended. Just use them as described in here: as dumb disk shelves.

The main point of it all: without the VNX Head Units (whatever those might be) to control the shelves, plus the working EMC software (that's the kicker) spread across five drives in the VNX, which they will not replace even if you bought it.... it will not work.

Even if you do find one for sale intact, it is old and crappy tech in the processors anyway, and it will only work with the original software drives intact as they were sold, which are registered only to the original buyer, who paid a kidney to get them. You almost certainly will not get those in intact condition, and all of it is unsupported now anyway. So if by some miracle you got it all working, who knows if some kid in a basement is perusing all your stuff through some old software hole? You won't know it. Meanwhile, if the seller actually gave you his five drives intact, a swat team will be dispatched from EMC to come get you, and get him too, because supposedly any transfer of ownership is illegal. They don't call EMC the Evil Machine Company for nothing.

Just use them as dumb disk shelves, attached to an LSI card, and run the rig with Windows or Linux however you want. They are great for that, but no other use, including the originally-intended use, is actually available. They are heavy, dumb disk shelves with loud fans and lots of blinky lights, and they work just fine for that. It sure would be nice if EMC would relent and let out a set of drivers that let you see and control a few things, but alas. They're Evil. I may have finally found a successful hack for controlling the fans, though. I am still testing it.
 

Stephan

Well-Known Member
Apr 21, 2017
Are you saying these do not work with SCSI Enclosure Services (SCSI Enclosure Services - Wikipedia)? Tried it? Anything from Xyratex like NetApp shelves or the occasional HP 3PAR shelf works very nicely. No need for anything proprietary; you can do multipath and ZFS and eat the cake too.
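If anyone wants to check on a KTN-STL3, this is roughly how I'd poke at it from Linux — a sketch, assuming sg3_utils is installed; the /dev/sg4 below is a made-up example, find your own node with lsscsi:

# The shelf should show up as a separate "enclosu" type device:
lsscsi -g
# List which SES diagnostic pages the enclosure claims to support:
sg_ses /dev/sg4
# Dump the enclosure status page (fans, temps, PSUs, drive slots):
sg_ses --page=es /dev/sg4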
 

BrassFox

Member
Apr 23, 2023
I don't know anything about most of that. My experience with these is limited to the KTN-STL3 in dumb mode, and they are fine. What I am saying is that they will appear to the host server as a big list of individual hard drives. From that point, what you can do with them, or what you will do with them, is pretty much up to you. But I do not think you will ever get the original VNX "data mover features" and stuff as was intended for these by Evil Machines, nor do I think you even want that. But maybe it is awesome. Last time I looked for one I found exactly one for sale; it was a whole set of several shelves with (not great) drives and a VNX. Dude wanted $35k for the rig and he was in Great Britain. That's a long, expensive boat ride for some old, unsupported storage tech.
There is a pretty knowledgeable dude here who runs ZFS storage off these shelves and speaks very highly of them in that application. I want to try that for myself but have not, yet. Look for his posts a couple pages up; "FIlb..."-something is his name on here.
Multipath is hit or miss, reportedly. I have a mix of SAS and SATA right now, so I have not tried to run multipath. I would try multipath if I had all SAS drives, or maybe if all SAS on one shelf. I'm not really after speed, only stability, and they are fine for me as single path.
 

Koop

Well-Known Member
Jan 24, 2024
"VNX Emulator"
Oh yeah don't even bother.

if EMC is even available to ask I doubt they would help either.
Completely gone. Full Borg.

give up on trying to use these as they were originally intended. Just use them as described in here: as dumb disk shelves.
Yeah, completely not worth it even if you could figure it out. They do still make the product line though; it's Dell Unity now.

without the VNX Head Units (whatever those might be)
SPA/SPB - Storage Processor A and Storage Processor B. I mean... Maybe you could build one if you really tried? At least for these shelves to be "period accurate" haha.

EMC software (that's the kicker)
Unisphere is the software platform. History lesson: the product line started with CLARiiON, made by Data General in the 90s, then acquired by EMC. EMC makes the Celerra for NAS storage. EMC later comes out with the VNX as a "unified storage platform," basically combining CLARiiON and Celerra into one product. The file part of the product always sucked ass, so they buy Isilon Systems out of Seattle to sell the vastly superior scale-out NAS platform, Isilon. Dell buys EMC. The VNX eventually becomes the Dell EMC Unity product line. Nowadays it's the Unity XT, as Dell took all the EMC product solutions and made them work on PowerEdge hardware, basically.

Even if you do find one for sale intact, it is old and crappy tech in the processors anyway, and it will only work with the original software drives intact as they were sold, which are registered only to the original buyer, who paid a kidney to get them. You almost certainly will not get those in intact condition, and all of it is unsupported now anyway.
Wish you could've backed me up on the stupid Facebook group when I tried to explain to a guy that the VNXe sucks and should be thrown in the trash. He showed me the 4 sold 2TB SAS drives and said I was wrong... Then he blocked me because he didn't like my opinion.

****ing hate the VNXe. Shit product even when they made it. Wasted so many hours with that dogshit java UI and licensing nightmare.

a swat team will be dispatched from EMC to come get you, and get him too, because supposedly any transfer of ownership is illegal. They don't call EMC the Evil Machine Company for nothing.
lmao hey leave EMC alone they DEAD JIM. DEAD.:( - Also don't forget the best internal EMC slogan - "Everything Must Change" ... Why? Because why keep a good thing going. Don't forget EMC owned VMWare at one point. Thanks a lot Dell.

It sure would be nice if EMC would relent and let out a set of drivers that let you see and control a few things, but alas. They're Evil.

I may have finally found a successful hack for controlling the fans, though. I am still testing it.
Nice. Obviously for the record I just was sharing info in case... It somehow would help I dunno. Also just because I have it and it's fun to get nostalgic.

It's all good. Let's grab some Data ONTAP. What are we drinking these days? Full Sail... Guinness... Oh wait, these are internal code names, I didn't say nothin' lol

Do not buy a fully intact VNX. It's old as hell. If anything look at however Dell is running Unisphere OS these days. Pretty sure it's just SUSE.
 

BrassFox

Member
Apr 23, 2023
Thanks for the compliment of (I think) mistaking me for an old IT guy, but I'm an old builder guy (as in: I build buildings). With this stuff I am merely a hobbyist, playing with old tech that is now affordable enough. I pieced together what I do know (read: enough to be dangerous) from spending far too much time reading about it. I never even heard of a KTN-STL3 before two years ago, now I have three of them (two actively running) and they've been pretty flawless for me. My main motivation to learn about EMC VNX was born from a desire to chill out those damn PSU fans. These would be perfect, if not for 100+ watts of AVC blower fans per shelf, running 24/7, which aren't really needed. Not running that hard, I mean.

I had thought VNX = Unisphere, and the latter term was just what Dell renamed it after they bought the evil. Maybe I need to go read more, but really for me it is all about those fans anyway. I'm not trying to run their coveted "block level storage" and especially do not want to be married to any proprietary storage platform. As they are, in dummy mode, most any OS can use these shelves.

After trying to install smaller blowers (didn't work, it just knows, and errors out), and also wiring in custom adjustable voltage regulators/buck converters (doesn't work, it just knows), and also PWM modulators (it just knows, again...), I will be sticking to my EMC = Evil. Because, why?? There is what looks to be a serial interface in there that I could tap into, but without the right software I can't do much with it. I asked Dell support about that once, learning only that they somehow can express surely disapproving looks via email. That was a good trick. It would seem that they bought the evil with the EMC. It seems to me that the EMC power supply logic boards have both PWM/tach speed sensors and also current draw sensors, and will detect any of the usual fan shenanigans, and in response either shut off after an hour or two or crank all fans up to full blast: 2.4 amps x four blowers, per shelf. Sounds like a plane taking off, while mocking my sophomoric hack attempts. My latest efforts are about spoofing all of those sensors.

Heineken for me bro
 

Koop

Well-Known Member
Jan 24, 2024
Thanks for the compliment of (I think) mistaking me for an old IT guy
Well I suppose I am the old IT guy here. And compliments all around really.

All joking and kidding around, really. Just throwing random info out there. I am digging through old files I have and seeing if I can find any relevant nuggets of information that can help control the fans on these disk shelves... Will see what I can find.

EMC being called the Evil Machine Company is an oldie and a goodie. I heard that one for years haha.

Also, as far as being an old builder guy: nothing but respect. My father was an electrician and carpenter. Told me I should go learn computers though... So one career later and here I am, I guess.

I had thought VNX = Unisphere, and the latter term was just what Dell renamed it after they bought the evil. Maybe I need to go read more, but really for me it is all about those fans anyway. I'm not trying to run their coveted "block level storage" and especially do not want to be married to any proprietary storage platform. As they are, in dummy mode, most any OS can use these shelves.
Haha no no, no need to read up. I personally went through the whole history first hand and was just sharing.

especially do not want to be married to any proprietary storage platform.
Agreed!

I asked Dell support about that once
It'd take quite a bit of effort going up the chain of Dell support before you found anyone who had a clue honestly.

There is what looks to be a serial interface in there, that I could tap into
Ahh this is perhaps the service port? I am vaguely recalling this... There was always a back door/service port on the storage processors... But the disk shelves? I don't think so?... Hmm...

Heineken for me bro
NetApp is a storage company that was some of the biggest competition for EMC. Their code was/is called Data ONTAP. It's their proprietary OS. They name all the internal code updates after beers. I like Kirin Ichiban :D



If someone somehow finds an old fully intact Symmetrix array give me a call, I can help... I still have my notes. Let me know what the power bill looks like lol.
 

Koop

Well-Known Member
Jan 24, 2024
Hmm, I used to use Unisphere Service Manager. That was the software tool we used to do a ton of stuff... Including firmware updates to the LCCs/enclosures... But you needed a full VNX setup to log into using USM. Hmm... Same thing for accessing the service LAN connection on the Storage Processors... Absolutely had a CLI tool we used to check DAE environmentals... DAE fans are monitored and controlled by the SPs' BMC...



uemcli -d <SP address> -u Local/admin -p <password> /env/fan show -detail
uemcli -d <SP address> -u Local/admin -p <password> /env/dae show -detail


Using ipmitool via the service port
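From memory the SP's BMC spoke standard IPMI-over-LAN, so something like this is the shape of it — a sketch, with <BMC address>/<user>/<password> as placeholders, and no promises the DAE fans are even visible this way without a Storage Processor in the loop:

# List just the fan sensors the BMC knows about:
ipmitool -I lanplus -H <BMC address> -U <user> -P <password> sdr type Fan
# Or dump every sensor reading, fans and temps included:
ipmitool -I lanplus -H <BMC address> -U <user> -P <password> sensor list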


WEEE DISASSEMBLY INSTRUCTIONS for the KTN-STL3, KTN-STL4, and KTN-STLDCT Disk Array Enclosure

Ok enough digging for now, time for bed. Not sure if any of this crap is helpful but hey... Why not.
 

BrassFox

Member
Apr 23, 2023
Well I suppose I am the old IT guy here.
And to think that I was trying to tell you how it is.... Duh me.

I am digging through old files I have and seeing if I can find any relevant nuggets of information that can help control the fans on these disk shelves...
Dude you would be my hero if you found something that worked. I've spent a bazillion hours looking for a way to access fan control. Evil bastards.

Hmm, I used to use Unisphere Service Manager. That was the software tool we used to do a ton of stuff... Including firmware updates to the LCCs/enclosures... But you needed a full VNX setup to log into using USM. Hmm... Same thing for accessing the service LAN connection on the Storage Processors... Absolutely had a CLI tool we used to check DAE environmentals... DAE fans are monitored and controlled by the SPs' BMC...
Correct me if mistaken, but I think you are saying none of this stuff would actually work without the VNX attached, right?
I was trying to run the VNX or Unisphere emulators and then do something like this. My first attempt failed, and while I have a bunch more versions of the emulators downloaded, I stopped trying after the first failure because I saw something that made me believe it could never work. I don't remember what that was now though; it was a while ago.

The service port I mentioned is on the side of each PSU. You cannot access it when the PSU is slid into the shelf. But I could do a little soldering and run the wires out the back (similar to how I was rigging the fans), then through a serial-to-USB converter... which would all be pointless, since I have no way to talk to the damned thing.
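If I ever do tap it, the plan is the usual blind serial probe — a sketch, assuming a 3.3V TTL USB adapter showing up as /dev/ttyUSB0; the pinout, voltage, and baud rate of this port are all unknowns, so meter it before connecting anything:

# Try the common baud rates until something legible appears:
screen /dev/ttyUSB0 115200
screen /dev/ttyUSB0 38400
screen /dev/ttyUSB0 9600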
 

BrassFox

Member
Apr 23, 2023
if we had a vnx unit, that was working, could we sniff the conversation?
That would be beyond my skills. I would not want to discourage anyone from trying, but this seems basically rhetorical since I have not seen any post here saying they have a VNX, and if they did, the problem probably goes away for them, so why would they want to? Also, from what I have seen of EMC, I wonder if any of it would work; they seem to have set things up to actively detect and thwart any tinkering.

I had high hopes for the SpeedFan guy, who seems able to decipher any fan controller, but his stuff won't even see those fans either. Maybe someone (perhaps myself) should contact him and present the challenge; he seems like the type that would take it on.

I have messed around with Avago MegaRAID Storage Manager. This one is a little difficult to get running (at least it is on Windows, which is what I am messing with currently) due to Java requirements, but I figured it out and got it going. It is by far the easiest way (if not the only way) to see what disk interface speed was negotiated. I was surprised to learn that many SATA disks will negotiate to 1.5 Gb/s rather than 3.0 or 6.0, especially if the shelf is full. That was my initial reason for getting it (to check speeds), but it has other capabilities/settings that seem to be borked on my server, seemingly because I have been using generation 92xx LSI cards. Specifically: I have several 9206-16e cards. I don't think the app is made for those.
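For when I make the jump to Linux, there looks to be a lighter-weight way to check negotiated speeds per disk — a sketch, assuming smartmontools is installed and /dev/sda is one of the shelf disks (pick your own device):

# For SATA drives smartctl prints both the max and the currently negotiated link speed,
# e.g. "SATA Version is: SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)":
smartctl -i /dev/sda | grep -i sata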

However I recently grabbed a couple of LSI 93xx-generation cards, and maybe things will work on them. Not sure, and I haven't even plugged them in yet. Still need to re-paste, decide whether to make a new custom heatsink like I do for the 9206-16e's, then probably mill a little pocket into whichever heatsink I use for a thermistor as I have done before (it's very nice to just know your HBA processor's temps, and to set up a fan curve for it too), fab on cooling fans (I normally rig small GPU fans to the HBAs), and then finally re-flash their firmware with the latest stuff (never fun, nor easy).
After I get around to all that - I will give them a whirl. Perhaps AMRSM will talk to the fan controllers on them, but I doubt it.

I mean to make the leap into Linux/TrueNAS/ZFS soon anyway, so depending on what order the events go in, I might never find out whether this works. Probably does not work, anyway.
 

TechSSH

New Member
Jul 3, 2024
The 4 ports on the back are 2 inner ones that connect to the HBA and 2 outer ones used for daisy chaining. Unplug the disk shelf, put the drives in, then plug the power in. Then boot the server up. The channel ID display will blink forever; just ignore it. You need to be plugged into the circle, not the diamond. The diamond is for daisy chaining.
I am new to this; I just picked up an EMC KTN-STL3 (EMC Expansion Array Jbod Disk Array Shelf W/ 15x SAS SATA Trays * Great for CHIA | eBay) from eBay, plus 2x LSI expansion array cards (LSI Server Expansion Array JBOD PCIe Dual Port SFF-8088 6Gbs SAS Card Chia Coin | eBay) and 4x Mini-SAS SFF-8088 to SFF-8088 cables (EMC Amphenol Mini-SAS SFF-8088 to SFF-8088 Molex 2 Meter Cable Black 038-003-787 | eBay) to connect to my Dell R730xd server.


I tested with a 4-terabyte SATA drive in tray one and it works. I connected the KTN-STL3 (I believe via the bottom inner circle port) with a single SFF-8088 cable to the HBA card. Is this the way I am supposed to connect it?

After reading this, I believe I bought too many cards and cables. So it sounds like only one cable, on the bottom circle port, is needed? Does the upper circle port not need connecting to my HBA card, or is it just for redundancy, with no speed increase if connected? Is the total data throughput for the whole single chassis 6Gbps?


Sorry, I am new to this and don't understand how multipath and the speed increase or redundancy are supposed to work. Any explanation would be appreciated.
 

BusError

Member
Jul 17, 2024
(First post here!)

I also got one of these recently, around £200 with 15x 3TB disks included! Didn't need it, therefore bought it :) -- I added a Dell 0T93GD HBA card to my R7910 server -- mostly because it looks like it is a 9300 but with a bigger heatsink than most.

The array works fine; I haven't managed to get multipath working though. lsscsi lists the disks twice, but that seems to confuse mdraid quite a bit. The 'multipath' module and tools didn't work for me. Perhaps I should have started with these BEFORE I created the mdraid arrays.

Otherwise a nice piece of kit. I only use it with one power cable and one SAS cable for the moment; I don't think I can saturate 6Gb/s with my current config anyway.
 

nexox

Well-Known Member
May 3, 2023
Perhaps I should have started with these BEFORE I created the mdraid arrays.
Yeah, you want to build the arrays out of the multipath devices, not the lower-level /dev/sdX devices, and then, critically, you need to configure md to only reassemble the array using those devices. I eventually gave up on my experiments because I decided to switch to a chassis with built-in hot swap bays. Before that, I paused my multipath experiments because md kept claiming the sdX devices as soon as they were enumerated, blocking dm-multipath from setting them up correctly, and I didn't feel like messing with it in the brief time windows when I could run the loud array by my desk where I was playing with it.
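For anyone retrying it, the ordering looks like this — a rough sketch, assuming Debian-ish tooling and that multipath names the shelf disks mpatha, mpathb, and so on; untested against a KTN-STL3:

# 1. Set up dm-multipath first, so each physical disk gets one /dev/mapper node:
apt install multipath-tools
multipath -ll    # each drive should list two paths, one per shelf controller
# 2. Build the array on the multipath nodes, never the raw sdX paths:
mdadm --create /dev/md0 --level=6 --raid-devices=15 /dev/mapper/mpath[a-o]
# 3. Pin md to those nodes in /etc/mdadm/mdadm.conf so it can't grab sdX at boot:
#    DEVICE /dev/mapper/mpath*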
 

StrafeBink

New Member
Aug 6, 2024
I've recently acquired two EMC KTN-STL3 drive shelves which I intend to connect to a mATX motherboard to run the maximum 30 drive limit on Unraid.

I'll be using SATA drives so intend to only run a single cable from each EMC drive shelf to a single LSI card on the Unraid machine.

The motherboard and case supports a low profile PCIe 4.0 x16 card.

Do I need a 12Gb LSI card as each EMC runs 6Gb? Looking for suggestions on a particular LSI 8e card for my use case, as I've never used drive shelves (my existing Unraid servers use 8i and 16i cards wired to case backplanes).

Appreciate any suggestions on what model LSI card I should purchase.
 

nexox

Well-Known Member
May 3, 2023
Do I need a 12Gb LSI card as each EMC runs 6Gb?
Those rates are per lane; an SFF-8088 port carries four lanes, so a 6G HBA with 8 external lanes is all you need for full bandwidth, 24 Gb/s per shelf uplink. But at this point there's no reason not to just buy a newer 12G HBA, since the prices have dropped close enough. The LSI 9300-8e is probably the way to go; there may be OEM-branded cards with the same controller available slightly cheaper, but for $30 it might not be worth the effort of researching that. You will need cables with the correct ends to connect 12G on one side and 6G on the other, but they're pretty much the same price as cables that do 6G on both ends.
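If you want to sanity-check the lane math on Linux, the SAS HBA drivers (mpt2sas/mpt3sas) expose each phy in sysfs — a sketch, nothing shelf-specific assumed:

# Each SAS phy is one lane; count them and read the per-lane negotiated rate:
grep . /sys/class/sas_phy/phy-*/negotiated_linkrate
# 4 lanes x 6 Gb/s = 24 Gb/s per SFF-8088 cable, before any daisy chaining.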
 

roberth58

Member
Nov 5, 2014
Unlike most, I want to speed up the fans, not slow them down. I have a KTN-STL3 full of 6TB rust, and the drives are running at 44-45C; the drives in an SC846 in the same rack run 35-36C. Is there a way to raise the fan speed, apart from pulling one power supply so the other goes into full afterburner?